CN113658100A - Three-dimensional target object detection method and device, electronic equipment and storage medium - Google Patents
Three-dimensional target object detection method and device, electronic equipment and storage medium
- Publication number: CN113658100A
- Application number: CN202110807099.6A
- Authority
- CN
- China
- Prior art keywords: feature, point cloud, data, neural network, point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06N3/045: Neural networks; architecture, e.g. interconnection topology; combinations of networks
- G06N3/08: Neural networks; learning methods
- G06T2207/10028: Image acquisition modality; range image; depth image; 3D point clouds
- G06T2207/20081: Special algorithmic details; training; learning
Abstract
The invention provides a three-dimensional target object detection method and device, an electronic device and a storage medium, belonging to the technical field of point cloud data processing. The method comprises the following steps: acquiring three-dimensional point cloud data associated with at least one three-dimensional target object; extracting features from the three-dimensional point cloud data to obtain original feature data, and position-coding the three-dimensional point cloud data to obtain position coding information; fusing the position coding information and the original feature data to obtain point cloud features, and inputting the point cloud features into a neural network model based on a self-attention mechanism to obtain first feature data output by that model; and inputting the first feature data into a feedforward neural network model to obtain second feature data output by the feedforward neural network model, the second feature data indicating attribute information of the detected at least one three-dimensional target object. The three-dimensional target detection method, device, electronic device and storage medium are simple and convenient to implement and provide good detection performance.
Description
Technical Field
The invention relates to the technical field of point cloud data processing, in particular to a three-dimensional target object detection method and device, electronic equipment and a storage medium.
Background
With the development of three-dimensional data acquisition technology, the three-dimensional data acquired by three-dimensional sensors can provide rich geometric, shape and scale information. Three-dimensional data is usually represented in different formats; for example, the point cloud data output by a laser radar retains the original geometric and position information of the target object in three-dimensional space without any discretization, so point cloud data is the preferred three-dimensional data representation format.
In current three-dimensional object detection methods, the prior art generally adopts a pure point cloud bird's-eye view (BEV) approach: a BEV is generated from the point cloud data, and the target object is then detected by feature extraction on the BEV. However, the above prior art solutions have the following problems:
(1) generating BEVs based on point cloud data may result in data loss, leading to reduced detection performance;
(2) original characteristics of point cloud data, such as rotation invariance and unordered structure, are not considered;
(3) the anchor (i.e. the candidate box) needs to be set in advance, which causes the subsequent processing to be time-consuming and to affect the detection performance of multiple classes.
Disclosure of Invention
The invention provides a three-dimensional target object detection method and device, an electronic device and a storage medium, which solve the problems of the pure point cloud bird's-eye view method in the prior art, are simple to implement, and provide good detection performance.
The invention provides a three-dimensional target object detection method, which comprises the following steps:
acquiring three-dimensional point cloud data associated with at least one three-dimensional target object;
extracting features of the three-dimensional point cloud data to obtain original feature data, and carrying out position coding on the three-dimensional point cloud data to obtain position coding information;
fusing the position coding information and the original characteristic data to obtain point cloud characteristics, and inputting the point cloud characteristics into a neural network model based on an attention mechanism to obtain first characteristic data output by the neural network model, wherein the first characteristic data is used for representing characteristics of a target object corresponding to the point cloud data;
inputting the first feature data into a feedforward neural network model to obtain second feature data output by the feedforward neural network model, the second feature data indicating attribute information of the detected at least one three-dimensional target object.
Illustratively, in the three-dimensional target object detection method of the present invention, the three-dimensional point cloud data is point cloud data output by a laser radar, and the attribute information of the three-dimensional target object includes at least a spatial size and position information of the three-dimensional target object.
Illustratively, in the method for detecting a three-dimensional target object of the present invention, the step of extracting features from the three-dimensional point cloud data to obtain raw feature data includes:
taking the full set of the three-dimensional point cloud data as the initially input feature point set, executing a multi-stage feature extraction operation, and taking the feature point set output by the last stage of feature extraction as the original feature data, wherein the feature point set output by each stage of feature extraction serves as the input of the next stage, and each stage of feature extraction comprises the following steps:
selecting a plurality of points from a full set of the input feature point set, each of the points defining a centroid of a local area in which the point is located;
for each centroid, constructing a local area point set based on adjacent points near the centroid;
and coding each local region point set to obtain a characteristic point corresponding to each local region point set, wherein all the characteristic points form an output characteristic point set.
Illustratively, in the three-dimensional target object detection method according to the present invention, the step of selecting a plurality of points from the full set of the input feature point set includes:
selecting, from the full set {x1, x2, ..., xn} of the input feature point set, a subset {x_i1, x_i2, ..., x_im} by using an iterative farthest point sampling algorithm.
Illustratively, in the three-dimensional target object detection method according to the present invention, the step of constructing, for each of the centroids, a local region point set based on neighboring points near the centroid includes:
obtaining N′ × K × (d + C) feature points based on the input feature point set of size N × (d + C) and the centroid set of size N′ × d, wherein the feature points form N′ local area point sets;
wherein N represents the number of point cloud data, N′ represents the number of local areas, C represents the feature dimension, d represents the coordinate dimension, each local area point set corresponds to one local area, and K represents the number of neighboring points of each centroid.
Illustratively, in the method for detecting a three-dimensional target object according to the present invention, the step of encoding each local region point set to obtain a feature point corresponding to each local region point set includes:
obtaining an output feature point set containing N′ × (d + C′) feature points, based on the N′ input local area point sets containing N′ × K × (d + C) feature points, by encoding the centroid and its neighboring points contained in each local area point set so as to abstract the local feature of the local area indicated by that point set;
wherein N′ represents the number of local areas, K represents the number of neighboring points of each centroid, d represents the coordinate dimension, C represents the feature dimension, and C′ represents the new local feature dimension, with C′ > C.
Illustratively, in the three-dimensional target object detection method of the present invention, the step of performing position coding on the three-dimensional point cloud data to obtain position coding information includes:
acquiring coordinates of a preset key point;
subtracting the coordinates of the preset key points from the position coordinates of each point of the three-dimensional point cloud data to obtain the relative coordinates of each point of the three-dimensional point cloud data;
generating the position-coding information based on the relative coordinates.
Illustratively, in the method for detecting a three-dimensional target object according to the present invention, the step of fusing the position-coding information and the original feature data to obtain a point cloud feature includes:
mapping the coordinates of each point of the three-dimensional point cloud data to the characteristic dimension of the original characteristic data;
and adding the generated position coding information to the original characteristic data through a feedforward fully-connected neural network to obtain the point cloud characteristic.
Illustratively, in the three-dimensional target object detection method of the present invention, the step of inputting the point cloud feature into a neural network model based on a self-attention mechanism to obtain first feature data of an output thereof includes:
inputting the point cloud features to a multi-head self-attention layer of an encoder of the neural network model based on the self-attention mechanism, and processing the point cloud features by a multi-head self-attention function arranged in the multi-head self-attention layer to obtain self-attention features;
inputting the self-attention feature to a decoder of the self-attention mechanism-based neural network model for decoding to output the first feature data;
the neural network model based on the self-attention mechanism comprises an encoder and a decoder, wherein the encoder comprises a plurality of encoder layers, each encoder layer comprises a first sublayer and a second sublayer, each first sublayer is a multi-head self-attention layer, each second sublayer is a feedforward fully-connected neural network, and each encoder layer encodes the point cloud features in parallel.
Exemplarily, in the three-dimensional target object detection method according to the present invention, the step of inputting the point cloud feature into a neural network model based on a self-attention mechanism to obtain first feature data output by the neural network model further includes:
a multi-head attention layer of a decoder of the neural network model based on the self-attention mechanism receives the self-attention feature and the position-encoded information; and
decoding the self-attention feature based on the self-attention feature and the position encoding information to output the first feature data;
the decoder comprises a plurality of decoder layers, each decoder layer comprises a first sublayer, a second sublayer and a third sublayer, the first sublayer is a multi-head self-attention layer, the second sublayer is a multi-head attention layer, the third sublayer is a feedforward full-connection neural network, and each decoder layer decodes the self-attention feature in parallel.
Illustratively, in the three-dimensional target object detection method according to the present invention, the step of inputting the first feature data into a feedforward neural network model to obtain second feature data output by the feedforward neural network model includes:
determining, from the input first feature data, center coordinates, length, width, height, and angle parameters of a prediction box associated with the first feature data via the feedforward neural network model;
determining a category label of a three-dimensional target object associated with the prediction frame by using a preset function based on the central coordinate, length, width, height and angle parameters of the prediction frame, and outputting the size, position and category label of each prediction frame;
wherein the size, position, and class label of each prediction box constitute the second feature data, and the class label includes a special class label indicating that no three-dimensional target object is detected.
Illustratively, the three-dimensional target object detection method of the present invention further includes: training the first and second neural network models using three-dimensional point cloud data of known three-dimensional target objects before the self-attention-mechanism-based neural network model and the feedforward neural network model perform three-dimensional target object detection.
Illustratively, the three-dimensional target object detection method of the present invention further includes:
before the multi-head attention layer of the decoder of the self-attention-mechanism-based neural network model receives the self-attention features and the position coding information, the multi-head self-attention layer of that decoder receives input preset target parameters, the preset target parameters being used to limit the number of second feature data outputs.
The present invention also provides a three-dimensional target object detection apparatus, including:
the feature extraction and position coding module is used for acquiring three-dimensional point cloud data associated with at least one three-dimensional target object, then extracting features from the three-dimensional point cloud data to obtain original feature data, position-coding the three-dimensional point cloud data to obtain position coding information, and fusing the position coding information and the original feature data to obtain point cloud features;
a first processing module, configured to input the point cloud feature into a neural network model based on a self-attention mechanism to obtain first feature data output by the neural network model, where the first feature data is used to represent a feature of a target object corresponding to the point cloud data;
and the second processing module is used for inputting the first characteristic data into the feedforward neural network model to obtain second characteristic data output by the feedforward neural network model, and the second characteristic data indicates the detected attribute information of the at least one three-dimensional target object.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and operable on the processor, wherein the processor implements any of the steps of the three-dimensional target object detection method described above when executing the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any one of the three-dimensional target object detection methods described above.
According to the three-dimensional target object detection method and device, electronic device and storage medium of the present invention, the point cloud is processed by a point cloud feature network that takes the position information of the points into account; the point cloud features are encoded and decoded by the introduced self-attention network; and the detection information of the position and class of each target object is finally output by a feed-forward neural network (FFN) to determine the final predicted targets. The design of anchors (candidate boxes) is thus avoided, so the method is simple and convenient to implement and provides good detection performance.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a three-dimensional target object detection method provided by the present invention;
FIG. 2 is a schematic flow chart of feature extraction for point clouds according to the present invention;
FIG. 3 is a schematic diagram of the feature extraction method proposed in the present invention;
fig. 4 is a schematic flowchart of the location encoding of the point cloud provided by the present invention;
FIG. 5 is a schematic flow chart of encoding a point cloud feature provided by the present invention;
FIG. 6 is a schematic flowchart of decoding a point cloud feature according to the present invention;
FIG. 7 is a schematic diagram of a self-attention network provided by the present invention;
FIG. 8 is a schematic diagram of a multi-headed self-attentive force mechanism provided by the present invention;
FIG. 9 is a schematic flow chart illustrating the steps provided by the present invention for inputting first feature data into a second neural network model to obtain second feature data;
FIG. 10 is a schematic structural diagram of a three-dimensional target object detection apparatus provided by the present invention;
FIG. 11 is a schematic diagram of an exemplary three-dimensional target object detection architecture provided by the present invention;
fig. 12 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and in the claims, and in the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein.
The three-dimensional target object detection methods in the prior art are pure point cloud bird's-eye view based methods, such as VoxelNet (a voxel 3D detection network), SECOND (a voxel 3D detection network using sparse convolution), PointPillars (a voxel 3D detection network performing only 2D grid division), and the like. All of these methods first render the point cloud into a bird's-eye view (BEV), then design a deep learning network on the BEV representation, and finally detect or predict the target.
In order to solve the problems caused by the pure point cloud bird's-eye view method in the prior art, namely data loss, reduced detection performance, and disregard of the original characteristics of point clouds (such as rotation invariance and unordered structure), the invention provides a three-dimensional target object detection method and device, an electronic device and a storage medium.
The following describes a three-dimensional target object detection method, apparatus, electronic device and storage medium proposed by the present invention with reference to fig. 1 to 12.
Fig. 1 is a schematic flow chart of the three-dimensional target object detection method provided by the present invention. As shown in fig. 1, the three-dimensional target object detection method includes:
Step 101, acquiring three-dimensional point cloud data associated with at least one three-dimensional target object.
Optionally, the three-dimensional point cloud data is point cloud data output by a laser radar.
Step 102, extracting features from the three-dimensional point cloud data to obtain original feature data, and position-coding the three-dimensional point cloud data to obtain position coding information.
Step 103, fusing the position coding information and the original feature data to obtain point cloud features, and inputting the point cloud features into a first neural network model to obtain first feature data output by the first neural network model.
Optionally, the first neural network model is a neural network model based on a self-attention mechanism.
Step 104, inputting the first feature data into a second neural network model to obtain second feature data output by the second neural network model, wherein the second feature data indicates attribute information of the detected at least one three-dimensional target object.
Optionally, the second neural network model is a feedforward neural network model.
Optionally, the attribute information of the three-dimensional target object at least includes a spatial size and position information of the three-dimensional target object.
The steps 102 to 104 are described in detail below.
Fig. 2 is a schematic flow chart of feature extraction performed on the point cloud according to the present invention, and fig. 3 is a schematic diagram of the principle of the feature extraction method. As shown in figs. 2 and 3, the three-dimensional target object detection method of the present invention uses a PointNet++ network model for feature extraction. The PointNet++ network model extracts features hierarchically: the feature extraction network model is composed of a series of set abstraction layers, and each set abstraction layer consists of three key layers: a sampling layer, a grouping layer, and a feature extraction layer (PointNet).
For example, if the input of one set abstraction layer is a feature point set of size N × (d + C) (i.e., N points with d-dimensional coordinates and C-dimensional point features), the output of that layer is a feature point set of size N′ × (d + C′), where the coordinate dimension d is unchanged.
Specifically, in the step 102, the step of performing feature extraction on the three-dimensional point cloud data to obtain original feature data includes:
taking the full set of the three-dimensional point cloud data as the initially input feature point set, executing a multi-stage feature extraction operation, and taking the feature point set output by the last stage of feature extraction as the original feature data, wherein the feature point set output by each stage of feature extraction serves as the input of the next stage, and each stage of feature extraction comprises the following steps:
Specifically, by using the iterative farthest point sampling (FPS) algorithm, a subset {x_i1, x_i2, ..., x_im} is selected from the full set {x1, x2, ..., xn} of the input feature point set, such that each x_ij is the point farthest, among the remaining points, from the already selected set {x_i1, x_i2, ..., x_i(j-1)}.
The FPS algorithm is a sampling algorithm, and can ensure uniform sampling of samples, namely, can better cover the whole sampling space.
The principle of the FPS algorithm is: randomly select a point, then select the point farthest from it and add that point to the selected set, and continue iterating until the required number of points is selected.
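As an illustrative aid (not part of the patent text), a minimal NumPy sketch of the FPS iteration described above follows; the function name and array shapes are assumptions for illustration:

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, m: int) -> np.ndarray:
    """Iteratively select m indices from points (shape (N, 3)) so that each
    newly picked point is the one farthest from the already selected set."""
    n = points.shape[0]
    selected = np.zeros(m, dtype=np.int64)
    dist = np.full(n, np.inf)                    # distance to the selected set
    selected[0] = np.random.randint(n)           # random starting point
    for i in range(1, m):
        # Update each point's distance to the nearest already-selected point.
        diff = points - points[selected[i - 1]]
        dist = np.minimum(dist, np.einsum('ij,ij->i', diff, diff))
        selected[i] = int(np.argmax(dist))       # farthest remaining point
    return selected
```

Because each step maximizes the minimum distance to the chosen set, the samples spread uniformly and cover the whole sampling space, as noted above.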
Specifically, based on the input feature point set of size N × (d + C) and the centroid set of size N′ × d, N′ × K × (d + C) feature points are obtained, which constitute N′ local area point sets.
Wherein N represents the number of point cloud data, N′ represents the number of local areas, C represents the feature dimension, d represents the coordinate dimension, each local area point set corresponds to one local area, and K represents the number of neighboring points of each centroid.
Optionally, the sampling of the point cloud data uses a centroid as the representative sample, calculated as the average of all data points in the same cluster.
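The grouping step can be sketched as follows; this illustrative NumPy version uses k-nearest neighbors around each centroid (a ball query is another common choice in PointNet++-style networks), and all names and shapes are assumptions:

```python
import numpy as np

def group_local_regions(points, feats, centroid_idx, k):
    """Build N' local area point sets of K neighbors around each centroid,
    with coordinates expressed relative to the centroid (cf. step 203).
    points: (N, d) coordinates, feats: (N, C) features, centroid_idx: (N',)."""
    centroids = points[centroid_idx]                         # (N', d)
    # Pairwise squared distances between centroids and all points.
    d2 = ((centroids[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]                      # (N', K) neighbor indices
    local_xyz = points[knn] - centroids[:, None, :]          # relative coordinates
    local_feat = feats[knn]                                  # (N', K, C)
    return np.concatenate([local_xyz, local_feat], axis=-1)  # (N', K, d + C)
```

The returned array matches the N′ × K × (d + C) layout described above and would be fed to the PointNet feature extraction layer.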
Step 203, encoding each local area point set to obtain a feature point corresponding to each local area point set, wherein all the feature points form the output feature point set.
Specifically, based on the N′ input local area point sets containing N′ × K × (d + C) feature points, and by encoding the centroid and its neighboring points contained in each local area point set to abstract the local feature of the local area indicated by that point set, an output feature point set containing N′ × (d + C′) feature points is obtained. N′ corresponds to N1 in fig. 2.
Wherein N′ represents the number of local areas, K represents the number of neighboring points of each centroid, d represents the coordinate dimension, C represents the feature dimension, and C′ represents the new local feature dimension (C′ > C): each local area is abstracted by its centroid and the local features that encode its neighborhood. Because K may vary between groups, the feature extraction layer can convert a flexible number of points into a fixed-length local area feature vector. C′ corresponds to C1 in fig. 2.
Specifically, in step 203, the coordinates of the K points in each local area are converted into coordinates relative to the area's center point and used as the input of the feature extraction layer PointNet (as shown in fig. 3) to obtain the local features.
The feature extraction method shown in fig. 3 has the advantages of being able to extract features from unordered point clouds and of being efficient. By extracting the more important points from the dense point cloud data as the center points of the local areas, the center points obtained at each layer are a subset of the center points of the previous layer; as the layers get deeper, the center points become fewer but each carries more and more information, which prevents the information loss caused by sampling the point cloud data. Moreover, determining a neighborhood range (i.e., a local area) for each center point mitigates the problem of uneven sampling density during point cloud data acquisition.
Fig. 4 is a schematic flow chart of the position coding of the point cloud provided by the present invention, as shown in fig. 4. In step 102, the step of position-coding the three-dimensional point cloud data to obtain the position coding information includes:
Step 401, acquiring the coordinates of a preset key point.
Step 402, subtracting the coordinates of the preset key point from the position coordinates of each point of the three-dimensional point cloud data to obtain the relative coordinates of each point of the three-dimensional point cloud data.
Step 403, generating the position coding information based on the relative coordinates.
Alternatively, the present invention may map the relative coordinates of each point onto the raw feature data through a feed forward neural network (FFN), and then add the generated position-coding information to the point cloud features.
Further, in step 103, the fusing the position coding information and the original feature data to obtain the point cloud feature includes:
mapping the coordinates of each point of the three-dimensional point cloud data to the characteristic dimension of the original characteristic data;
and adding the generated position coding information to the original characteristic data through a feedforward fully-connected neural network to obtain the point cloud characteristic.
The position coding approach adopted by the invention has the advantages of simplicity and effectiveness: the coordinates of each point are mapped to the feature dimensions and the generated position codes are added to the point cloud features; the key point coordinates are subtracted from all positions so that relative positions serve as the input of the position coding; and a simple fully-connected network maps the relative coordinates to the feature dimensions.
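A minimal PyTorch sketch of this position coding, under the assumption that a two-layer fully-connected network maps the d = 3 relative coordinates to the feature dimension (class and argument names are illustrative, not the patent's implementation):

```python
import torch
import torch.nn as nn

class RelativePositionEncoding(nn.Module):
    """Map relative coordinates (point minus key point) to the feature
    dimension with a small fully-connected network, then add to features."""
    def __init__(self, feat_dim: int, d: int = 3):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim))

    def forward(self, coords, key_point, feats):
        rel = coords - key_point       # relative coordinates per point
        return feats + self.mlp(rel)   # fuse the position code into the features
```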
Fig. 5 is a schematic flow chart of encoding the point cloud features provided by the present invention, as shown in fig. 5. In step 103, the step of inputting the point cloud features into the first neural network model to obtain the first feature data output by the first neural network model includes:
Step 501, inputting the point cloud features into the multi-head self-attention layer of the encoder of the self-attention-mechanism-based neural network model, and processing the point cloud features by the multi-head self-attention function of that layer to obtain the self-attention features.
Step 502, inputting the self-attention features into the decoder of the self-attention-mechanism-based neural network model for decoding, so as to output the first feature data.
The neural network model based on the self-attention mechanism comprises an encoder and a decoder, wherein the encoder comprises a plurality of encoder layers, each encoder layer comprises a first sublayer and a second sublayer, each first sublayer is a multi-head self-attention layer, each second sublayer is a feedforward fully-connected neural network, and each encoder layer encodes the point cloud features in parallel.
Optionally, the multi-head self-attention layer includes a plurality of single-head self-attention structures, and the self-attention function corresponding to each single-head self-attention structure can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values and output are all vectors. Illustratively, the output is calculated as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. The self-attention function corresponding to a single-head self-attention structure is calculated by:

Attention(Q, K, V) = softmax(Q Kᵀ / √d_k) V

where Q, K and V are the matrices of queries, keys and values respectively, and d_k is the dimension of a key.
That is, the self-attention scores are obtained by taking the dot product of the query with each key; a preset function (such as the softmax function) normalizes these scores into weights; each weight is then multiplied by its corresponding value, and the results are summed to obtain the output value.
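The single-head computation just described corresponds to the following sketch (illustrative, not the patent's implementation):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q Kᵀ / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # query-key dot products
    weights = F.softmax(scores, dim=-1)             # normalized score values
    return weights @ v                              # weighted sum of the values
```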
The present invention uses a multi-head self-attention mechanism, since it is more effective to linearly project the queries, keys and values h times with different linear projections than to use a single self-attention function. On each projected version of the queries, keys and values, the self-attention function is executed in parallel, producing d_v-dimensional output values; these are concatenated and projected once more to produce the final value. The multi-head self-attention function is calculated by:

MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O, where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)

Specifically, the multi-head self-attention mechanism is shown in fig. 8. As can be seen from the figure, V, K and Q are each a single fixed input, while there are 3 Linear layers and 3 Scaled Dot-Product Attention blocks, i.e., h = 3 heads; finally a Concat (splicing) operation joins the heads, and a Linear layer converts the result into an output value of the same size as a single head, similar to an ensemble. Multi-head attention differs from single-head attention in that multiple single heads are replicated with different weight coefficients; by analogy with several identical neural network models whose different initializations lead to different weights, the results are then integrated.
From the multi-head self-attention function above: the inputs of the attention function are changed from the original Q, K and V to Q W_i^Q, K W_i^K and V W_i^V, i.e., all three projection matrices W differ per head; the original 512-dimensional Q, K and V are projected to 64 dimensions each (assuming h = 8 heads); the head outputs are then spliced back into 512 dimensions and linearly transformed by W^O, yielding the final multi-head self-attention value. That is, the multi-head self-attention value is computed by several independent attention heads.
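A hedged PyTorch sketch of the project-attend-concatenate-project scheme described above, with the assumed sizes d_model = 512 and h = 8 (class name and shapes are illustrative):

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """h parallel heads: project Q, K, V with per-head matrices, run scaled
    dot-product attention, concatenate, then apply the output projection W^O."""
    def __init__(self, d_model: int = 512, h: int = 8):
        super().__init__()
        assert d_model % h == 0
        self.h, self.d_head = h, d_model // h        # e.g. 512 / 8 = 64
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)       # W^O

    def forward(self, x):                            # x: (B, N, d_model)
        B, N, _ = x.shape
        split = lambda t: t.view(B, N, self.h, self.d_head).transpose(1, 2)
        q, k, v = split(self.w_q(x)), split(self.w_k(x)), split(self.w_v(x))
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        out = scores.softmax(dim=-1) @ v             # (B, h, N, d_head)
        out = out.transpose(1, 2).reshape(B, N, -1)  # concatenate the heads
        return self.w_o(out)                         # final linear transform
```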
It should be noted that the multi-head self-attention layer (self-attention) operates on its own input, while the multi-head attention layer (cross-attention) is influenced by other inputs acting as weights.
Fig. 7 illustrates how the position coding information is delivered to each multi-head self-attention layer of the present invention. Illustratively, the point cloud features from the feature extraction network model are encoded by the self-attention encoder, while the position coding information is also added to the keys and queries of each multi-head self-attention layer.
Optionally, each sub-layer of each encoder layer employs a residual connection (skip connection), followed by feature addition and layer normalization (Layer Norm), i.e., the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself.
It should be understood that the idea of a residual connection is to express the output as the linear superposition of the input and a non-linear transformation of the input. For example, let a non-linear transformation function describe the input and output of a network, i.e., the input is X and the output is F(X), where F includes operations such as convolution and activation. When the input is added to the output of the function, the input-output relation can still be described by a function G(X), but G(X) can be split into the linear superposition of F(X) and X.
Among them, four normalization methods are commonly used in deep learning: GroupNorm (GN), LayerNorm (LN), InstanceNorm (IN), and BatchNorm (BN). The neural network model based on the self-attention mechanism adopts a layer normalization LayerNorm (LN) mode.
Alternatively, the neural network model based on the self-attention mechanism used in the present invention may also use other normalization methods. Batch normalization (BN) normalizes a single neuron across different data samples, while layer normalization (LN) normalizes a single data sample across all neurons of a given layer, i.e., LN normalizes over all neurons of an intermediate layer. The advantage of LN is that it keeps the distribution of each layer's input data relatively stable, speeding up model processing and learning.
Optionally, the calculation procedure of layer normalization LN includes: first, compute the mean; then, compute the variance; then, normalize to zero mean and unit variance; and finally, perform a scale-and-shift reconstruction to recover the distribution that the layer has learned.
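For illustration, the four LN steps above can be written directly (a sketch; in practice torch.nn.LayerNorm implements the same computation):

```python
import torch

def layer_norm(x, gamma, beta, eps=1e-5):
    """LN over the last (feature) dimension: mean, variance, normalize to
    zero mean / unit variance, then scale-and-shift to restore a learned
    distribution. In the encoder this would be used as
    layer_norm(x + sublayer(x), gamma, beta)."""
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, keepdim=True, unbiased=False)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma * x_hat + beta    # learned reconstruction parameters
```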
Fig. 6 is a schematic flow chart of decoding the point cloud features provided by the present invention, as shown in fig. 6. In step 103, the step of inputting the point cloud features into the first neural network model to obtain the first feature data output by the first neural network model further includes:
Step 601, the multi-head self-attention layer of the decoder of the self-attention-mechanism-based neural network model receives input preset target parameters.
Step 602, the multi-head attention layer of the decoder receives the self-attention features and the position coding information, and decodes the self-attention features based on them to output the first feature data.
The preset target parameters are used to limit the number of output class predictions. A prediction target is output for the input point cloud data; many prediction targets may be output, but not all of them are necessarily needed by the user, so the number of class predictions output by the feed-forward neural network (FFN) can be limited by setting the input preset target parameters.
As shown in fig. 7, the self-attention decoder includes a plurality of decoder layers, each decoder layer includes three sublayers, namely a first sublayer, a second sublayer and a third sublayer, the first sublayer is a multi-head self-attention layer, the second sublayer is a multi-head attention layer, the third sublayer is a feed-forward neural network (FFN), and each decoder layer can decode a plurality of target objects in parallel.
That is, besides two sub-layers identical to those of each encoder layer, the self-attention decoder is additionally provided with a third sub-layer, a multi-head attention layer, which performs multi-head attention over the output of the encoder stack. Similar to the self-attention encoder, each sub-layer of the self-attention decoder uses a residual connection followed by feature addition and layer normalization, so that each decoder layer of the decoder provided by the present invention can decode N target objects in parallel.
It should be noted that, in order to make the self-attention encoder and the self-attention decoder have corresponding position encoding information, the position encoding information output from the position encoding module may also be used as the input of the self-attention decoder.
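As a rough usage sketch, PyTorch's built-in transformer decoder mirrors this layer layout (self-attention, cross-attention over the encoder output, FFN, each with a residual connection and LayerNorm); note that the built-in layer does not expose the patent's per-layer position coding of keys and queries, and all sizes below are assumptions:

```python
import torch
import torch.nn as nn

d_model, n_queries = 512, 100                      # assumed sizes for illustration
decoder_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)

memory = torch.rand(1024, 1, d_model)              # self-attention features from the encoder
queries = torch.zeros(n_queries, 1, d_model)       # preset target parameters (e.g. learned queries)
first_feature_data = decoder(queries, memory)      # decoded features, one per predicted target
```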
In summary, the present invention employs an encoder based on the self-attention mechanism to encode the point cloud features into the self-attention features, and a decoder based on the self-attention mechanism to decode them into the first feature data, and uses the multi-head self-attention mechanism in both the encoder and the decoder, so that the self-attention-mechanism-based neural network model of the present invention trains faster than other neural network models of the prior art.
Fig. 9 is a schematic flow chart of the step of inputting the first feature data into the second neural network model to obtain the second feature data, as shown in fig. 9. In step 104, the step of inputting the first feature data into the feedforward neural network model to obtain the second feature data output by the feedforward neural network model includes:
Step 901, determining, from the input first feature data and via the feedforward neural network model, the center coordinates, length, width, height and angle parameters of the prediction box associated with the first feature data.
Step 902, determining the class label of the three-dimensional target object associated with the prediction box by using a preset function, based on the center coordinates, length, width, height and angle parameters of the prediction box, and outputting the size, position and class label of each prediction box.
Optionally, the feed-forward neural network (FFN) is composed of a three-layer perceptron with hidden layer size d, a ReLU (activation) layer, and a linear projection layer.
Wherein the size, position, and class label of each prediction box constitute the second feature data, and the class label includes a special class label indicating that no three-dimensional target object is detected.
Optionally, the prediction box (the "predicted 3D box" shown in fig. 1) is defined by the parameters (x, y, z, w, l, h, θ), where (x, y, z) are the coordinates of the box center and (w, l, h, θ) are the width, length, height and angle of the box. That is, a vector (x, y, z, w, l, h, θ) represents the parameters of the prediction box corresponding to each target object.
Although the present invention predicts a fixed-size set of N bounding boxes, N is typically much larger than the number of objects actually of interest, so an additional special class label is required to indicate that no object is detected. This class plays a role similar to the background class in standard target detection methods.
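A sketch of such a feed-forward prediction head, assuming hidden size d = 512 and the last class index reserved for the "no object" label (all names are hypothetical, not the patent's code):

```python
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    """Three-layer perceptron with hidden size d plus linear projections:
    one branch regresses the box (x, y, z, w, l, h, theta), the other
    predicts class probabilities including a special 'no object' label."""
    def __init__(self, d: int = 512, num_classes: int = 3):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                 nn.Linear(d, d), nn.ReLU(),
                                 nn.Linear(d, d), nn.ReLU())
        self.box_head = nn.Linear(d, 7)                # x, y, z, w, l, h, theta
        self.cls_head = nn.Linear(d, num_classes + 1)  # +1: "no object" class

    def forward(self, first_feature_data):
        h = self.mlp(first_feature_data)
        boxes = self.box_head(h)
        cls_prob = self.cls_head(h).softmax(dim=-1)    # preset softmax function
        return boxes, cls_prob
```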
In summary, the feedforward neural network model is adopted to output the position prediction and class prediction of each target, and the final predicted targets are then determined through a score threshold.
Specifically, the three-dimensional target object detection method disclosed by the present invention further includes: training the self-attention-mechanism-based neural network model and the feed-forward neural network model using three-dimensional point cloud data of known three-dimensional target objects before these models perform three-dimensional target object detection.
The following describes the three-dimensional target object detection device provided by the present invention; the three-dimensional target object detection device described below and the three-dimensional target object detection method described above correspond to each other and may be referred to in conjunction.
Fig. 10 is a schematic structural diagram of a three-dimensional target object detection apparatus provided by the present invention, as shown in fig. 10. The three-dimensional target object detection apparatus 1000 of the present invention comprises a feature extraction and position coding module 1010, a first processing module 1020, and a second processing module 1030, wherein,
the feature extraction and location coding module 1010 is configured to acquire three-dimensional point cloud data associated with at least one three-dimensional target object, perform feature extraction on the three-dimensional point cloud data to obtain original feature data, perform location coding on the three-dimensional point cloud data to obtain location coding information, and fuse the location coding information and the original feature data to obtain point cloud features.
A first processing module 1020, configured to input the point cloud feature into a first neural network model to obtain first feature data output by the first neural network model.
A second processing module 1030, configured to input the first feature data into a second neural network model to obtain second feature data output by the second neural network model, where the second feature data indicates attribute information of the detected at least one three-dimensional target object.
The three-dimensional target object detection apparatus according to the present invention is described in detail below by way of an example.
Fig. 11 is a schematic diagram of an exemplary three-dimensional target object detection architecture provided by the present invention, as shown in fig. 11. The invention provides a three-dimensional target object detection device, comprising: the device comprises a feature extraction and position coding module, a first processing module and a second processing module.
It should be understood that representations of three-dimensional data generally fall into four types: (1) point cloud: composed of N D-dimensional points; when D is 3, a point usually represents (x, y, z) coordinates, but may also include other features such as normal vectors and intensities. (2) Mesh: a combination of triangular and quadrilateral patches. (3) Voxels: the object is characterized by a three-dimensional grid of 0s and 1s. (4) Multi-angle RGB or RGB-D images. The point cloud is used because it is closer to the raw representation of the object (e.g., a radar scanning an object directly generates a point cloud) and its representation is simpler: an object is represented by just one N × D matrix.
A point cloud may be acquired by a three-dimensional laser radar, obtained during three-dimensional reconstruction from two-dimensional images, or computed from a three-dimensional model. A point cloud is a data set of points in some coordinate system; the points carry rich information, including three-dimensional coordinates X, Y, Z, color, classification value, intensity value, time, and the like.
Optionally, the three-dimensional point cloud data of the present invention is point cloud data output by a laser radar, and the attribute information of the three-dimensional target object at least includes a spatial size and position information of the three-dimensional target object.
The feature extraction and position coding module comprises a feature extraction unit and a position coding unit, and is used for acquiring three-dimensional point cloud data associated with at least one three-dimensional target object, then performing feature extraction on the three-dimensional point cloud data to obtain original feature data, performing position coding on the three-dimensional point cloud data to obtain position coding information, and fusing the position coding information and the original feature data to obtain point cloud features.
The first processing module comprises a self-attention encoder and a self-attention decoder and is used for inputting the point cloud features into a first neural network model to obtain first feature data output by the first neural network model.
Optionally, the first neural network model is a self-attention mechanism based neural network model. Specifically, the first neural network model comprises an encoder and a decoder. The encoder is used for processing the point cloud characteristics to obtain the self-attention characteristics. The input of the decoder comprises two parts, one part is the self-attention feature of the output of the encoder; another part is a preset target parameter that defines the number of class predictions output.
Further, the decoder outputs a decoding feature (i.e., the first feature data) of each target object according to the self-attention feature and the input preset target parameter.
Wherein the second processing module comprises a feedforward fully-connected neural network for inputting the first feature data into a second neural network model to obtain second feature data output by the second neural network model, and the second feature data indicates the detected attribute information of the at least one three-dimensional target object.
Optionally, the second neural network model is a feed-forward neural network model.
The second neural network model outputs the position prediction and class prediction of each target object through a feed-forward neural network (FFN), according to the decoded features of each target object (i.e., the first feature data), and then determines the final predicted targets (i.e., the second feature data, such as the objects shown in the "predicted 3D box" of fig. 1) through a preset threshold. The preset threshold on the confidence is used to filter the targets; different thresholds yield different outputs, e.g., a low confidence threshold may be set to let more targets through the filter.
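Score-threshold filtering of this kind can be sketched as follows, assuming the "no object" class occupies the last probability slot (all names are hypothetical):

```python
import torch

def filter_predictions(boxes, cls_prob, score_thresh=0.7):
    """Keep predictions whose best non-'no object' class probability exceeds
    the preset confidence threshold; a lower threshold keeps more targets."""
    scores, labels = cls_prob[..., :-1].max(dim=-1)   # ignore the last ("no object") slot
    keep = scores > score_thresh
    return boxes[keep], labels[keep], scores[keep]
```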
Alternatively, a classifier of a feed-forward neural network (FFN) can give the probability of the predicted class, and whether the result of the classifier itself is reliable is evaluated in terms of confidence. Assuming there are multiple classifiers for decision fusion, the result given by each classifier (which may be a probability or a class label) needs to be weighted by confidence; the weighted single-classifier results are then fused through different decision criteria (such as DS, LOP and LOGP) to give the final classification detection result.
The second neural network model may be a traditional feed-forward neural network (FFN) model, or a neural network model with another structure, which is not limited by the present invention.
Other aspects of the three-dimensional target object detection device provided by the invention are the same as or similar to the three-dimensional target object detection method described above, and are not repeated herein.
In summary, the three-dimensional target object detection device provided by the invention has a simple structure and can effectively improve the performance of three-dimensional target detection.
As shown in fig. 12, the present invention also proposes an electronic device, including: a processor (processor)1210, a communication Interface (Communications Interface)1220, a memory (memory)1230, and a communication bus 1240, wherein the processor 1210, the communication Interface 1220, and the memory 1230 communicate with each other via the communication bus 1240. Processor 1210 may invoke logic instructions in memory 1230 to perform any of the three-dimensional target object detection methods described previously.
Illustratively, the logic instructions in the memory 1230 described above may be implemented in the form of software functional units, and when sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the three-dimensional target object detection methods as described above.
In summary, the three-dimensional target object detection method, the three-dimensional target object detection device, the electronic device and the storage medium provided by the invention can directly process point cloud data, do not need to design any additional candidate frame, are suitable for predicting targets with various scales, and can effectively improve the detection performance of the three-dimensional target by using the attention network to perform three-dimensional target detection. In addition, the invention is based on three-dimensional detection of a self-attention network, does not have complex data processing operation, and can realize the detection of the target object based on a traditional feed-forward neural network (FFN).
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (15)
1. A method for detecting a three-dimensional target object, comprising:
acquiring three-dimensional point cloud data associated with at least one three-dimensional target object;
extracting features of the three-dimensional point cloud data to obtain original feature data, and carrying out position coding on the three-dimensional point cloud data to obtain position coding information;
fusing the position coding information and the original feature data to obtain point cloud features, and inputting the point cloud features into a neural network model based on a self-attention mechanism to obtain first feature data output by the neural network model, wherein the first feature data is used for representing features of the target object corresponding to the point cloud data;
inputting the first feature data into a feedforward neural network model to obtain second feature data output by the feedforward neural network model, the second feature data indicating attribute information of the detected at least one three-dimensional target object.
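By way of illustration (not part of the claim), the following is a minimal sketch of the claimed pipeline in PyTorch. Every name, layer size, and the single shared MLP standing in for the multi-stage feature extraction of claim 2 are our own assumptions, not claimed details:

```python
import torch
import torch.nn as nn

class ThreeDDetectorSketch(nn.Module):
    """Illustrative end-to-end pipeline for claim 1; all names and sizes are assumptions."""
    def __init__(self, d_model: int = 256, num_queries: int = 100, num_classes: int = 10):
        super().__init__()
        # stand-in for the multi-stage feature extraction of claim 2
        self.backbone = nn.Sequential(nn.Linear(3, d_model), nn.ReLU(),
                                      nn.Linear(d_model, d_model))
        self.pos_ffn = nn.Linear(3, d_model)        # maps coordinates to the feature dimension
        self.attn = nn.Transformer(d_model, nhead=8, batch_first=True)   # self-attention model
        self.queries = nn.Embedding(num_queries, d_model)    # preset target parameters (claim 12)
        self.box_head = nn.Linear(d_model, 7)                # cx, cy, cz, l, w, h, angle
        self.cls_head = nn.Linear(d_model, num_classes + 1)  # +1 for "no object" (claim 10)

    def forward(self, xyz: torch.Tensor):                    # xyz: (B, N, 3)
        raw_feat = self.backbone(xyz)                        # original feature data
        pos_code = self.pos_ffn(xyz)                         # position coding information
        point_cloud_feat = raw_feat + pos_code               # fused point cloud feature
        q = self.queries.weight.unsqueeze(0).expand(xyz.size(0), -1, -1)
        first_feat = self.attn(point_cloud_feat, q)          # first feature data
        return self.box_head(first_feat), self.cls_head(first_feat)  # second feature data

boxes, logits = ThreeDDetectorSketch()(torch.randn(2, 1024, 3))  # (2, 100, 7), (2, 100, 11)
```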
2. The method of claim 1, wherein the step of extracting the features of the three-dimensional point cloud data to obtain raw feature data comprises:
taking the full set of the three-dimensional point cloud data as the initially input feature point set, performing a multi-stage feature extraction operation, and taking the feature point set output by the last stage of the feature extraction operation as the original feature data, wherein the feature point set output by each stage of the feature extraction operation is the input of the next stage, and each stage of the feature extraction operation comprises:
selecting a plurality of points from a full set of the input feature point set, each of the points defining a centroid of a local area in which the point is located;
for each centroid, constructing a local area point set based on adjacent points near the centroid;
and coding each local region point set to obtain a characteristic point corresponding to each local region point set, wherein all the characteristic points form an output characteristic point set.
3. The three-dimensional target object detection method according to claim 2, wherein the step of selecting a plurality of points from the full set of the input feature point set includes:
selecting, by using an iterative farthest point sampling algorithm, a subset {xi1, xi2, ..., xim} from the full set {x1, x2, ..., xn} of the input feature point set, such that each newly selected point is the one most distant from the points already selected.
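One straightforward realisation of this iterative farthest point sampling is sketched below (the O(n·m) loop and the random seed point are implementation choices, not claimed details):

```python
import torch

def farthest_point_sampling(xyz: torch.Tensor, n_samples: int) -> torch.Tensor:
    """Iterative FPS: returns indices of n_samples points from xyz of shape (N, 3)."""
    n = xyz.size(0)
    selected = torch.zeros(n_samples, dtype=torch.long)
    dist = torch.full((n,), float("inf"))
    farthest = int(torch.randint(0, n, (1,)))      # arbitrary seed point
    for i in range(n_samples):
        selected[i] = farthest
        # distance of every point to its nearest already-selected point
        d = ((xyz - xyz[farthest]) ** 2).sum(dim=1)
        dist = torch.minimum(dist, d)
        farthest = int(torch.argmax(dist))         # next pick: the most distant point
    return selected

centroid_idx = farthest_point_sampling(torch.randn(1024, 3), 128)
```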
4. The method according to claim 2, wherein the step of constructing, for each of the centroids, a local region point set based on neighboring points near the centroid comprises:
obtaining N' × K × (d + C) feature points based on the input feature point set of size N × (d + C) and the centroid set of size N' × d, wherein the feature points form N' local region point sets;
wherein N represents the number of points in the point cloud data, N' represents the number of local regions, C represents the feature dimension, d represents the coordinate dimension, each local region point set corresponds to one local region, and K represents the number of neighbouring points of each centroid.
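A sketch of this grouping step using a K-nearest-neighbour search; the claim only requires points near each centroid, so kNN here is one assumed choice (ball query is another common one):

```python
import torch

def group_neighbors(points: torch.Tensor, centroids: torch.Tensor, k: int) -> torch.Tensor:
    """points: (N, d+C) with coordinates first; centroids: (N', d) -> (N', K, d+C)."""
    d = centroids.size(1)
    dists = torch.cdist(centroids, points[:, :d])   # (N', N) pairwise distances
    idx = dists.topk(k, largest=False).indices      # K nearest neighbours per centroid
    return points[idx]                              # local region point sets (N', K, d+C)

regions = group_neighbors(torch.randn(1024, 3 + 32), torch.randn(128, 3), k=16)
```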
5. The method according to claim 2, wherein the step of encoding each of the local region point sets to obtain the feature points corresponding to each of the local region point sets comprises:
obtaining an output feature point set containing N' × (d + C') feature points, based on the input N' local region point sets containing N' × K × (d + C) feature points, by encoding the centroid contained in each local region point set together with its neighbouring points so as to abstract the local feature of the local region indicated by that point set;
wherein N' represents the number of local regions, K represents the number of neighbouring points of each centroid, d represents the coordinate dimension, C represents the feature dimension, C' represents the new local feature dimension, and C' > C.
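A PointNet-style sketch of this encoding, one plausible reading of the claim: coordinates are made centroid-relative, a shared MLP lifts each point to C' dimensions, and a symmetric max-pool abstracts one local feature per region. The MLP depth and the max-pool are assumptions:

```python
import torch
import torch.nn as nn

class LocalRegionEncoder(nn.Module):
    """Sketch: encode each local region point set (N', K, d+C) to (N', d+C')."""
    def __init__(self, d: int = 3, c_in: int = 32, c_out: int = 64):
        super().__init__()
        assert c_out > c_in, "the claim requires C' > C"
        self.mlp = nn.Sequential(nn.Linear(d + c_in, c_out), nn.ReLU(),
                                 nn.Linear(c_out, c_out))

    def forward(self, regions: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
        # regions: (N', K, d + C); centroids: (N', d)
        d = centroids.size(1)
        rel = torch.cat([regions[..., :d] - centroids.unsqueeze(1),  # centroid-relative xyz
                         regions[..., d:]], dim=-1)
        local_feat = self.mlp(rel).max(dim=1).values                 # (N', C'), pool over K
        return torch.cat([centroids, local_feat], dim=-1)            # (N', d + C')

out = LocalRegionEncoder(d=3, c_in=32, c_out=64)(torch.randn(128, 16, 35),
                                                 torch.randn(128, 3))  # (128, 67)
```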
6. The method according to claim 1, wherein the step of performing position coding on the three-dimensional point cloud data to obtain position coding information comprises:
acquiring coordinates of a preset key point;
subtracting the coordinates of the preset key point from the position coordinates of each point of the three-dimensional point cloud data to obtain the relative coordinates of each point of the three-dimensional point cloud data;
generating the position-coding information based on the relative coordinates.
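This position coding reduces to a coordinate shift; a minimal sketch, where the choice of key point (here a passed-in tensor, e.g. a scene centre) is left open by the claim:

```python
import torch

def relative_position_code(xyz: torch.Tensor, key_point: torch.Tensor) -> torch.Tensor:
    """Relative coordinates w.r.t. a preset key point. xyz: (N, 3); key_point: (3,)."""
    return xyz - key_point  # broadcasts the key point over all N points

rel = relative_position_code(torch.randn(1024, 3), torch.tensor([0.0, 0.0, 0.0]))
```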
7. The method of claim 1, wherein the step of fusing the position-coded information and the raw feature data to obtain the point cloud feature comprises:
mapping the coordinates of each point of the three-dimensional point cloud data to the feature dimension of the original feature data;
and adding the generated position coding information to the original feature data through a feedforward fully-connected neural network to obtain the point cloud features.
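A sketch of this fusion step: the relative coordinates are mapped to the feature dimension by a small feedforward fully-connected network and added to the original features. The layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class PositionFusion(nn.Module):
    """Map coordinates to the feature dimension and add them to the raw features."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim))

    def forward(self, rel_xyz: torch.Tensor, raw_feat: torch.Tensor) -> torch.Tensor:
        # rel_xyz: (N, 3) relative coordinates; raw_feat: (N, feat_dim)
        return raw_feat + self.ffn(rel_xyz)   # fused point cloud feature

fused = PositionFusion(256)(torch.randn(1024, 3), torch.randn(1024, 256))  # (1024, 256)
```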
8. The method of claim 2, wherein the step of inputting the point cloud features into a neural network model based on a self-attention mechanism to obtain first feature data output by the neural network model comprises:
inputting the point cloud features to a multi-head self-attention layer of an encoder of the neural network model based on the self-attention mechanism, and processing the point cloud features by a multi-head self-attention function arranged in the multi-head self-attention layer to obtain self-attention features;
inputting the self-attention feature to a decoder of the self-attention mechanism-based neural network model for decoding to output the first feature data;
the neural network model based on the self-attention mechanism comprises an encoder and a decoder, wherein the encoder comprises a plurality of encoder layers, each encoder layer comprises a first sublayer and a second sublayer, the first sublayer is a multi-head self-attention layer, the second sublayer is a feedforward fully-connected neural network, and the encoder layers encode the point cloud features in parallel.
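The two-sublayer encoder layer described here (multi-head self-attention plus a feedforward fully-connected network) matches the structure of PyTorch's `TransformerEncoderLayer`; a sketch with assumed hyper-parameters:

```python
import torch
import torch.nn as nn

# Each encoder layer = multi-head self-attention sublayer + feedforward sublayer.
# d_model, nhead and the layer count are assumptions, not claimed values.
encoder_layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

point_cloud_feat = torch.randn(2, 1024, 256)     # (batch, points, feature)
self_attention_feat = encoder(point_cloud_feat)  # (2, 1024, 256)
```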
9. The method of claim 8, wherein the step of inputting the point cloud features into a neural network model based on a self-attention mechanism to obtain first feature data output by the neural network model further comprises:
the multi-head attention layer of the decoder of the self-attention-mechanism-based neural network model receives the self-attention feature and the position coding information; and
decoding the self-attention feature based on the self-attention feature and the position encoding information to output the first feature data;
the decoder comprises a plurality of decoder layers, each decoder layer comprises a first sublayer, a second sublayer, and a third sublayer, the first sublayer is a multi-head self-attention layer, the second sublayer is a multi-head attention layer, the third sublayer is a feedforward fully-connected neural network, and the decoder layers decode the self-attention feature in parallel.
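Likewise, this three-sublayer decoder layer (self-attention, cross multi-head attention over the encoder output, feedforward network) matches `TransformerDecoderLayer`; a sketch in which learned queries stand in for the preset target parameters of claim 12:

```python
import torch
import torch.nn as nn

# Each decoder layer = self-attention + cross-attention + feedforward sublayers.
# The dimensions and layer count are assumptions.
decoder_layer = nn.TransformerDecoderLayer(d_model=256, nhead=8, batch_first=True)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)

self_attention_feat = torch.randn(2, 1024, 256)  # encoder output ("memory")
queries = torch.randn(2, 100, 256)               # stand-in for preset target parameters
first_feature_data = decoder(queries, self_attention_feat)  # (2, 100, 256)
```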
10. The three-dimensional target object detection method according to claim 1, wherein the step of inputting the first feature data into a feedforward neural network model to obtain second feature data output by the feedforward neural network model comprises:
determining, from the input first feature data, center coordinates, length, width, height, and angle parameters of a prediction box associated with the first feature data via the feedforward neural network model;
determining, by using a preset function and based on the center coordinates, length, width, height, and angle parameters of the prediction box, a category label of the three-dimensional target object associated with the prediction box, and outputting the size, position, and category label of each prediction box;
wherein the size, position, and category label of each prediction box constitute the second feature data, and the category labels include a special label indicating that no three-dimensional target object is detected.
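A sketch of such a prediction head: a linear regressor for the box parameters and a classifier whose softmax (one plausible "preset function") reserves a final index for the "no object" label. The split into two linear layers is an assumption:

```python
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    """Sketch: regress box parameters and classify, with a 'no object' class."""
    def __init__(self, d_model: int = 256, num_classes: int = 10):
        super().__init__()
        self.box = nn.Linear(d_model, 7)                 # cx, cy, cz, l, w, h, angle
        self.cls = nn.Linear(d_model, num_classes + 1)   # last index: "no object"

    def forward(self, first_feature: torch.Tensor):
        boxes = self.box(first_feature)                             # (B, Q, 7)
        labels = self.cls(first_feature).softmax(-1).argmax(-1)     # (B, Q) category labels
        return boxes, labels

boxes, labels = PredictionHead()(torch.randn(2, 100, 256))  # (2, 100, 7), (2, 100)
```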
11. The three-dimensional target object detection method according to claim 1, characterized in that the method further comprises:
training the self-attention-mechanism-based neural network model and the feedforward neural network model by using three-dimensional point cloud data of known three-dimensional target objects before the two models perform three-dimensional target object detection.
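A minimal training-step sketch for this claim; the stand-in network, synthetic data, and smooth-L1 loss are placeholders, since the claim fixes only that training uses point clouds of known objects (set-prediction models typically also match predictions to ground truth first):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 7))  # stand-in network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):
    xyz = torch.randn(1024, 3)      # stand-in for labelled point cloud data
    target = torch.randn(1024, 7)   # stand-in for known box parameters
    loss = nn.functional.smooth_l1_loss(model(xyz), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```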
12. The three-dimensional target object detection method according to claim 8, characterized in that the method further comprises:
before the multi-head attention layer of the decoder of the self-attention-mechanism-based neural network model receives the self-attention feature and the position coding information, the multi-head self-attention layer of the decoder receives an input preset target parameter, and the preset target parameter is used for limiting the quantity of the output second feature data.
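The preset target parameter can be realised as learned query embeddings whose count bounds the number of predictions, in the manner of DETR-style object queries (an assumption; the claim states only the limiting function):

```python
import torch.nn as nn

num_predictions = 100                                 # caps the quantity of second feature data
target_queries = nn.Embedding(num_predictions, 256)   # fed to the decoder's self-attention layer
```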
13. A three-dimensional target object detection apparatus, comprising:
a feature extraction and position coding module, configured to acquire three-dimensional point cloud data associated with at least one three-dimensional target object, perform feature extraction on the three-dimensional point cloud data to obtain original feature data, perform position coding on the three-dimensional point cloud data to obtain position coding information, and fuse the position coding information with the original feature data to obtain point cloud features;
a first processing module, configured to input the point cloud features into a neural network model based on a self-attention mechanism to obtain first feature data output by the neural network model, where the first feature data is used to represent features of the target object corresponding to the point cloud data;
and a second processing module, configured to input the first feature data into a feedforward neural network model to obtain second feature data output by the feedforward neural network model, the second feature data indicating attribute information of the detected at least one three-dimensional target object.
14. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the three-dimensional target object detection method according to any one of claims 1 to 12 when executing the program.
15. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the three-dimensional target object detection method according to any one of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110807099.6A CN113658100B (en) | 2021-07-16 | 2021-07-16 | Three-dimensional target object detection method, three-dimensional target object detection device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113658100A true CN113658100A (en) | 2021-11-16 |
CN113658100B CN113658100B (en) | 2024-09-06 |
Family
ID=78489534
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110807099.6A Active CN113658100B (en) | 2021-07-16 | 2021-07-16 | Three-dimensional target object detection method, three-dimensional target object detection device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113658100B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111815776A (en) * | 2020-02-04 | 2020-10-23 | 山东水利技师学院 | Three-dimensional building fine geometric reconstruction method integrating airborne and vehicle-mounted three-dimensional laser point clouds and streetscape images |
CN112633378A (en) * | 2020-12-24 | 2021-04-09 | 电子科技大学 | Intelligent detection method and system for multimodal image fetus corpus callosum |
Non-Patent Citations (4)
Title |
---|
MENG-HAO GUO ET AL.: "PCT: Point Cloud Transformer", arXiv *
MINGTAO FENG ET AL.: "Point Attention Network for Semantic Segmentation of 3D Point Clouds", arXiv *
WAN PENG: "3D point cloud data object detection based on F-PointNet", Journal of Shandong University (Engineering Science), pages 120-126 *
ZHIZHOU: "Understanding PointNet++: this article is all you need!", retrieved from the Internet: <URL:http://zhuanlan.zhihu.com/p/266324173> *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114005110B (en) * | 2021-12-30 | 2022-05-17 | 智道网联科技(北京)有限公司 | 3D detection model training method and device, and 3D detection method and device |
CN114005110A (en) * | 2021-12-30 | 2022-02-01 | 智道网联科技(北京)有限公司 | 3D detection model training method and device, and 3D detection method and device |
CN114549315A (en) * | 2022-02-21 | 2022-05-27 | 清华大学 | Object point cloud generation method and device for self-supervision structure decoupling |
CN114663619A (en) * | 2022-02-24 | 2022-06-24 | 清华大学 | Three-dimensional point cloud object prediction method and device based on self-attention mechanism |
CN114627170B (en) * | 2022-03-11 | 2024-06-07 | 平安科技(深圳)有限公司 | Three-dimensional point cloud registration method, three-dimensional point cloud registration device, computer equipment and storage medium |
CN114627170A (en) * | 2022-03-11 | 2022-06-14 | 平安科技(深圳)有限公司 | Three-dimensional point cloud registration method and device, computer equipment and storage medium |
CN114612895A (en) * | 2022-03-18 | 2022-06-10 | 上海伯镭智能科技有限公司 | Road detection method and device in non-standard road scene |
CN114549608A (en) * | 2022-04-22 | 2022-05-27 | 季华实验室 | Point cloud fusion method and device, electronic equipment and storage medium |
WO2023216460A1 (en) * | 2022-05-09 | 2023-11-16 | 合众新能源汽车股份有限公司 | Aerial view-based multi-view 3d object detection method, memory and system |
CN114882024A (en) * | 2022-07-07 | 2022-08-09 | 深圳市信润富联数字科技有限公司 | Target object defect detection method and device, electronic equipment and storage medium |
CN115661812A (en) * | 2022-11-14 | 2023-01-31 | 苏州挚途科技有限公司 | Target detection method, target detection device and electronic equipment |
CN115588187A (en) * | 2022-12-13 | 2023-01-10 | 华南师范大学 | Pedestrian detection method, device and equipment based on three-dimensional point cloud and storage medium |
CN115829898A (en) * | 2023-02-24 | 2023-03-21 | 北京百度网讯科技有限公司 | Data processing method, data processing device, electronic device, medium, and autonomous vehicle |
CN115829898B (en) * | 2023-02-24 | 2023-06-02 | 北京百度网讯科技有限公司 | Data processing method, device, electronic equipment, medium and automatic driving vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN113658100B (en) | 2024-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113658100B (en) | Three-dimensional target object detection method, three-dimensional target object detection device, electronic equipment and storage medium | |
Dizaji et al. | Unsupervised deep generative adversarial hashing network | |
CN107209873B (en) | Hyper-parameter selection for deep convolutional networks | |
US11816841B2 (en) | Method and system for graph-based panoptic segmentation | |
CN112949647B (en) | Three-dimensional scene description method and device, electronic equipment and storage medium | |
CN113850270B (en) | Semantic scene completion method and system based on point cloud-voxel aggregation network model | |
JP5591178B2 (en) | Method for classifying objects in test images | |
CN110188827B (en) | Scene recognition method based on convolutional neural network and recursive automatic encoder model | |
JP2011008631A (en) | Image conversion method and device, and pattern identification method and device | |
CN114386534A (en) | Image augmentation model training method and image classification method based on variational self-encoder and countermeasure generation network | |
CN117746260B (en) | Remote sensing data intelligent analysis method and system | |
CN110111365B (en) | Training method and device based on deep learning and target tracking method and device | |
CN111428758A (en) | Improved remote sensing image scene classification method based on unsupervised characterization learning | |
CN109711442B (en) | Unsupervised layer-by-layer generation confrontation feature representation learning method | |
CN118351320B (en) | Instance segmentation method based on three-dimensional point cloud | |
CN112819171A (en) | Data searching method and system based on table function and computer storage medium | |
Ahmad et al. | 3D capsule networks for object classification from 3D model data | |
CN117351550A (en) | Grid self-attention facial expression recognition method based on supervised contrast learning | |
CN112163114A (en) | Image retrieval method based on feature fusion | |
CN118411714A (en) | Image texture classification method and system | |
CN117671666A (en) | Target identification method based on self-adaptive graph convolution neural network | |
CN116912486A (en) | Target segmentation method based on edge convolution and multidimensional feature fusion and electronic device | |
Darma et al. | GFF-CARVING: Graph Feature Fusion for the Recognition of Highly Varying and Complex Balinese Carving Motifs | |
Li et al. | Genetic algorithm optimized SVM in object-based classification of quickbird imagery | |
Li | Special character recognition using deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||