CN111612059A - Construction method of multi-plane coding point cloud feature deep learning model based on PointPillars
- Publication number
- CN111612059A (application CN202010425656.3A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- plane
- coordinate
- feature
- deep learning
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention belongs to the technical field of computer vision and particularly discloses a construction method of a multi-plane coding point cloud feature deep learning model based on PointPillars. The construction method comprises the following steps: acquiring a training sample, and training the multi-plane coding point cloud feature deep learning model with the training sample, so that when point cloud data in the training sample are input into the trained model, a recognition result is obtained, namely the bounding-box coordinates of the detection target in the point cloud data and the probability that a target exists within those bounding-box coordinates. The multi-plane coding point cloud feature deep learning model constructed by the invention samples the point cloud data over the full three-dimensional space and learns and fuses the pillar features obtained by sampling in three planes. This solves the loss of spatial information in existing point cloud sampling, reduces the loss of detection precision caused by the different orientations of the point cloud in space, and yields good robustness and high detection accuracy.
Description
Technical Field
The invention belongs to the technical field of computer vision and particularly relates to a construction method of a multi-plane coding point cloud feature deep learning model based on PointPillars.
Background
Target detection is an important task in computer vision, aiming to identify the category of a target and locate it. Traditional two-dimensional target detection is by now a mature field, but it operates at the image level and captures only the planar information of an object. With the rapid development of the autonomous driving industry, target detection focuses more and more on the three-dimensional information of objects, and deep-learning-based three-dimensional target detection has developed rapidly. Current three-dimensional target detection relies mainly on images and lidar point clouds for environmental perception. From these two kinds of data, information about the spatial structure of an object can be extracted, including its pose, size, motion direction and shape. Recognizing objects from lidar point cloud data is the core problem of current three-dimensional target detection; because point cloud data are sparse, unordered and unstructured, and recognition is difficult in extreme environments, three-dimensional target detection from lidar point clouds remains an open problem.
In recent years, scholars at home and abroad have proposed a variety of three-dimensional target detection algorithms, mainly applied to point clouds acquired by lidar sensors in driverless scenarios. These methods include AVOD-style approaches, which process the bird's-eye-view image and the 2D image separately with 2D CNNs and then fuse them, and voxelization of the three-dimensional scene, which converts the point cloud into regular 3D voxels and learns features with three-dimensional convolution, but suffers from excessive computation and low speed. The PointPillars paper proposed a new encoding scheme that uses PointNet to learn a vertical-column (pillar) representation of the point cloud; with the encoded features, a mature 2D convolution framework can be used for learning, giving higher speed and lower computational cost: the speed reaches 62 Hz, and the fast version reaches 105 Hz.
MV3D is a multi-view 3D object recognition network proposed by scholars. It takes multi-modal data as input and predicts targets in 3D space, using RGB images, the lidar bird's-eye view and the lidar front view as network inputs to achieve accurate vehicle recognition and 3D box regression.
There are also three-dimensional target detection methods proposed by scholars based on monocular, binocular and depth-camera vision. For indoor scenes, the scale is small, the long-range targets of outdoor scenes do not appear, and the object categories are more diverse, so richer input information is needed; methods based on binocular/depth cameras are therefore more suitable. A depth-map channel is added, in which each pixel value encodes the actual distance from the sensor to the object surface, and methods that fuse multiple features such as image texture features and depth features are adopted, for example the DepthRCNN, AD3D and 2D-driven algorithms, but their improvement in detection mainly stays within the effectiveness of 2D target detection models. Moreover, indoor scenes are cluttered, contain many small targets and many occluding objects, which often degrades detection precision. In 2012, Fidler et al. extended DPM to three-dimensional object detection under monocular vision, expressing each object class as a deformable three-dimensional cuboid; through the transformation relationship between object parts and the faces of the three-dimensional detection box, they effectively realized three-dimensional detection of some indoor objects with obvious shape characteristics, such as beds and tables with clear cuboid shapes. To improve the detection accuracy of multiple targets in indoor scenes, Zhuo et al. proposed an end-to-end monocular three-dimensional target detection network combining a depth estimation network and a 3D RPN.
For three-dimensional target detection in outdoor scenes, the three-dimensional geometric information of a target can be regressed from a monocular vision sensor by combining methods such as prior-information fusion, geometric features, three-dimensional model matching and monocular depth estimation networks. Chen et al. proposed the Mono3D target detection method in 2016, but when 3D detection boxes are extracted using complex prior information, errors accumulate in the energy-loss computation, so the detection performance is not outstanding; there is a gap compared with 2D detectors, and end-to-end training is not possible. Mousavian et al. proposed the Deep3DBox 3D target detection method, drawing on the experience of 2D target detector networks. The method extends a 2D target detector network and obtains the three-dimensional size and heading angle of the target by regression, which greatly reduces the computational cost and improves the computing speed; however, due to the lack of depth information, there is no substantial improvement in detection accuracy. Monocular three-dimensional target detection algorithms generally suffer from low positioning accuracy for small and occluded targets, and the estimation error of the depth information is the main reason for the low detection accuracy, especially for locating distant and occluded targets. Binocular/depth cameras rely on the advantage of accurate depth information and, especially for target detection and positioning tasks in three-dimensional space, show an obvious improvement in detection precision over monocular vision algorithms.
In recent years, with the development of deep learning and artificial intelligence, more and more people have applied these techniques to various fields. Application scenarios in fields such as autonomous driving are complex and changeable, and traditional two-dimensional target detection algorithms have obvious limitations; improving the accuracy and precision of detection and guaranteeing driver safety places great demands on the precision and speed of three-dimensional target detection. Moreover, driverless scenes are open and spacious, the point cloud collected by the lidar is non-uniform, and distant points are very sparse, so methods that perform deep learning on spatial point clouds need to retain complete spatial information.
Disclosure of Invention
Aiming at the problems and defects in the prior art, the invention aims to provide a construction method of a multi-plane coding point cloud feature deep learning model based on PointPillars.
In order to realize the purpose of the invention, the technical scheme adopted by the invention is as follows:
The invention first provides a construction method of a multi-plane coding point cloud feature deep learning model based on PointPillars, which comprises the following steps: acquiring a training sample, wherein the training sample comprises point cloud data containing a detection target and labeling information corresponding to the point cloud data, the labeling information indicating the bounding-box coordinates of the detection target in the point cloud data and the classification label of the detection target within those bounding-box coordinates; and training the multi-plane coding point cloud feature deep learning model with the training sample, so that when the point cloud data in the training sample are input into the trained model, a recognition result is obtained, namely the bounding-box coordinates of the detection target in the point cloud data and the probability that a target exists within the bounding-box coordinates.
According to the construction method, preferably, the multi-plane coding point cloud feature deep learning model is an improvement based on the PointPillars algorithm; the specific improvement is as follows: a multi-plane fusion feature encoding network replaces the feature encoder network in the PointPillars algorithm. The multi-plane coding point cloud feature deep learning model consists of the multi-plane fusion feature encoding network, a Backbone network and a Detection Head network, wherein the Backbone and Detection Head networks are the original Backbone and Detection Head networks of the PointPillars algorithm, with their structures unchanged. The input of the multi-plane fusion feature encoding network is point cloud data, and its output is a sparse pseudo image converted from the fused point cloud features; the input of the Backbone network is the sparse pseudo image, and its output is the convolution feature map of the sparse pseudo image; the input of the Detection Head network is the convolution feature map output by the Backbone network, and its output is the predicted bounding-box coordinates of the detection target in the point cloud data and the probability that a target exists in the predicted bounding box.
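For orientation, the three-stage data flow described above can be outlined as follows. This is only a minimal sketch: the encoder, backbone and detection-head modules are stand-ins for the networks named in the text, not the patent's concrete implementations.

```python
import torch.nn as nn

class MultiPlaneEncodedPointPillars(nn.Module):
    """Sketch of the pipeline: multi-plane fusion feature encoding network ->
    Backbone (2D CNN) -> Detection Head."""

    def __init__(self, encoder, backbone, detection_head):
        super().__init__()
        self.encoder = encoder                  # point cloud -> sparse pseudo image (3C, H, W)
        self.backbone = backbone                # pseudo image -> convolution feature map
        self.detection_head = detection_head    # feature map -> boxes and target probabilities

    def forward(self, points):
        pseudo_image = self.encoder(points)
        feature_map = self.backbone(pseudo_image)
        boxes, scores = self.detection_head(feature_map)
        return boxes, scores                    # (x, y, z, w, l, h, theta) per box, plus probability
```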
According to the construction method, preferably, the specific steps of training the multi-plane coding point cloud feature deep learning model by using the training samples are as follows:
(1) inputting a training sample into the multi-plane fusion feature encoding network, which fuses and encodes the features of the point cloud data on the x-y, x-z and y-z planes of the training sample to obtain the fused point cloud features of the x-y plane and converts them into a sparse pseudo image;
(2) inputting the sparse pseudo image into the Backbone network for feature extraction to obtain the convolution feature map of the sparse pseudo image;
(3) inputting the convolution feature map of the sparse pseudo image into the Detection Head network to obtain the coordinates of the predicted bounding box of the detected target in the point cloud data and the probability that the target exists in the predicted bounding box;
(4) taking the predicted bounding-box coordinates obtained in step (3) as the predicted result and the bounding-box coordinates labeled in the training sample as the ground truth, constructing a loss function from the predicted and true results (a squared-error loss function is adopted), optimizing the network parameters of the multi-plane coding point cloud feature deep learning model by stochastic gradient descent to reduce the value of the loss function, and iterating this process to optimize the network parameters until the loss function stops decreasing; the training of the multi-plane coding point cloud feature deep learning model is then finished, and the trained multi-plane coding point cloud feature deep learning model is obtained (a minimal sketch of this optimization loop follows this list).
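The optimization in step (4) can be sketched as below; this is a minimal sketch under the stated squared-error loss and stochastic gradient descent, with a hypothetical model and data loader, and with the classification branch omitted for brevity.

```python
import torch
import torch.nn as nn

def train_model(model, data_loader, epochs=80, lr=2e-4):
    """Minimal loop: squared-error loss on predicted vs. labeled box coordinates,
    optimized with stochastic gradient descent until the loss stops decreasing."""
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for points, gt_boxes in data_loader:    # point cloud and labeled (x, y, z, w, l, h, theta) boxes
            pred_boxes, _scores = model(points)
            loss = criterion(pred_boxes, gt_boxes)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```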
According to the above construction method, preferably, the specific operation of step (1) is:
(1a) discretizing the point cloud data in the training sample onto a uniformly spaced grid on the x-y plane, with no limit in the z direction, thereby creating a series of pillars on the x-y plane; the points contained in each pillar are expanded with the features r, x_c, y_c, z_c, x_p, y_p to obtain the expanded point cloud feature (x, y, z, r, x_c, y_c, z_c, x_p, y_p), so that the expanded feature dimension D is 9; wherein x, y, z are the initial coordinates of a point; r is the point's reflectance; x_c, y_c, z_c are the arithmetic mean of the coordinates of all points in the pillar; and x_p, y_p are the offsets of each point from the pillar's coordinate center in the coordinate system of the current plane;
(1b) on the x-y plane, adjusting the number of points contained in all non-empty pillars to be the same, then creating a dense tensor (D, P, N) from the number of non-empty pillars on the plane, the number of points contained in each non-empty pillar and the features of those points, giving the features (D, P, N) of each non-empty pillar on the x-y plane, wherein D is the feature dimension of the points in a non-empty pillar, P is the number of non-empty pillars on the x-y plane, and N is the number of points contained in a non-empty pillar;
(1c) performing feature learning on the features (D, P, N) of the non-empty pillars on the x-y plane with a PointNet network to obtain the final features (C, P, N) of the points in each non-empty pillar on the x-y plane; wherein C is the new feature dimension obtained after the points are learned by the PointNet network, P is the number of non-empty pillars on the x-y plane, and N is the number of points contained in a non-empty pillar;
(1d) likewise discretizing the point cloud data in the training sample onto a uniformly spaced grid on the x-z plane, with no limit in the y direction, creating a series of pillars on the x-z plane; the points contained in each pillar are expanded with the features r, x_c, y_c, z_c, x_p, z_p to obtain the expanded point cloud feature (x, y, z, r, x_c, y_c, z_c, x_p, z_p); wherein x, y, z are the initial coordinates of a point; r is the point's reflectance; x_c, y_c, z_c are the arithmetic mean of the coordinates of all points in the pillar; and x_p, z_p are the offsets of each point from the pillar's coordinate center in the coordinate system of the current plane; the final point cloud features (C, P, N) of each non-empty pillar on the x-z plane are then obtained following the operations of steps (1b) to (1c);
(1e) likewise discretizing the point cloud data in the training sample onto a uniformly spaced grid on the y-z plane, with no limit in the x direction, creating a series of pillars on the y-z plane; the points contained in each pillar are expanded with the features r, x_c, y_c, z_c, y_p, z_p to obtain the expanded point cloud feature (x, y, z, r, x_c, y_c, z_c, y_p, z_p); wherein x, y, z are the initial coordinates of a point; r is the point's reflectance; x_c, y_c, z_c are the arithmetic mean of the coordinates of all points in the pillar; and y_p, z_p are the offsets of each point from the pillar's coordinate center in the coordinate system of the current plane; the final point cloud features (C, P, N) of each non-empty pillar on the y-z plane are then obtained following the operations of steps (1b) to (1c);
(1f) superposing the final point features of all non-empty pillars on the x-z and y-z planes with the final point features of all non-empty pillars on the x-y plane to obtain the fused point cloud features (3C, P, N) on the x-y plane; the fused features (3C, P, N) are processed with a max-pooling operation to obtain the tensor (3C, P), from which the sparse pseudo image (3C, H, W) is created, wherein H is the height of the sparse pseudo image and W is its width (a code sketch of this multi-plane encoding follows this list).
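A minimal sketch of steps (1a) to (1f) is given below, under several simplifying assumptions: the helper names are hypothetical, the PointNet is reduced to a single shared linear layer with ReLU, and the three sampling planes are assumed to produce the same number of pillars P. Tensor shapes follow the (D, P, N), (C, P, N) and (3C, H, W) notation above.

```python
import torch
import torch.nn as nn

def expand_pillar_features(pillars, max_points, plane_axes=(0, 1)):
    """pillars: list of (n_i, 4) tensors holding (x, y, z, r) for one non-empty pillar.
    plane_axes selects the sampling plane: (0, 1) x-y, (0, 2) x-z, (1, 2) y-z.
    Returns a dense (D=9, P, N) tensor with N = max_points."""
    feats = []
    for pts in pillars:
        center = pts[:, :3].mean(dim=0)                                 # (x_c, y_c, z_c)
        offsets = pts[:, list(plane_axes)] - center[list(plane_axes)]   # in-plane offsets from center
        f = torch.cat([pts, center.expand(len(pts), 3), offsets], dim=1)[:max_points]  # (<=N, 9)
        pad = torch.zeros(max_points - len(f), 9)
        feats.append(torch.cat([f, pad], dim=0))                        # (N, 9)
    return torch.stack(feats).permute(2, 0, 1)                          # (D=9, P, N)

class MultiPlaneFusionEncoder(nn.Module):
    """A shared PointNet-style MLP lifts each plane's (D, P, N) to (C, P, N); the three
    planes are stacked to (3C, P, N), max-pooled over points to (3C, P), and scattered
    back onto the x-y grid as the (3C, H, W) sparse pseudo image."""

    def __init__(self, c=64):
        super().__init__()
        self.pointnet = nn.Sequential(nn.Linear(9, c), nn.ReLU())

    def forward(self, feats_xy, feats_xz, feats_yz, xy_grid_coords, h, w):
        planes = []
        for d_p_n in (feats_xy, feats_xz, feats_yz):
            planes.append(self.pointnet(d_p_n.permute(1, 2, 0)).permute(2, 0, 1))  # (C, P, N)
        fused = torch.cat(planes, dim=0)              # (3C, P, N)
        pooled = fused.max(dim=2).values              # (3C, P)
        pseudo = torch.zeros(pooled.shape[0], h, w)
        pseudo[:, xy_grid_coords[:, 0], xy_grid_coords[:, 1]] = pooled  # scatter pillars to grid cells
        return pseudo                                 # (3C, H, W)
```

In practice the x-z and y-z planes generally yield different pillar sets than the x-y plane, so an alignment or resampling step would be needed before the per-plane features are stacked; the sketch glosses over that detail.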
According to the above construction method, preferably, the sizes of the x-z plane and the y-z plane are the same as that of the x-y plane.
According to the construction method, preferably, in step (2) a Backbone network is adopted to extract features from the sparse pseudo image; a convolution kernel traverses the whole sparse pseudo image from left to right and from top to bottom, and the dimensions of the feature map output after each convolution layer are:
W2 = (W1 - F + 2P)/S + 1    (I)
H2 = (H1 - F + 2P)/S + 1    (II)
D2 = K    (III)
wherein W1, H1, D1 are the width, height and depth of the feature map input to the convolution layer; W2, H2, D2 are respectively the width, height and depth of the output feature map after convolution; K is the number of convolution kernels; F is the convolution kernel size of the convolution layer; P is the amount of zero padding of the convolution layer's input feature map; and S is the stride.
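A small helper applying formulas (I) to (III); the layer parameters in the example call are illustrative values only, not ones fixed by the patent.

```python
def conv_output_shape(w1, h1, k, f, p, s):
    """Apply formulas (I)-(III): output width/height from input size, kernel size F,
    zero padding P and stride S; output depth equals the number of kernels K."""
    w2 = (w1 - f + 2 * p) // s + 1   # (I)
    h2 = (h1 - f + 2 * p) // s + 1   # (II)
    d2 = k                           # (III)
    return w2, h2, d2

# Example (illustrative values): a 3x3 kernel with stride 2 and padding 1
# halves a 496x432 pseudo image and sets the output depth to 64 channels.
print(conv_output_shape(432, 496, k=64, f=3, p=1, s=2))  # (216, 248, 64)
```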
According to the above construction method, preferably, the specific operation of step (3) is:
(3a) inputting the convolution feature map of the sparse pseudo image into the Detection Head network, and finding the center coordinate of each position feature of the convolution feature map on the x-y sampling plane according to the mapping relation of the receptive field; setting 3D preset boxes at the center coordinates in the x-y sampling plane, with two 3D preset boxes of different angles at each center coordinate, the size of each 3D preset box being the same as the average size of the labeled detection-target bounding boxes in the training samples; then projecting the 3D preset boxes and the labeled detection-target bounding boxes onto the x-y plane and calculating their IoU, comparing the calculated IoU with a set threshold, and screening out 3D candidate boxes from the 3D preset boxes; a 3D preset box whose IoU is greater than the set threshold becomes a 3D candidate box, and the initial position coordinates of the 3D candidate box are (G_x, G_y, G_z, G_w, G_h, G_l, G_θ) (a code sketch of this screening is given after step (3b) below);
(3b) performing box regression on the 3D candidate boxes screened out in step (3a) to obtain the coordinate-correction offsets (d_x, d_y, d_z, d_w, d_h, d_l, d_θ) of each 3D candidate box, calculating the position coordinates (R_x, R_y, R_z, R_w, R_h, R_l, R_θ) of the predicted bounding box of the detection target from the initial position coordinates of the 3D candidate box and the coordinate-correction offsets obtained by box regression, outputting them, and simultaneously outputting the probability that the detection target exists in the predicted bounding box.
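The screening in step (3a) can be illustrated as follows; this is a simplified sketch that treats the x-y projections as axis-aligned when computing IoU (the patent's preset boxes are oriented at 0 or 90 degrees), and the threshold value is illustrative.

```python
def bev_iou(box_a, box_b):
    """Axis-aligned IoU of two boxes projected onto the x-y plane.
    Each box is (x, y, w, l): center and size of the projection."""
    ax0, ax1 = box_a[0] - box_a[2] / 2, box_a[0] + box_a[2] / 2
    ay0, ay1 = box_a[1] - box_a[3] / 2, box_a[1] + box_a[3] / 2
    bx0, bx1 = box_b[0] - box_b[2] / 2, box_b[0] + box_b[2] / 2
    by0, by1 = box_b[1] - box_b[3] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def screen_candidates(preset_boxes, gt_boxes, iou_threshold=0.6):
    """Keep the 3D preset boxes whose x-y projection overlaps some labeled
    box with IoU above the threshold (threshold value is illustrative)."""
    return [p for p in preset_boxes
            if any(bev_iou(p, g) > iou_threshold for g in gt_boxes)]
```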
According to the above construction method, preferably, the angles of the two 3D preset boxes set at each center coordinate in step (3a) are 0 degrees and 90 degrees, respectively.
According to the above construction method, preferably, the position coordinates of the predicted bounding box of the detection target in step (3b) are calculated as follows: from the initial position coordinates of the 3D candidate box and the coordinate-correction offsets of the candidate box output by the box-regression network, the position coordinates of the predicted bounding box are calculated according to formulas (IV) to (X);
R_x = G_x × d_x + G_x    (IV)
R_y = G_y × d_y + G_y    (V)
R_z = G_z × d_z + G_z    (VI)
R_w = G_w × e^(d_w)    (VII)
R_h = G_h × e^(d_h)    (VIII)
R_l = G_l × e^(d_l)    (IX)
R_θ = G_θ × d_θ + G_θ    (X)
wherein G_x is the abscissa of the 3D candidate box center, G_y is the ordinate of the 3D candidate box center, G_z is the z-coordinate of the 3D candidate box center, G_w is the width of the 3D candidate box, G_h its height, G_l its length, and G_θ its angle; d_x, d_y and d_z are the offsets of the abscissa, ordinate and z-coordinate of the 3D candidate box center; d_w, d_h and d_l are the offsets of the width, height and length of the 3D candidate box; d_θ is the offset of the 3D candidate box angle; R_x is the abscissa of the predicted bounding box center, R_y its ordinate, R_z its z-coordinate, R_w the width of the predicted bounding box, R_h its height, R_l its length, and R_θ its angle.
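Formulas (IV) to (X) can be applied as in the following sketch, assuming candidate boxes and offsets are stored as 7-element arrays ordered (x, y, z, w, h, l, θ):

```python
import numpy as np

def decode_box(g, d):
    """g: 3D candidate box (G_x, G_y, G_z, G_w, G_h, G_l, G_theta);
    d: regressed offsets (d_x, d_y, d_z, d_w, d_h, d_l, d_theta).
    Returns the predicted bounding box per formulas (IV)-(X)."""
    gx, gy, gz, gw, gh, gl, gt = g
    dx, dy, dz, dw, dh, dl, dt = d
    return np.array([
        gx * dx + gx,        # (IV)
        gy * dy + gy,        # (V)
        gz * dz + gz,        # (VI)
        gw * np.exp(dw),     # (VII)
        gh * np.exp(dh),     # (VIII)
        gl * np.exp(dl),     # (IX)
        gt * dt + gt,        # (X)
    ])
```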
According to the above construction method, preferably, the specific operation of obtaining the training sample is: collecting point cloud data containing detection targets, drawing the bounding boxes of all detection targets in the point cloud data with a labeling tool, labeling the position coordinates (x, y, z, w, l, h, θ) of each bounding box in space and the classification label of the detection target within each bounding box, and taking the labeled point cloud data as a training sample; wherein x is the x-axis coordinate of the bounding-box center, y is its y-axis coordinate, z is its z-axis coordinate, w is the width of the bounding box, l its length, h its height, and θ is the angle of the projection of the bounding box onto the x-y plane. More preferably, the detection target is any one of a vehicle, a pedestrian or a bicycle.
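For illustration only, one labeled training sample could be represented as below; the field and file names are assumptions, not fixed by the patent.

```python
from dataclasses import dataclass

@dataclass
class LabeledBox:
    """One annotated detection target: box center, size, projection angle, and class label."""
    x: float       # x-axis coordinate of the bounding-box center
    y: float       # y-axis coordinate of the bounding-box center
    z: float       # z-axis coordinate of the bounding-box center
    w: float       # width of the bounding box
    l: float       # length of the bounding box
    h: float       # height of the bounding box
    theta: float   # angle of the box projected onto the x-y plane
    label: str     # e.g. "vehicle", "pedestrian", "bicycle"

sample = {
    "points": "path/to/pointcloud.bin",   # hypothetical point cloud file reference
    "boxes": [LabeledBox(12.4, -3.1, -0.8, 1.8, 4.3, 1.6, 0.2, "vehicle")],
}
```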
The invention also provides a method for detecting targets in point cloud data by using the multi-plane coding point cloud feature deep learning model constructed by the above construction method.
Compared with the prior art, the invention has the following positive beneficial effects:
In existing point cloud target detection methods, when Pillars sampling is performed on spatial point cloud data, the data are sampled only on the x-y plane; the pillars obtained by sampling only the x-y plane do not contain the complete spatial information of the point cloud, and when these pillars are subsequently detected and analyzed, spatial information is lost, so the accuracy and precision of target detection are low. The invention constructs a new PointPillars-based multi-plane coding point cloud feature deep learning model that samples the x-y, x-z and y-z planes of the spatial point cloud respectively to obtain the point features inside the pillars of the three planes, learns those features with PointNet, fuses the learned pillar features of the three planes, and then analyzes and detects the fused point cloud features. Therefore, the PointPillars-based multi-plane coding point cloud feature deep learning model realizes three-dimensional spatial sampling of point cloud data and learns and fuses the pillar features sampled in the three planes, which solves the loss of spatial information in existing point cloud sampling, strengthens the acquisition of point cloud feature information in all directions of the whole space, and better reduces the loss of detection precision caused by the different orientations of the point cloud in space; by fusing the pillar features extracted on three planes, the shape and position characteristics of objects in different directions are better captured, thereby improving the robustness and detection accuracy of the detection model.
Drawings
FIG. 1 is a flow chart of the training process of the multi-plane coding point cloud feature deep learning model based on PointPillars.
Fig. 2 is a schematic structural diagram of the Backbone network.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the scope of the present invention is not limited thereto.
Example 1:
a construction method of a multi-plane coding point cloud feature deep learning model based on pointpilars comprises the following steps:
the method comprises the following steps: and acquiring a training sample, wherein the training sample comprises point cloud data containing a detection target and marking information corresponding to the point cloud data, and the marking information is used for indicating a boundary box coordinate of the detection target in the point cloud data and a classification label of the detection target in the boundary box coordinate.
Step two: training the multi-plane coding point cloud feature deep learning model with the training sample, so that when the point cloud data in the training sample are input into the trained model, a recognition result is obtained, namely the bounding-box coordinates of the detection target in the point cloud data and the probability that a target exists within the bounding-box coordinates.
The specific operation of obtaining the training sample in the first step is as follows:
collecting point cloud data containing detection targets (here the detection targets are vehicles), drawing the bounding boxes of all detection targets in the point cloud data with a labeling tool, labeling the position coordinates (x, y, z, w, l, h, θ) of each bounding box in space and the classification label of the detection target within each bounding box, and then taking the labeled point cloud data as training samples; wherein x is the x-axis coordinate of the bounding-box center, y is its y-axis coordinate, z is its z-axis coordinate, w is the width of the bounding box, l its length, h its height, and θ is the angle of the projection of the bounding box onto the x-y plane.
In step two, the multi-plane coding point cloud feature deep learning model is an improvement based on the PointPillars algorithm; the specific improvement is as follows: a multi-plane fusion feature encoding network replaces the feature encoder network in the PointPillars algorithm. The multi-plane coding point cloud feature deep learning model consists of the multi-plane fusion feature encoding network, a Backbone network and a Detection Head network, wherein the Backbone and Detection Head networks are the original Backbone and Detection Head networks of the PointPillars algorithm, with their structures unchanged. The input of the multi-plane fusion feature encoding network is point cloud data, and its output is a sparse pseudo image converted from the fused point cloud features; the input of the Backbone network is the sparse pseudo image, and its output is the convolution feature map of the sparse pseudo image; the input of the Detection Head network is the convolution feature map output by the Backbone network, and its output is the predicted bounding-box coordinates of the detection target in the point cloud data and the probability that a target exists in the predicted bounding box.
In the second step, the specific steps of training the multi-plane coding point cloud feature deep learning model by using the training samples are as follows (as shown in fig. 1):
(1) Inputting a training sample into the multi-plane fusion feature encoding network, which fuses and encodes the features of the point cloud data on the x-y, x-z and y-z planes of the training sample to obtain the fused point cloud features of the x-y plane and converts them into a sparse pseudo image.
The specific operation of the step (1) is as follows:
(1a) discretizing the point cloud data in the training sample onto a uniformly spaced grid on the x-y plane, with no limit in the z direction, thereby creating a series of pillars on the x-y plane; the points contained in each pillar are expanded with the features r, x_c, y_c, z_c, x_p, y_p to obtain the expanded point cloud feature (x, y, z, r, x_c, y_c, z_c, x_p, y_p), so that the expanded feature dimension D is 9; wherein x, y, z are the initial coordinates of a point; r is the point's reflectance; x_c, y_c, z_c are the arithmetic mean of the coordinates of all points in the pillar; and x_p, y_p are the offsets of each point from the pillar's coordinate center in the coordinate system of the current plane;
(1b) on the x-y plane, adjusting the number of points contained in all non-empty pillars to be the same, then creating a dense tensor (D, P, N) from the number of non-empty pillars on the plane, the number of points contained in each non-empty pillar and the features of those points, giving the features (D, P, N) of each non-empty pillar on the x-y plane, wherein D is the feature dimension of the points in a non-empty pillar, P is the number of non-empty pillars on the x-y plane, and N is the number of points contained in a non-empty pillar;
(1c) performing feature learning on the features (D, P, N) of the non-empty pillars on the x-y plane with a PointNet network to obtain the final features (C, P, N) of the points in each non-empty pillar on the x-y plane; wherein C is the new feature dimension obtained after the points are learned by the PointNet network, P is the number of non-empty pillars on the x-y plane, and N is the number of points contained in a non-empty pillar;
(1d) likewise discretizing the point cloud data in the training sample onto a uniformly spaced grid on the x-z plane, with no limit in the y direction and the x-z plane having the same size as the x-y plane, creating a series of pillars on the x-z plane; the points contained in each pillar are expanded with the features r, x_c, y_c, z_c, x_p, z_p to obtain the expanded point cloud feature (x, y, z, r, x_c, y_c, z_c, x_p, z_p), so that the expanded feature dimension D is 9; wherein x, y, z are the initial coordinates of a point; r is the point's reflectance; x_c, y_c, z_c are the arithmetic mean of the coordinates of all points in the pillar; and x_p, z_p are the offsets of each point from the pillar's coordinate center in the coordinate system of the current plane; the final point cloud features (C, P, N) of each non-empty pillar on the x-z plane are then obtained following the operations of steps (1b) to (1c);
(1e) likewise discretizing the point cloud data in the training sample onto a uniformly spaced grid on the y-z plane, with no limit in the x direction and the y-z plane having the same size as the x-y plane, creating a series of pillars on the y-z plane; the points contained in each pillar are expanded with the features r, x_c, y_c, z_c, y_p, z_p to obtain the expanded point cloud feature (x, y, z, r, x_c, y_c, z_c, y_p, z_p), so that the expanded feature dimension D is 9; wherein x, y, z are the initial coordinates of a point; r is the point's reflectance; x_c, y_c, z_c are the arithmetic mean of the coordinates of all points in the pillar; and y_p, z_p are the offsets of each point from the pillar's coordinate center in the coordinate system of the current plane; the final point cloud features (C, P, N) of each non-empty pillar on the y-z plane are then obtained following the operations of steps (1b) to (1c);
(1f) superposing the final point features of all non-empty pillars on the x-z and y-z planes with the final point features of all non-empty pillars on the x-y plane to obtain the fused point cloud features (3C, P, N) on the x-y plane; the fused features (3C, P, N) are processed with a max-pooling operation to obtain the tensor (3C, P), from which the sparse pseudo image (3C, H, W) is created, wherein H is the height of the sparse pseudo image and W is its width.
(2) Inputting the sparse pseudo image into the Backbone network for feature extraction to obtain the convolution feature map of the sparse pseudo image. The Backbone network is the original Backbone network of the PointPillars algorithm (shown in fig. 2), a network structure known to those skilled in the art.
In step (2), the Backbone network is adopted to extract features from the sparse pseudo image; a convolution kernel traverses the whole sparse pseudo image from left to right and from top to bottom, and the dimensions of the feature map output after each convolution layer are:
W2 = (W1 - F + 2P)/S + 1    (I)
H2 = (H1 - F + 2P)/S + 1    (II)
D2 = K    (III)
wherein W1, H1, D1 are the width, height and depth of the feature map input to the convolution layer; W2, H2, D2 are respectively the width, height and depth of the output feature map after convolution; K is the number of convolution kernels; F is the convolution kernel size of the convolution layer; P is the amount of zero padding of the convolution layer's input feature map; and S is the stride.
(3) Inputting the convolution feature map of the sparse pseudo image into the Detection Head network to obtain the coordinates of the predicted bounding box of the detected target in the point cloud data and the probability that the target exists in the predicted bounding box. The Detection Head network is the original Detection Head network of the PointPillars algorithm, a network structure known to those skilled in the art.
The specific operation of the step (3) is as follows:
(3a) Inputting the convolution feature map of the sparse pseudo image into the Detection Head network, and finding the center coordinate of each position feature of the convolution feature map on the x-y sampling plane according to the mapping relation of the receptive field. 3D preset boxes are set at the center coordinates in the x-y sampling plane, with two 3D preset boxes of different angles (0 degrees and 90 degrees) at each center coordinate, and the size of each 3D preset box is the same as the average size of the labeled detection-target bounding boxes in the training samples. The 3D preset boxes and the labeled detection-target bounding boxes are then projected onto the x-y plane and their IoU is calculated; the calculated IoU is compared with a set threshold, and 3D candidate boxes are screened out from the 3D preset boxes, a 3D preset box whose IoU is greater than the set threshold being a 3D candidate box.
(3b) Performing box regression on the 3D candidate boxes screened out in step (3a) to obtain the coordinate-correction offsets (d_x, d_y, d_z, d_w, d_h, d_l, d_θ) of each 3D candidate box, calculating the position coordinates (R_x, R_y, R_z, R_w, R_h, R_l, R_θ) of the predicted bounding box of the detection target from the initial position coordinates of the 3D candidate box and the coordinate-correction offsets obtained by box regression according to formulas (IV) to (X), outputting them, and simultaneously outputting the probability that the detection target exists in the predicted bounding box.
R_x = G_x × d_x + G_x    (IV)
R_y = G_y × d_y + G_y    (V)
R_z = G_z × d_z + G_z    (VI)
R_w = G_w × e^(d_w)    (VII)
R_h = G_h × e^(d_h)    (VIII)
R_l = G_l × e^(d_l)    (IX)
R_θ = G_θ × d_θ + G_θ    (X)
wherein G_x is the abscissa of the 3D candidate box center, G_y is the ordinate of the 3D candidate box center, G_z is the z-coordinate of the 3D candidate box center, G_w is the width of the 3D candidate box, G_h its height, G_l its length, and G_θ its angle; d_x, d_y and d_z are the offsets of the abscissa, ordinate and z-coordinate of the 3D candidate box center; d_w, d_h and d_l are the offsets of the width, height and length of the 3D candidate box; d_θ is the offset of the 3D candidate box angle; R_x is the abscissa of the predicted bounding box center, R_y its ordinate, R_z its z-coordinate, R_w the width of the predicted bounding box, R_h its height, R_l its length, and R_θ its angle.
(4) Taking the predicted bounding-box coordinates obtained in step (3) as the predicted result and the bounding-box coordinates labeled in the training sample as the ground truth, constructing a loss function from the predicted and true results (a cross-entropy loss function, well known in the field, is adopted), optimizing the network parameters of the multi-plane coding point cloud feature deep learning model by stochastic gradient descent to reduce the value of the loss function, and iterating this process to optimize the network parameters until the loss function stops decreasing; the training of the multi-plane coding point cloud feature deep learning model is then finished, and the trained multi-plane coding point cloud feature deep learning model is obtained.
Example 2:
A method for detecting targets in point cloud data using the PointPillars-based multi-plane coding point cloud feature deep learning model constructed in embodiment 1: the collected point cloud data are input into the multi-plane coding point cloud feature deep learning model for calculation, and the model finally outputs the bounding-box coordinates of the detection target in the point cloud data and the probability that the detection target exists within those bounding-box coordinates.
The above description covers only preferred embodiments of the present invention and is not to be construed as limiting it; all modifications, equivalents and improvements falling within the spirit and scope of the present invention are intended to be covered.
Claims (10)
1. A construction method of a multi-plane coding point cloud feature deep learning model based on PointPillars, characterized by comprising the following steps: acquiring a training sample, wherein the training sample comprises point cloud data containing a detection target and labeling information corresponding to the point cloud data, the labeling information indicating the bounding-box coordinates of the detection target in the point cloud data and the classification label of the detection target within those bounding-box coordinates; and training the multi-plane coding point cloud feature deep learning model with the training sample, so that when the point cloud data in the training sample are input into the trained model, a recognition result is obtained, namely the bounding-box coordinates of the detection target in the point cloud data and the probability that a target exists within the bounding-box coordinates.
2. The construction method according to claim 1, wherein the multi-plane coding point cloud feature deep learning model is an improvement based on the PointPillars algorithm; the specific improvement is as follows: a multi-plane fusion feature encoding network replaces the feature encoder network in the PointPillars algorithm; the multi-plane coding point cloud feature deep learning model consists of the multi-plane fusion feature encoding network, a Backbone network and a Detection Head network.
3. The construction method according to claim 2, wherein the specific steps of training the multi-plane coding point cloud feature deep learning model by using the training samples are as follows:
(1) inputting a training sample into the multi-plane fusion feature encoding network, which fuses and encodes the features of the point cloud data on the x-y, x-z and y-z planes of the training sample to obtain the fused point cloud features of the x-y plane and converts them into a sparse pseudo image;
(2) inputting the sparse pseudo image into the Backbone network for feature extraction to obtain the convolution feature map of the sparse pseudo image;
(3) inputting the convolution feature map of the sparse pseudo image into the Detection Head network to obtain the coordinates of the predicted bounding box of the detected target in the point cloud data and the probability that the target exists in the predicted bounding box;
(4) taking the predicted bounding-box coordinates obtained in step (3) as the predicted result and the bounding-box coordinates labeled in the training sample as the ground truth, constructing a loss function from the predicted and true results (a cross-entropy loss function is adopted), optimizing the network parameters of the multi-plane coding point cloud feature deep learning model by stochastic gradient descent to reduce the value of the loss function, and iterating this process to optimize the network parameters until the loss function stops decreasing; the training of the multi-plane coding point cloud feature deep learning model is then finished, and the trained multi-plane coding point cloud feature deep learning model is obtained.
4. The construction method of the multi-plane coding point cloud feature deep learning model based on PointPillars as claimed in claim 3, wherein the specific operation of step (1) is as follows:
(1a) discretizing the point cloud data in the training sample onto a uniformly spaced grid on the x-y plane, with no limit in the z direction, thereby creating a series of pillars on the x-y plane; the points contained in each pillar are expanded with the features r, x_c, y_c, z_c, x_p, y_p to obtain the expanded point cloud feature (x, y, z, r, x_c, y_c, z_c, x_p, y_p), so that the expanded feature dimension D is 9; wherein x, y, z are the initial coordinates of a point; r is the point's reflectance; x_c, y_c, z_c are the arithmetic mean of the coordinates of all points in the pillar; and x_p, y_p are the offsets of each point from the pillar's coordinate center in the coordinate system of the current plane;
(1b) on the x-y plane, adjusting the number of points contained in all non-empty pillars to be the same, then creating a dense tensor (D, P, N) from the number of non-empty pillars on the plane, the number of points contained in each non-empty pillar and the features of those points, giving the features (D, P, N) of each non-empty pillar on the x-y plane, wherein D is the feature dimension of the points in a non-empty pillar, P is the number of non-empty pillars on the x-y plane, and N is the number of points contained in a non-empty pillar;
(1c) performing feature learning on the features (D, P, N) of the non-empty pillars on the x-y plane with a PointNet network to obtain the final features (C, P, N) of the points in each non-empty pillar on the x-y plane; wherein C is the new feature dimension obtained after the points are learned by the PointNet network, P is the number of non-empty pillars on the x-y plane, and N is the number of points contained in a non-empty pillar;
(1d) likewise discretizing the point cloud data in the training sample onto a uniformly spaced grid on the x-z plane, with no limit in the y direction, creating a series of pillars on the x-z plane; the points contained in each pillar are expanded with the features r, x_c, y_c, z_c, x_p, z_p to obtain the expanded point cloud feature (x, y, z, r, x_c, y_c, z_c, x_p, z_p); wherein x, y, z are the initial coordinates of a point; r is the point's reflectance; x_c, y_c, z_c are the arithmetic mean of the coordinates of all points in the pillar; and x_p, z_p are the offsets of each point from the pillar's coordinate center in the coordinate system of the current plane; the final point cloud features (C, P, N) of each non-empty pillar on the x-z plane are then obtained following the operations of steps (1b) to (1c);
(1e) likewise discretizing the point cloud data in the training sample onto a uniformly spaced grid on the y-z plane, with no limit in the x direction, creating a series of pillars on the y-z plane; the points contained in each pillar are expanded with the features r, x_c, y_c, z_c, y_p, z_p to obtain the expanded point cloud feature (x, y, z, r, x_c, y_c, z_c, y_p, z_p); wherein x, y, z are the initial coordinates of a point; r is the point's reflectance; x_c, y_c, z_c are the arithmetic mean of the coordinates of all points in the pillar; and y_p, z_p are the offsets of each point from the pillar's coordinate center in the coordinate system of the current plane; the final point cloud features (C, P, N) of each non-empty pillar on the y-z plane are then obtained following the operations of steps (1b) to (1c);
(1f) superposing the final point features of all non-empty pillars on the x-z and y-z planes with the final point features of all non-empty pillars on the x-y plane to obtain the fused point cloud features (3C, P, N) on the x-y plane; the fused features (3C, P, N) are processed with a max-pooling operation to obtain the tensor (3C, P), from which the sparse pseudo image (3C, H, W) is created, wherein H is the height of the sparse pseudo image and W is its width.
5. The method of claim 4, wherein the x-z plane and the y-z plane are the same size as the x-y plane.
6. The construction method according to claim 3, wherein in step (2) a Backbone network is adopted to extract features from the sparse pseudo image; a convolution kernel traverses the whole sparse pseudo image from left to right and from top to bottom, and the dimensions of the feature map output after each convolution layer are:
W2 = (W1 - F + 2P)/S + 1    (I)
H2 = (H1 - F + 2P)/S + 1    (II)
D2 = K    (III)
wherein W1, H1, D1 are the width, height and depth of the feature map input to the convolution layer; W2, H2, D2 are respectively the width, height and depth of the output feature map after convolution; K is the number of convolution kernels; F is the convolution kernel size of the convolution layer; P is the amount of zero padding of the convolution layer's input feature map; and S is the stride.
7. The construction method according to any one of claims 3 to 6, wherein the specific operation of step (3) is:
(3a) inputting the convolution feature map of the sparse pseudo image into the Detection Head network, and finding the center coordinate of each position feature of the convolution feature map on the x-y sampling plane according to the mapping relation of the receptive field; setting 3D preset boxes at the center coordinates in the x-y sampling plane, with two 3D preset boxes of different angles at each center coordinate, the size of each 3D preset box being the same as the average size of the labeled detection-target bounding boxes in the training samples; then projecting the 3D preset boxes and the labeled detection-target bounding boxes onto the x-y plane and calculating their IoU, comparing the calculated IoU with a set threshold, and screening out 3D candidate boxes from the 3D preset boxes, a 3D preset box whose IoU is greater than the set threshold being a 3D candidate box;
(3b) performing box regression on the 3D candidate boxes screened out in step (3a) to obtain the coordinate-correction offsets of each 3D candidate box, calculating the position coordinates of the predicted bounding box of the detection target from the initial position coordinates of the 3D candidate box and the coordinate-correction offsets obtained by box regression, outputting the position coordinates of the predicted bounding box of the detection target, and simultaneously outputting the probability that the detection target exists in the predicted bounding box.
8. The construction method according to claim 7, wherein the angles of the two 3D preset boxes set at each center coordinate in step (3a) are 0 degrees and 90 degrees, respectively.
9. The construction method according to claim 7, wherein the position coordinates of the predicted bounding box of the detection target in step (3b) are calculated as follows: from the initial position coordinates of the 3D candidate box and the coordinate-correction offsets of the candidate box output by the box-regression network, the position coordinates of the predicted bounding box are calculated according to formulas (IV) to (X);
R_x = G_x × d_x + G_x    (IV)
R_y = G_y × d_y + G_y    (V)
R_z = G_z × d_z + G_z    (VI)
R_w = G_w × e^(d_w)    (VII)
R_h = G_h × e^(d_h)    (VIII)
R_l = G_l × e^(d_l)    (IX)
R_θ = G_θ × d_θ + G_θ    (X)
wherein G_x is the abscissa of the 3D candidate box center, G_y is the ordinate of the 3D candidate box center, G_z is the z-coordinate of the 3D candidate box center, G_w is the width of the 3D candidate box, G_h its height, G_l its length, and G_θ its angle; d_x, d_y and d_z are the offsets of the abscissa, ordinate and z-coordinate of the 3D candidate box center; d_w, d_h and d_l are the offsets of the width, height and length of the 3D candidate box; d_θ is the offset of the 3D candidate box angle; R_x is the abscissa of the predicted bounding box center, R_y its ordinate, R_z its z-coordinate, R_w the width of the predicted bounding box, R_h its height, R_l its length, and R_θ its angle.
10. A method for detecting a point cloud data target by using the PointPillars-based multi-plane coding point cloud feature deep learning model constructed by the construction method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010425656.3A CN111612059B (en) | 2020-05-19 | 2020-05-19 | Construction method of multi-plane coding point cloud feature deep learning model based on pointpilars |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111612059A true CN111612059A (en) | 2020-09-01 |
CN111612059B CN111612059B (en) | 2022-10-21 |
Family ID: 72204944
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010425656.3A Active CN111612059B (en) | 2020-05-19 | 2020-05-19 | Construction method of multi-plane coding point cloud feature deep learning model based on pointpilars |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111612059B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108171217A (en) * | 2018-01-29 | 2018-06-15 | 深圳市唯特视科技有限公司 | A kind of three-dimension object detection method based on converged network |
US20200134372A1 (en) * | 2018-10-26 | 2020-04-30 | Volvo Car Corporation | Methods and systems for the fast estimation of three-dimensional bounding boxes and drivable surfaces using lidar point clouds |
CN110060288A (en) * | 2019-03-15 | 2019-07-26 | 华为技术有限公司 | Generation method, device and the storage medium of point cloud characteristic pattern |
CN110111328A (en) * | 2019-05-16 | 2019-08-09 | 上海中认尚科新能源技术有限公司 | A kind of blade crack of wind driven generator detection method based on convolutional neural networks |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112200129B (en) * | 2020-10-28 | 2024-07-26 | 中国人民解放军陆军航空兵学院陆军航空兵研究所 | Three-dimensional target detection method and device based on deep learning and terminal equipment |
CN112200129A (en) * | 2020-10-28 | 2021-01-08 | 中国人民解放军陆军航空兵学院陆军航空兵研究所 | Three-dimensional target detection method and device based on deep learning and terminal equipment |
CN112418084B (en) * | 2020-11-23 | 2022-12-16 | 同济大学 | Three-dimensional target detection method based on point cloud time sequence information fusion |
CN112418084A (en) * | 2020-11-23 | 2021-02-26 | 同济大学 | Three-dimensional target detection method based on point cloud time sequence information fusion |
CN112613378A (en) * | 2020-12-17 | 2021-04-06 | 上海交通大学 | 3D target detection method, system, medium and terminal |
CN112613378B (en) * | 2020-12-17 | 2023-03-28 | 上海交通大学 | 3D target detection method, system, medium and terminal |
CN112668469A (en) * | 2020-12-28 | 2021-04-16 | 西安电子科技大学 | Multi-target detection and identification method based on deep learning |
CN112883789A (en) * | 2021-01-15 | 2021-06-01 | 福建电子口岸股份有限公司 | Bowling prevention method and system based on laser vision fusion and deep learning |
US11532151B2 (en) | 2021-05-10 | 2022-12-20 | Tsinghua University | Vision-LiDAR fusion method and system based on deep canonical correlation analysis |
CN113111974B (en) * | 2021-05-10 | 2021-12-14 | 清华大学 | Vision-laser radar fusion method and system based on depth canonical correlation analysis |
CN113111974A (en) * | 2021-05-10 | 2021-07-13 | 清华大学 | Vision-laser radar fusion method and system based on depth canonical correlation analysis |
CN114397877A (en) * | 2021-06-25 | 2022-04-26 | 南京交通职业技术学院 | Intelligent automobile automatic driving system |
CN113421305A (en) * | 2021-06-29 | 2021-09-21 | 上海高德威智能交通系统有限公司 | Target detection method, device, system, electronic equipment and storage medium |
CN113903029A (en) * | 2021-12-10 | 2022-01-07 | 智道网联科技(北京)有限公司 | Method and device for marking 3D frame in point cloud data |
CN114005110A (en) * | 2021-12-30 | 2022-02-01 | 智道网联科技(北京)有限公司 | 3D detection model training method and device, and 3D detection method and device |
WO2024007268A1 (en) * | 2022-07-07 | 2024-01-11 | Oppo广东移动通信有限公司 | Point cloud encoding method, point clod decoding method, codec, and computer storage medium |
CN115131619A (en) * | 2022-08-26 | 2022-09-30 | 北京江河惠远科技有限公司 | Extra-high voltage part sorting method and system based on point cloud and image fusion |
CN115131619B (en) * | 2022-08-26 | 2022-11-22 | 北京江河惠远科技有限公司 | Extra-high voltage part sorting method and system based on point cloud and image fusion |
CN115147834A (en) * | 2022-09-06 | 2022-10-04 | 南京航空航天大学 | Aircraft stringer plane feature extraction method, device and equipment based on point cloud |
CN115147834B (en) * | 2022-09-06 | 2023-05-05 | 南京航空航天大学 | Point cloud-based plane feature extraction method, device and equipment for airplane stringer |
CN116863433B (en) * | 2023-09-04 | 2024-01-09 | 深圳大学 | Target detection method based on point cloud sampling and weighted fusion and related equipment |
CN116863433A (en) * | 2023-09-04 | 2023-10-10 | 深圳大学 | Target detection method based on point cloud sampling and weighted fusion and related equipment |
Also Published As
Publication number | Publication date |
---|---|
CN111612059B (en) | 2022-10-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111612059B (en) | Construction method of multi-plane coding point cloud feature deep learning model based on pointpilars | |
CN111798475B (en) | Indoor environment 3D semantic map construction method based on point cloud deep learning | |
CN111815776B (en) | Fine geometric reconstruction method for three-dimensional building integrating airborne and vehicle-mounted three-dimensional laser point clouds and street view images | |
CN111563442B (en) | Slam method and system for fusing point cloud and camera image data based on laser radar | |
US11971726B2 (en) | Method of constructing indoor two-dimensional semantic map with wall corner as critical feature based on robot platform | |
CN109598794B (en) | Construction method of three-dimensional GIS dynamic model | |
Benenson et al. | Stixels estimation without depth map computation | |
US20170116781A1 (en) | 3d scene rendering | |
CN108389256B (en) | Two-three-dimensional interactive unmanned aerial vehicle electric power tower inspection auxiliary method | |
CN110097553A (en) | The semanteme for building figure and three-dimensional semantic segmentation based on instant positioning builds drawing system | |
CN113516664A (en) | Visual SLAM method based on semantic segmentation dynamic points | |
CN107093205A (en) | A kind of three dimensions building window detection method for reconstructing based on unmanned plane image | |
CN113139453A (en) | Orthoimage high-rise building base vector extraction method based on deep learning | |
Wang et al. | Window detection from mobile LiDAR data | |
CN111880191B (en) | Map generation method based on multi-agent laser radar and visual information fusion | |
CN111402632B (en) | Risk prediction method for pedestrian movement track at intersection | |
CN112818925A (en) | Urban building and crown identification method | |
Zelener et al. | Cnn-based object segmentation in urban lidar with missing points | |
CN117949942B (en) | Target tracking method and system based on fusion of radar data and video data | |
CN114049572A (en) | Detection method for identifying small target | |
CN113920254B (en) | Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof | |
CN116573017A (en) | Urban rail train running clearance foreign matter sensing method, system, device and medium | |
CN114943870A (en) | Training method and device of line feature extraction model and point cloud matching method and device | |
Nagy et al. | 3D CNN based phantom object removing from mobile laser scanning data | |
CN114463713A (en) | Information detection method and device of vehicle in 3D space and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||