CN112561922A - Neural network-based distorted three-dimensional point cloud segmentation method - Google Patents
Neural network-based distorted three-dimensional point cloud segmentation method
- Publication number
- CN112561922A (application CN202011344486.2A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- segmentation
- space
- point
- distorted
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000011218 segmentation Effects 0.000 title claims abstract description 62
- 238000000034 method Methods 0.000 title claims abstract description 33
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 19
- 239000011159 matrix material Substances 0.000 claims description 19
- 238000012549 training Methods 0.000 claims description 17
- 230000006870 function Effects 0.000 claims description 12
- 238000010606 normalization Methods 0.000 claims description 9
- 230000008569 process Effects 0.000 claims description 8
- 230000004913 activation Effects 0.000 claims description 6
- 238000011176 pooling Methods 0.000 claims description 5
- 238000012360 testing method Methods 0.000 claims description 4
- 238000013519 translation Methods 0.000 claims description 4
- 238000006243 chemical reaction Methods 0.000 claims description 3
- 238000012937 correction Methods 0.000 claims description 3
- 230000008447 perception Effects 0.000 claims description 3
- 230000001737 promoting effect Effects 0.000 claims description 2
- 238000010008 shearing Methods 0.000 claims 1
- 238000003062 neural network model Methods 0.000 abstract 1
- 238000012545 processing Methods 0.000 description 3
- 230000014616 translation Effects 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G06T5/80—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a neural-network-based distorted point cloud segmentation method. Using the fitting capability of a neural network model, it constructs, on the basis of the original spatial point cloud, a deep network model for vehicle-mounted distorted point cloud segmentation that comprises three modules: a point cloud lifting network module, a lifted-space point cloud normalization network module, and an encoding-prediction network module. The method can accurately and efficiently segment three-dimensional point clouds with arbitrary distortion, and belongs to the technical field of three-dimensional point cloud segmentation.
Description
Technical Field
The invention belongs to the technical field of three-dimensional point cloud segmentation, and particularly provides a novel distorted three-dimensional point cloud segmentation method based on a neural network.
Background
The vehicle-mounted laser radar has a wide detection range and a small volume, and is widely used in driverless driving. It can acquire three-dimensional point cloud data of the environment in real time and, combined with a point cloud algorithm, segment it in real time, helping the system fully understand the attributes of the scene around the vehicle so as to make corresponding decisions; it is the "eyes" of the driverless car. However, because of viewing-angle changes and jitter produced while the radar operates on a moving vehicle, the shape and pose of the acquired three-dimensional point cloud easily change, so the generated point cloud is distorted relative to the real object. Existing point cloud segmentation algorithms are therefore difficult to apply directly to the distorted point clouds produced in driverless driving, and a driverless vehicle on a bumpy road section cannot run normally.
To solve this problem, traditional point cloud segmentation methods usually rely on features of the point cloud that are unaffected by distortion, such as differences and distances between extracted point coordinates. However, these methods can only handle the rigid transformations produced by translation and rotation of the point cloud, and cannot handle the deformation of point cloud objects caused by vehicle bumps. Other methods learn a normalization matrix from the distorted point cloud to normalize the input point cloud to a uniform shape. In theory such methods can handle any distortion that a normalization matrix can express, but they place high accuracy demands on learning the matrix, so they cannot achieve real-time processing in driverless driving.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a distorted point cloud segmentation method based on a neural network, which can accurately and efficiently segment any distorted three-dimensional point cloud by utilizing the strong fitting capability of the neural network.
To achieve this purpose, the invention constructs, on the basis of the original point cloud, a deep network model for vehicle-mounted distorted point cloud segmentation that comprises three processing modules: a point cloud lifting network module, a lifted-space point cloud normalization network module, and an encoding-prediction network module, realizing accurate and efficient segmentation of the distorted three-dimensional point clouds collected by the vehicle-mounted radar.
The technical scheme of the invention is as follows (see the flow chart in figure 1):
a neural-network-based distorted point cloud segmentation method, characterized in that the method utilizes a neural network to extract rich normalization information of point clouds in different spaces and segments the distorted point cloud;
the specific implementation steps are as follows:
A. constructing a depth network model for vehicle-mounted distortion point cloud segmentation, comprising
A1) A point cloud lifting network module;
A2) a lifted-space point cloud normalization network module;
A3) a code prediction network module;
the point cloud lifting network module raises the feature dimensionality of the point cloud so that the network model can accurately learn the normalization matrix. The input of the module is a point cloud P of the original space with size 1024 × 3, composed of 1024 spatial point coordinates p_1, p_2, …, p_i, …, p_1024, where each point p_i is represented by three-dimensional spatial coordinates. The conversion of the point cloud from the original space to the lifted space is realized by a lifting function; for each point p_i = [x, y, z], the coordinates of the lifted high-dimensional point are:
ν(P) = ν(x, y, z) = [x², xy, xz, y², yz, z²]
where ν is the lifting function that raises the spatial dimension of the point cloud; at the same time, each point is represented by higher-order monomials composed of its original coordinates. The network module outputs a point cloud ν(P) of the lifted space with size 1024 × 6.
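As a concrete illustration, the lifting step can be sketched in a few lines of NumPy. The function name `lift` and the random stand-in cloud are ours, not part of the patent:

```python
import numpy as np

def lift(points):
    """Map each 3-D point [x, y, z] to the six degree-2 monomials
    [x^2, xy, xz, y^2, yz, z^2], turning an N x 3 cloud into N x 6."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([x * x, x * y, x * z, y * y, y * z, z * z], axis=1)

P = np.random.rand(1024, 3)   # stand-in for an original-space cloud
vP = lift(P)
print(vP.shape)               # (1024, 6)
```

For the point [1, 2, 3] this produces [1, 2, 3, 4, 6, 9], one entry per second-order monomial.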
Lifted-space point cloud normalization network module: in the training stage, its input is the point cloud ν(P) of the high-dimensional space with size 1024 × 6, and its output is the normalized 1024 × 6 point cloud ν′(P) of the lifted space. In the specific implementation, the input of the testing stage is the corresponding distorted point cloud data collected by the vehicle-mounted radar, and the output is the data obtained after correcting the distortion. A neural network structure is first built to extract geometric features of the point cloud from its coordinates in the lifted space, and the normalization matrix is further learned from these features. The structure of the lifted-space point cloud normalization network module is expressed as:
θ_P = Resize(Maxpool{l_t l_{t-1} … l_1(ν(P))})
where ν(P) is the high-dimensional-space point cloud of size 1024 × 6 and θ_P is the learned normalization matrix. l_t l_{t-1} … l_1, also written {l_i}, i = 1, 2, …, t, denotes a multi-layer perceptron structure with t layers (e.g., t = 5), each layer using batch normalization and an activation function. The features extracted from the point cloud by the multi-layer perceptron are input into a pooling layer Maxpool to pick out key parameters, which are finally arranged in order to obtain the 6 × 6 normalization matrix θ_P of the high-dimensional space.
Using the normalization matrix θ_P, distortion correction is applied to the high-dimensional-space point cloud, expressed as:
ν′(P) = θ_P * ν(P)
yielding the normalized 1024 × 6 high-dimensional-space point cloud ν′(P).
The encoding-prediction network module mainly encodes the normalized 1024 × 6 high-dimensional-space point cloud ν′(P) and predicts the final segmentation result from the encoded values. The input is ν′(P); the output is the final segmentation result L of size 1024 × 1, the predicted class of each point in the point cloud. The encoding-prediction module consists mainly of 4 perception layers, 1 pooling layer and 3 fully connected layers, each layer using batch normalization and an activation function.
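A PointNet-style sketch of this module in PyTorch is shown below. The patent fixes only the layer counts (4 perception layers, 1 pooling layer, 3 point-wise fully connected layers); the channel widths, the ReLU activations, and the concatenation of the pooled global code back onto each point are our assumptions:

```python
import torch
import torch.nn as nn

class EncoderPredictor(nn.Module):
    """Sketch: 4 per-point perception layers, 1 max-pooling layer,
    and 3 "fully connected" layers applied point-wise as 1x1 convolutions."""
    def __init__(self, num_classes=40):
        super().__init__()
        widths = [6, 64, 128, 256, 1024]          # assumed channel widths
        layers = []
        for cin, cout in zip(widths[:-1], widths[1:]):   # 4 perception layers
            layers += [nn.Conv1d(cin, cout, 1), nn.BatchNorm1d(cout), nn.ReLU()]
        self.perceive = nn.Sequential(*layers)
        self.head = nn.Sequential(                # 3 point-wise FC layers
            nn.Conv1d(2048, 512, 1), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Conv1d(512, 256, 1), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, num_classes, 1),       # final layer emits raw logits
        )

    def forward(self, vP):                        # vP: (B, 6, N) lifted cloud
        feats = self.perceive(vP)                 # (B, 1024, N) per-point codes
        g = feats.max(dim=2, keepdim=True).values # pooling layer: global code
        g = g.expand(-1, -1, feats.shape[2])      # broadcast back to each point
        return self.head(torch.cat([feats, g], dim=1))  # (B, num_classes, N)
```

Taking the argmax over the class dimension then gives the 1024 × 1 per-point label L.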
B. Training a depth network model for vehicle-mounted distorted point cloud segmentation by using a distortion-free point cloud data set to generate a point cloud segmentation model;
B1) pair-wise input point cloud data and corresponding segmentation labels;
B2) utilizing a network module to perform segmentation prediction on the input point cloud;
B3) dynamically adjusting each module parameter according to the segmented prediction result and the truth value label;
B4) after training, generating a trained point cloud segmentation model;
C. segmenting a distorted point cloud generated by the vehicle-mounted radar by using a trained point cloud segmentation model;
C1) simulating a distortion point cloud possibly generated by a vehicle-mounted radar in the unmanned driving process;
C2) and segmenting the distorted point cloud data by using a training model.
Compared with the prior art, the invention has the beneficial effects that:
the method is based on neural network to segment the distorted point cloud. Compared with the original method, the neural network can learn to a normalized matrix with higher expressive ability quickly and accurately in the processing process, and the segmentation precision of the model on the distorted point cloud acquired by the vehicle-mounted radar is greatly improved. The method can obtain the segmentation model only by training the normally collected point cloud, and is applied to the segmentation task of the distorted point cloud in the unmanned environment, so that the model training is facilitated.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention.
FIG. 2 is a schematic diagram of point cloud lifting.
FIG. 3 is a graph showing the results of the segmentation experiment.
Detailed description of the preferred embodiments
For better understanding of the technical solution of the present invention, the following detailed description is made with reference to the accompanying drawings.
As can be seen from the flow of the method of the present invention shown in FIG. 1, the overall process of the system consists of three stages: constructing a depth network model for distorted point cloud segmentation; training a depth network model for distorted point cloud segmentation by using a distortion-free point cloud data set; and performing distortion point cloud segmentation by using the model.
1. Constructing a depth network model for distorted point cloud segmentation;
this stage includes three modules: 1) a point cloud lifting module; 2) a point cloud normalization module; 3) a coding prediction module;
Module one is the point cloud lifting module, which raises the feature dimensionality of the point cloud so that the network can accurately learn the normalization matrix. The input of the module is a point cloud P of the original space with size 1024 × 3, composed of 1024 spatial point coordinates p_1, p_2, …, p_1024, where each point p_i is represented by three-dimensional spatial coordinates. The conversion of the point cloud from the original space to the lifted space is realized by a lifting function; for each point p_i = [x, y, z], the coordinates of the lifted high-dimensional point are:
ν(P) = ν(x, y, z) = [x², xy, xz, y², yz, z²]
ν is the lifting function we designed; it raises the spatial dimension of the point cloud, and each point is represented by higher-order monomials composed of its original coordinates. The network output is a point cloud ν(P) of the lifted space with size 1024 × 6. Fig. 2 shows a two-dimensional schematic of the point cloud lifting.
Module two is the point cloud normalization module. In the training stage, its input is the point cloud ν(P) of the high-dimensional space with size 1024 × 6; the output is the normalized 1024 × 6 point cloud ν′(P) of the lifted space. In the testing stage, the input is the distorted point cloud data collected by the vehicle-mounted radar, and the output is the data obtained after correcting it. A neural network structure is first built to extract geometric features of the point cloud from its coordinates in the lifted space, and the normalization matrix is further learned from these features. The structure of the whole neural network is as follows:
θ_P = Resize(Maxpool{l_t l_{t-1} … l_1(ν(P))})
where ν(P) is the high-dimensional-space point cloud of size 1024 × 6 and θ_P is the learned normalization matrix. {l_i}, i = 1, 2, …, t, denotes a t-layer multi-layer perceptron structure (t = 5), each layer using batch normalization and an activation function. The features extracted from the point cloud by the multi-layer perceptron are input into a Maxpool layer to select key parameters, which are finally arranged in order to obtain the high-dimensional-space 6 × 6 normalization matrix θ_P. Using θ_P, distortion correction is applied to the high-dimensional-space point cloud:
ν′(P) = θ_P * ν(P)
yielding the normalized 1024 × 6 high-dimensional-space point cloud ν′(P).
Module three is the encoding-prediction module, which mainly encodes the normalized 1024 × 6 high-dimensional-space point cloud ν′(P) and predicts the final segmentation result from the encoded values. The input is ν′(P); the output is the final segmentation result L of size 1024 × 1, the predicted category of each point in the point cloud. The module consists mainly of 4 perception layers, 1 pooling layer and 3 fully connected layers, each layer using batch normalization and an activation function.
2. Training a network by using a distortion-free point cloud data set to generate a point cloud segmentation model;
the point cloud data set in the training stage uses complete point clouds without distortion, wherein the complete point clouds include about 9000 point cloud objects P with the size of 1024 × 3, and consist of 1024 three-dimensional point coordinates (which relate to 40 object categories that may exist in actual driving such as vehicles, roads, trees, signs, and the like), and a segmentation label L corresponding to each object with the size of 1024 × 1, and the detailed category of each point (taking an automobile as an example, about 600 points are distributed on a vehicle body, 300 points are distributed on a tire, and 100 points are distributed on a rearview mirror).
Each point cloud object P in the data set and its corresponding segmentation label L are input into the deep network in pairs. For each input point cloud P, the network computes a predicted segmentation result L′ of the same size as L, representing the predicted detailed class of each point. In each training pass, the parameters of the whole network are iteratively learned by a gradient descent algorithm from the difference between L and L′; the network is finally adjusted to a converged state, and training ends with an optimal model.
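The training pass above can be sketched as a standard gradient-descent loop in PyTorch. The tiny stand-in classifier, learning rate, and batch shapes are our assumptions, standing in for the full three-module network:

```python
import torch
import torch.nn as nn

# Stand-in per-point classifier (assumption; the real model is the
# lifting + normalization + encoding-prediction network described above).
model = nn.Sequential(nn.Conv1d(3, 64, 1), nn.ReLU(), nn.Conv1d(64, 40, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)   # gradient descent
loss_fn = nn.CrossEntropyLoss()

P = torch.rand(8, 3, 1024)              # batch of undistorted clouds
L = torch.randint(0, 40, (8, 1024))     # per-point ground-truth labels

for step in range(5):                   # iterate to convergence in practice
    opt.zero_grad()
    L_pred = model(P)                   # (8, 40, 1024) per-point logits L'
    loss = loss_fn(L_pred, L)           # difference between L' and L
    loss.backward()                     # gradients of the loss
    opt.step()                          # parameter update
```

`nn.CrossEntropyLoss` accepts the (batch, classes, points) logits directly against (batch, points) integer labels, so no reshaping is needed for per-point supervision.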
3. And performing distortion point cloud segmentation by using the model.
Finally, the model obtained after training can extract distortion-invariant features from the input point cloud.
To test that the model can correctly segment point clouds generated in driverless operation, we simulated the affine distortions that may occur, including random rotations, translations, shears and scalings along the three coordinate axes, with the rotation angle set to [-15, 15] degrees and the translation, shear and scaling parameters all set to [0, 1]. We also simulated the distortion caused by random missing points and noise points during actual vehicle-mounted radar acquisition, mainly by randomly reducing the number of points input to the model (1024, 512, 256, …) and adding Gaussian noise of different variances (σ = 0.01, 0.03, 0.05, …) to the point coordinates.
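The simulated distortions can be sketched as follows. The composition order of the affine factors and the single shear term are our assumptions; the parameter ranges are those stated above:

```python
import numpy as np

rng = np.random.default_rng(0)

def distort(P, sigma=0.01, keep=1024):
    """Apply a random affine distortion (rotation in [-15, 15] degrees per
    axis, translation/shear/scale in [0, 1]), drop points, add noise."""
    a, b, c = np.radians(rng.uniform(-15, 15, size=3))
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(c), -np.sin(c), 0], [np.sin(c), np.cos(c), 0], [0, 0, 1]])
    S = np.diag(rng.uniform(0, 1, size=3))          # per-axis scaling
    H = np.eye(3)
    H[0, 1] = rng.uniform(0, 1)                     # one shear term (assumption)
    t = rng.uniform(0, 1, size=3)                   # translation
    Q = P @ (Rz @ Ry @ Rx @ S @ H).T + t            # affine distortion
    idx = rng.choice(len(Q), size=min(keep, len(Q)), replace=False)  # dropout
    return Q[idx] + rng.normal(0.0, sigma, size=(len(idx), 3))       # noise

P = rng.random((1024, 3))
Q = distort(P, sigma=0.03, keep=512)
print(Q.shape)    # (512, 3)
```

Feeding such distorted clouds to the trained model reproduces the test setting described here.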
The distorted point cloud data generated by simulating a vehicle-mounted radar running on a driverless vehicle is input into the model, which then outputs the predicted segmentation result, realizing segmentation of the distorted point cloud; Fig. 3 shows the segmentation results. The method achieves accurate segmentation of the simulated distorted point clouds.
It is noted that the disclosed embodiments are intended to aid in further understanding of the invention, but those skilled in the art will appreciate that: various substitutions and modifications are possible without departing from the spirit and scope of the invention and appended claims. Therefore, the invention should not be limited to the embodiments disclosed, but the scope of the invention is defined by the appended claims.
Claims (4)
1. A distorted point cloud segmentation method based on a neural network is characterized in that the method utilizes the neural network to extract normalized information of point clouds in different spaces and segments the distorted point cloud; the method comprises the following steps:
A. constructing a deep network model for vehicle-mounted distorted point cloud segmentation, comprising: a point cloud lifting network module; a lifted-space point cloud normalization network module; an encoding-prediction network module;
A1. the point cloud lifting network module raises the feature dimensionality of the point cloud so that the normalization matrix can be accurately learned;
the input of the point cloud lifting network module is a point cloud P of the original space, comprising a plurality of spatial points, where each point p_i is represented by three-dimensional spatial coordinates; the conversion of the point cloud from the original space to the lifted space is realized by a lifting function; for each point p_i = [x, y, z], the coordinates of the lifted high-dimensional point are expressed as:
ν(P) = ν(x, y, z) = [x², xy, xz, y², yz, z²]
where ν is the lifting function, used to raise the spatial dimension of the point cloud; at the same time, each point is expressed in the form of higher-order monomials composed of its original coordinates;
outputting the point cloud v (P) of the lifted high-dimensional space by the point cloud lifting network module;
A2. the lifted-space point cloud normalization network module;
in the training stage, the input of the module is a point cloud v (P) of the elevated high-dimensional space; the output is the point cloud v' (P) of the normalized lifting space; in the testing stage, distorted point cloud data acquired by a corresponding vehicle-mounted radar is input, and data obtained by correcting the distorted point cloud data is output;
firstly, geometric features of the point cloud are extracted from its coordinates in the lifted space, and the normalization matrix is further learned from these features; the structure of the lifted-space point cloud normalization network module is as follows:
θ_P = Resize(Maxpool{l_t l_{t-1} … l_1(ν(P))})
where ν(P) is the high-dimensional-space point cloud and θ_P is the learned normalization matrix; l_t l_{t-1} … l_1, also written {l_i}, i = 1, 2, …, t, denotes a multi-layer perceptron structure with t layers, each layer using batch normalization and an activation function;
the features extracted from the point cloud by the multi-layer perceptron are input into a pooling layer Maxpool to pick out key parameters, which are finally arranged in order to obtain the 6 × 6 normalization matrix θ_P of the high-dimensional space; using the normalization matrix θ_P, distortion correction is applied to the high-dimensional-space point cloud, expressed as:
ν′(P) = θ_P * ν(P)
obtaining the normalized high-dimensional-space point cloud ν′(P);
A3. the coding prediction network module is used for coding the normalized high-dimensional space point cloud v' (P) and further predicting a final segmentation result through a coding value;
the input of the encoding-prediction network module is ν′(P), and the output is the final segmentation result L, the predicted class of each point in the point cloud; the encoding-prediction network module consists of 4 perception layers, 1 pooling layer and 3 fully connected layers; each layer uses batch normalization and activation functions;
B. training a depth network model for vehicle-mounted distorted point cloud segmentation by using a distortion-free point cloud data set to generate a point cloud segmentation model; the method comprises the following steps:
B1) pair-wise input point cloud data and corresponding segmentation labels;
B2) utilizing a network module to perform segmentation prediction on the input point cloud;
B3) dynamically adjusting each module parameter according to the segmented prediction result and the truth value label;
B4) after training, generating a trained point cloud segmentation model;
C. segmenting a distorted point cloud generated by the vehicle-mounted radar by using a trained point cloud segmentation model;
C1) simulating a distortion point cloud possibly generated by a vehicle-mounted radar in the unmanned driving process;
C2) segmenting the distorted point cloud data by using a trained point cloud segmentation model;
through the steps, the distortion point cloud segmentation based on the neural network is realized.
2. The neural-network-based distorted point cloud segmentation method as claimed in claim 1, wherein the input of the point cloud lifting network module is specifically a point cloud P of the original space with size 1024 × 3, composed of 1024 spatial point coordinates p_1, p_2, …, p_i, …, p_1024; and the output of the network module is the point cloud ν(P) of the lifted space with size 1024 × 6.
3. The neural-network-based distorted point cloud segmentation method as claimed in claim 1, wherein the point cloud data set of the training stage uses complete, undistorted point clouds, each a 1024 × 3 point cloud object P composed of 1024 three-dimensional point coordinates; the point cloud objects P include vehicles, roads, trees and signs, and each object has a corresponding 1024 × 1 segmentation label L giving the specific detailed class of each point.
4. The distorted point cloud segmentation method based on the neural network as claimed in claim 1, wherein distorted point cloud data generated when a simulated vehicle-mounted radar runs on an unmanned vehicle is used as point cloud data to be segmented and is input into a trained point cloud segmentation network model; the distortion point cloud data is affine distortion in unmanned driving, and comprises random rotation, translation, shearing and scaling along three coordinate axes, and distortion caused by random point missing and noise points in the vehicle-mounted radar acquisition process.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011344486.2A CN112561922B (en) | 2020-11-26 | 2020-11-26 | Distortion three-dimensional point cloud segmentation method based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112561922A true CN112561922A (en) | 2021-03-26 |
CN112561922B CN112561922B (en) | 2024-03-01 |
Family
ID=75045030
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011344486.2A Active CN112561922B (en) | 2020-11-26 | 2020-11-26 | Distortion three-dimensional point cloud segmentation method based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112561922B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10229330B2 (en) * | 2016-01-27 | 2019-03-12 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for detecting vehicle contour based on point cloud data |
CN109685848A (en) * | 2018-12-14 | 2019-04-26 | 上海交通大学 | A kind of neural network coordinate transformation method of three-dimensional point cloud and three-dimension sensor |
CN111311614A (en) * | 2020-03-27 | 2020-06-19 | 西安电子科技大学 | Three-dimensional point cloud semantic segmentation method based on segmentation network and countermeasure network |
- 2020-11-26: CN application CN202011344486.2A filed; granted as CN112561922B (active)
Non-Patent Citations (1)
Title |
---|
YANG Jinfa; YAN Lei; ZHAO Hongying; CHEN Rui; ZHANG Ruihua; SHI Boxin: "Polarization-based 3D reconstruction of low-texture objects fused with coarse depth information", Journal of Infrared and Millimeter Waves, no. 06 *
Also Published As
Publication number | Publication date |
---|---|
CN112561922B (en) | 2024-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10733755B2 (en) | Learning geometric differentials for matching 3D models to objects in a 2D image | |
EP3289529B1 (en) | Reducing image resolution in deep convolutional networks | |
CN109726627B (en) | Neural network model training and universal ground wire detection method | |
CN111191583B (en) | Space target recognition system and method based on convolutional neural network | |
JP6742554B1 (en) | Information processing apparatus and electronic apparatus including the same | |
US11182644B2 (en) | Method and apparatus for pose planar constraining on the basis of planar feature extraction | |
US11475276B1 (en) | Generating more realistic synthetic data with adversarial nets | |
Rani et al. | Object detection and recognition using contour based edge detection and fast R-CNN | |
CN113421269A (en) | Real-time semantic segmentation method based on double-branch deep convolutional neural network | |
CN108428248B (en) | Vehicle window positioning method, system, equipment and storage medium | |
Cai et al. | Night-time vehicle detection algorithm based on visual saliency and deep learning | |
CN117157678A (en) | Method and system for graph-based panorama segmentation | |
CN111860427B (en) | Driving distraction identification method based on lightweight class eight-dimensional convolutional neural network | |
US20220156528A1 (en) | Distance-based boundary aware semantic segmentation | |
CN114830131A (en) | Equal-surface polyhedron spherical gauge convolution neural network | |
WO2020102772A1 (en) | Coordinate estimation on n-spheres with spherical regression | |
CN114037640A (en) | Image generation method and device | |
CN114241459B (en) | Driver identity verification method and device, computer equipment and storage medium | |
CN110991377A (en) | Monocular visual neural network-based front target identification method for automobile safety auxiliary system | |
CN110557636A (en) | Lossy data compressor for vehicle control system | |
CN112561922B (en) | Distortion three-dimensional point cloud segmentation method based on neural network | |
Wang | Research on the Optimal Machine Learning Classifier for Traffic Signs | |
CN115271037A (en) | Point cloud-oriented high-efficiency binarization neural network quantization method and device | |
Jaiswal et al. | Empirical analysis of traffic sign recognition using ResNet architectures | |
CN111768493A (en) | Point cloud processing method based on distribution parameter coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||