CN111079652B - 3D target detection method based on point cloud data simple coding - Google Patents

3D target detection method based on point cloud data simple coding

Info

Publication number
CN111079652B
Authority
CN
China
Prior art keywords
grid
points
feature
feature map
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911306018.3A
Other languages
Chinese (zh)
Other versions
CN111079652A (en)
Inventor
李炜
宁亚光
杨明
董铮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201911306018.3A
Publication of CN111079652A
Application granted
Publication of CN111079652B
Legal status: Active

Classifications

    • G06V20/64 Three-dimensional objects
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V2201/07 Target detection

Abstract

The invention discloses a 3D target detection method based on simple point cloud data coding. The method rasterizes the point cloud data; encodes the point set in each single grid by calculating its geometric information and density information; performs efficient feature dimension reduction by feature splicing and an M × N convolution, thereby constructing a two-dimensional feature map, based on the point cloud data, that is applicable to a convolutional neural network; and finally performs feature extraction and 3D target detection with a multi-scale convolutional feature extraction network. The method efficiently reduces the 3D feature map to a 2D feature map, so that it can be applied to different 2D convolutional neural networks for feature extraction and 3D target detection.

Description

3D target detection method based on point cloud data simple coding
Technical Field
The invention relates to the technical field of laser radar data processing and target identification, and in particular to a method for rasterizing laser radar point cloud data into a two-dimensional feature map.
Background
For a vehicle-mounted intelligent computing platform supporting an automatic driving function, the laser radar is an important device for sensing the environment around the vehicle, and 3D target detection based on laser radar point cloud data is an important way of realizing 3D perception. 3D target detection means detecting the class and the specific 3D position of objects in the surrounding environment. The laser radar is an important means for vehicles and robots to perceive their surroundings. It comprises a laser transmitting system, a laser receiving system and a rotating assembly; the transmitting system generally consists of a single-beam multi-line narrow-band laser that emits laser pulses at a certain frequency. If a pulse hits an object surface within its attenuation distance, it is reflected back and finally received by the receiving system. The rotating assembly rotates continuously so that the single-beam multi-line laser pulses acquire 360-degree information about the surrounding environment; the transmitter can emit millions of pulses per second, and the receiver receives the laser points reflected from these pulses in the corresponding time, so that a large number of laser points form a point cloud that outlines the surrounding environment. The feature of any single point is denoted p_i = (x_i, y_i, z_i, r_i), where x_i, y_i, z_i are the spatial coordinate values on the X, Y, Z axes respectively and r_i is the reflection intensity. Because the spatial description is carried by this large set of coordinates, the point cloud data can be fed to different perception methods, realizing 3D perception of the surrounding environment.
According to the working principle of the laser radar, a laser pulse travels along a straight line at the known speed of light, so the straight-line distance between the object surface and the emitting point follows from the time difference between emission and reception. Combined with the emission angle of the pulse, and taking the center of the laser radar as the origin of the coordinate system, the accurate relative X, Y, Z coordinates of each laser reflection point can be obtained, and thus the accurate spatial information of the surrounding environment can be restored.
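For reference, the relations involved here are the standard time-of-flight and angle-to-Cartesian conversions; the following is one common convention, stated for illustration rather than quoted from the patent: with round-trip time Δt and speed of light c, the range is d = c · Δt / 2, and with azimuth α and elevation ω of the emitted pulse, the reflection point in the radar coordinate system is x = d · cos ω · sin α, y = d · cos ω · cos α, z = d · sin ω (axis conventions vary between sensors).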
However, owing to the sensing characteristics of the laser radar, the point cloud data it generates is sparse, unordered and noisy. The sparseness has two aspects: on the one hand, because the number of laser pulses generated per unit time is limited, peripheral obstacles are sampled discretely, and this discrete sampling makes the local point cloud formed on an object surface sparse; on the other hand, obstacles themselves are sparse relative to the whole space. Unlike natural images produced by sensors such as cameras, point cloud data describes spatial features without any dependence on the order of the points. In addition, when the laser radar receives the laser pulses, a few noisy reflection points occur, so the point cloud contains some isolated, erroneous noise points. It is therefore important to design a suitable perception algorithm for 3D target detection. Most well-performing target detection algorithms, such as convolutional neural networks, are designed for natural images, and the characteristics of laser radar point cloud data make them hard to apply directly. To use a target detection algorithm based on a convolutional neural network, the point cloud data can be encoded by rasterization, usually converting it into a feature map resembling a natural image. However, the encoded point cloud data generally forms a 3D feature map, which, unlike a natural image, would require a 3D convolutional neural network with a huge computational cost. The feature map therefore needs dimension reduction after encoding: reducing the 3D feature map to a 2D feature map lets it be applied far better to a convolutional neural network, realizing the 3D target detection method based on laser radar point cloud data.
Disclosure of Invention
In view of the current state of research and the practical demand, the invention provides a 3D target detection method based on simple point cloud data coding. The method rasterizes the point cloud data; encodes the point set in each single grid by calculating its geometric information and density information; performs efficient feature dimension reduction by feature splicing and a 1 × 1 convolution, thereby constructing a two-dimensional feature map, based on the point cloud data, that is applicable to a convolutional neural network; and finally performs feature extraction and 3D target detection with a multi-scale convolutional feature extraction network. The method efficiently reduces the 3D feature map to a 2D feature map, so that it can be applied to different 2D convolutional neural networks for feature extraction and 3D target detection.
The specific technical scheme of the invention is as follows:
A 3D target detection method based on simple coding of point cloud data comprises the following steps:
S1: receive the point cloud data reflected back to the laser radar and store it in the memory of a computer, wherein the point cloud data consists of a large number of points and, in the laser radar coordinate system, each point is characterized by spatial coordinate values on the X, Y, Z axes and a reflection intensity r;
S2: select a target area from the point cloud data as required, and discard points that are not in the target area;
S3: rasterize the selected target area into a rasterized space with a three-dimensional grid;
S4: for each grid, randomly select m points from the point set in the grid, calculate their geometric information and density information to form the feature vector of the grid, encode all grids in this way, and thereby encode the rasterized point cloud data into a 3D feature map;
S5: splice all grid-coded feature vectors along the height-direction division made during rasterization, so that the 3D feature map is reduced to a 2D feature map;
S6: perform an M × N convolution on the 2D feature map, where M and N each take values from 1 to 3, so that the height information is encoded into the channel information, generating the 2D feature map used by the convolutional neural network;
S7: fully extract the features of the 2D feature map generated in step S6 with a feature extraction network based on a convolutional neural network;
S8: realize 3D target detection based on the feature-extracted 2D feature map obtained in step S7.
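Read end to end, the data shapes evolve as follows (with C = 13 as in the embodiment below, and K the number of M × N kernels): an (n, 4) point list is cropped in S2, rasterized into an L × W × H grid in S3, encoded into an L × W × H map with C channels in S4, stitched into an L × W map with H × C channels in S5, and compressed by the convolution in S6 into an L × W map with K channels that a standard 2D detection network consumes in S7 and S8.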
Preferably, the step S3 specifically includes:
S31: select a proper grid size for the selected target area and rasterize the corresponding three-dimensional space, forming an L × W × H rasterized target area, where L, W, H are the numbers of grids of the target area in the X, Y, Z axis directions respectively;
S32: assign the points in the selected target area to the corresponding grids according to their spatial coordinate values.
Preferably, the step S4 specifically includes:
S41: for the three-dimensional grids formed after rasterization, let the point set assigned to each grid be T; if T is not empty, execute step S42;
S42: randomly select m points from the point set T in each grid; when the total number of points in the grid is less than m, select all the points in the grid; these form the point set T' to be encoded for the single grid;
S43: extract the coordinate values of the points in the point set T' for further encoding;
S44: for the point set T' to be encoded in each grid, calculate the geometric information of T', namely the maximum and minimum coordinate values of the points in the set on each coordinate axis and the maximum and minimum reflection intensity;
S45: calculate the density information of the point set T' to be encoded in each grid, namely the mean coordinate value of the points in the set on each coordinate axis, the mean reflection intensity, and the total number of points in T or T';
S46: summarize the geometric information and density information of each grid to form the feature vector of the corresponding encoded grid;
S47: encode each grid containing no points in the rasterized three-dimensional grid as a feature vector of equal length with all values 0.
Preferably, the step S5 specifically includes:
S51: grid coding produces an L × W × H 3D feature map with channel information, the channel information being the encoded feature vector of each grid, whose length is denoted C;
S52: divide the 3D feature map along the Z-axis direction and splice the channel information, reducing the 3D feature map to an L × W 2D feature map whose channel information has length H × C.
Preferably, the step S6 specifically includes:
For the 2D feature map spliced in step S52, select K convolution kernels of size 1 × 1 and, with a stride of 1, encode the channel information of length H × C into channel information of length K, generating the 2D feature map for the convolutional neural network.
The invention has the beneficial effects that:
(1) The point cloud data in each grid is encoded simply with geometric information and density information, which on the one hand preserves good coding performance as far as possible, and on the other hand greatly increases the coding speed, improving the overall coding efficiency.
(2) The three-dimensional feature map is reduced to a two-dimensional feature map by feature splicing along the height direction, which greatly accelerates the feature dimension reduction.
(3) The height information is encoded into the channel information by a 1 × 1 convolution, which on the one hand retains good feature expression capability and on the other hand, by controlling the number of convolution kernels, greatly reduces the number of channels of the final feature map.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only embodiments of the invention, and that for a person skilled in the art, other drawings can be obtained from the provided drawings without inventive effort.
FIG. 1 is a schematic flow chart of a 3D target detection method based on simple point cloud data encoding according to the present invention;
FIG. 2 is a schematic diagram of rasterizing point cloud data in accordance with the present invention;
FIG. 3 is a flow chart of the present invention for encoding a single set of points within a grid;
FIG. 4 is a schematic diagram of the present invention encoding a single set of points in a grid;
FIG. 5 is a diagram illustrating the dimension reduction of the rasterized feature stitching according to the present invention;
FIG. 6 is a schematic diagram of a feature extraction network of the present invention;
FIG. 7 is a diagram of the configuration of the SECOND target detection network of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only illustrative and are not intended to limit the present application.
An embodiment of the present invention will be described in detail below with reference to fig. 1.
Fig. 1 is a schematic flow chart of a 3D target detection method based on simple point cloud data encoding according to the present invention. In the present embodiment, the method includes the steps of:
s1 the vehicle-mounted computing platform receives the point cloud data input by the laser radar, the point cloud is a collection of isolated points, and P is usedcloudTo represent the set of all points, P, of a frame recordcloud={p1,p2,p3,......pi,......,pnIn which p isiIs PcloudAt any point in the network, each point is characterized by a specific characteristic pi=[xi,yi,zi,ri]Respectively, the coordinate value x of the point in the laser radar coordinate systemi,yi,ziAnd reflection intensity ri
S2: for the input point cloud data P_cloud, select a target area: x ∈ [X_1, X_2], y ∈ [Y_1, Y_2], z ∈ [Z_1, Z_2], where [X_1, X_2] is the interval on the X coordinate axis starting at X_1 and ending at X_2, [Y_1, Y_2] the interval on the Y coordinate axis starting at Y_1 and ending at Y_2, and [Z_1, Z_2] the interval on the Z coordinate axis starting at Z_1 and ending at Z_2. When a point p_i = [x_i, y_i, z_i, r_i] of P_cloud has x_i, y_i and z_i inside these intervals, it is kept in the target-area point cloud set P'_cloud; otherwise it is discarded.
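A minimal numpy sketch of this target-area selection (the array layout and the function name are illustrative assumptions, not from the patent):

    import numpy as np

    def crop_to_roi(p_cloud, x_range, y_range, z_range):
        """Keep only points whose x, y, z fall inside the target area (S2).
        p_cloud: (n, 4) array with columns [x, y, z, r]."""
        x, y, z = p_cloud[:, 0], p_cloud[:, 1], p_cloud[:, 2]
        mask = ((x >= x_range[0]) & (x <= x_range[1]) &
                (y >= y_range[0]) & (y <= y_range[1]) &
                (z >= z_range[0]) & (z <= z_range[1]))
        return p_cloud[mask]

For example, crop_to_roi(P_cloud, (0, 70.4), (-40, 40), (-3, 1)) would realize a typical front-view target area; the concrete bounds are, again, assumptions.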
S3 rasterizes the selected target region.
S31: select a grid size (l_a, w_a, h_a) for the target region and rasterize it, where l_a is the grid size on the X coordinate axis, w_a the grid size on the Y coordinate axis, and h_a the grid size on the Z coordinate axis, forming an L × W × H three-dimensional grid, where L is the number of grids on the X coordinate axis, W the number on the Y coordinate axis, and H the number on the Z coordinate axis. S_{l,w,h} denotes any grid in the rasterized space, the subscripts l, w, h being the indices on the different coordinate axes.
S32: the target-area point cloud set P'_cloud is distributed over the rasterized grids according to the coordinate relation, the point set in any grid S_i being T_i.
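Continuing the sketch, the coordinate relation of S31-S32 can be realized by floor division; the grid sizes and names below are again assumed for illustration:

    import numpy as np

    def assign_to_grid(p_roi, origin, cell=(0.2, 0.2, 0.4)):
        """Map each point to its grid indices (l, w, h) (S31-S32).
        origin: the minimum corner (X1, Y1, Z1) of the target area;
        cell: assumed grid size (l_a, w_a, h_a)."""
        idx = np.floor((p_roi[:, :3] - np.asarray(origin)) /
                       np.asarray(cell)).astype(np.int64)
        return idx  # row j holds the (l, w, h) index of point j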
S4: for each grid containing a point set, extract the set's geometric information and density information and encode them into a feature vector of length C, as shown in FIG. 3. If there is no point in the grid, it is encoded as a feature vector of the same length with all values 0.
S41: for any grid S_i, the point set in the grid is T_i. If T_i is not empty, then T_i = {p_i1, p_i2, ..., p_ij, ..., p_ig}, where p_ij = [x_ij, y_ij, z_ij, r_ij] is any point in T_i, the subscript j is the index of the point, and g is the total number of points in the set; x_ij, y_ij, z_ij and r_ij are the coordinate values and the reflection intensity of the point.
S42: for T_i, set a fixed value m. When g > m, randomly select m points; otherwise select all g points. The selected points form a new point set T_i' for encoding.
S43: the point set to be encoded is T_i' = {p'_i1, p'_i2, ..., p'_ij, ..., p'_iv}, where p'_ij = (x'_ij, y'_ij, z'_ij, r'_ij) is any point in T_i' and v is the total number of points in T_i' (when g > m, v = m; otherwise v = g). T_i' can be written as the v × 4 matrix whose j-th row is (x'_ij, y'_ij, z'_ij, r'_ij).
S44: over the points of T_i', find the maximum and minimum coordinate values on the X, Y, Z axes and the maximum and minimum reflection intensity, as the geometric information of the point set in the grid:
x_max = max_j x'_ij, x_min = min_j x'_ij; y_max = max_j y'_ij, y_min = min_j y'_ij; z_max = max_j z'_ij, z_min = min_j z'_ij; r_max = max_j r'_ij, r_min = min_j r'_ij.
S45: over the points of T_i', compute the mean coordinate values on the X, Y, Z axes and the mean reflection intensity, and combine them with the number of points v as the density information of the point set in the grid:
x_mean = (1/v) Σ_{j=1..v} x'_ij, y_mean = (1/v) Σ_{j=1..v} y'_ij, z_mean = (1/v) Σ_{j=1..v} z'_ij, r_mean = (1/v) Σ_{j=1..v} r'_ij.
S46: the geometric information and density information obtained for each grid are spliced into a feature vector
f = [x_max, y_max, z_max, r_max, x_min, y_min, z_min, r_min, x_mean, y_mean, z_mean, r_mean, v]
as the feature expression of the grid; its length is C = 13.
S47: a grid containing no points is encoded as a feature vector of equal length with all values 0:
f = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0].
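A compact sketch of S41-S47 for a single grid; the ordering of the 13 entries mirrors the vector f written above (itself a reconstruction), while m = 35 and the function name are illustrative assumptions:

    import numpy as np

    def encode_grid(t_i, m=35, rng=None):
        """Encode one grid's point set T_i into the 13-dim feature vector f
        (S41-S47). t_i: (g, 4) array with columns [x, y, z, r]."""
        if len(t_i) == 0:
            return np.zeros(13)                       # S47: empty grid -> all zeros
        rng = rng or np.random.default_rng()
        if len(t_i) > m:                              # S42: keep at most m points
            t_i = t_i[rng.choice(len(t_i), size=m, replace=False)]
        v = len(t_i)                                  # S43: v = min(g, m)
        return np.concatenate([t_i.max(axis=0),       # S44: x/y/z/r maxima
                               t_i.min(axis=0),       # S44: x/y/z/r minima
                               t_i.mean(axis=0),      # S45: x/y/z/r means
                               [v]])                  # S45: point count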
S5: the above encoding turns the three-dimensional grid L × W × H formed by rasterization into a 3D feature map of size L × W × H whose channel length is the feature-vector length C. This 3D feature map is then reduced in dimension: the feature map of size L × W × H with channel length C becomes a 2D feature map of size L × W with channel length H × C, as follows:
The three-dimensional grid formed after rasterization has size L × W × H, and after encoding each grid carries a feature vector of length C; along the Z-axis direction the grid is divided into H slices. Splicing the channel information along the Z axis forms channel information of length H × C, and the feature map size becomes L × W. The splicing pattern is shown in FIG. 5.
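In array terms the stitching of S5 is a single reshape, assuming the 3D map is stored channel-last as (L, W, H, C); the sizes below are example values only:

    import numpy as np

    feat3d = np.zeros((200, 176, 10, 13))   # assumed example (L, W, H, C)
    L, W, H, C = feat3d.shape
    feat2d = feat3d.reshape(L, W, H * C)    # splice the H height slices per cell
    print(feat2d.shape)                     # -> (200, 176, 130), i.e. L x W x (H*C)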
S6: apply K convolution kernels of size 1 × 1 with stride 1, encoding the channel information of length H × C into channel information of length K while keeping the feature map size unchanged at L × W. This forms a 2D feature map of size L × W with K channels that can be used by a 2D convolutional neural network.
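A sketch of this 1 × 1 channel compression using a standard 2D convolution layer (PyTorch here; the channel counts are illustrative assumptions):

    import torch
    import torch.nn as nn

    HC, K = 130, 64                          # assumed H*C and kernel count K
    conv1x1 = nn.Conv2d(HC, K, kernel_size=1, stride=1)
    x = torch.zeros(1, HC, 200, 176)         # (batch, channels, L, W) layout
    y = conv1x1(x)                           # -> torch.Size([1, 64, 200, 176])
    print(y.shape)                           # feature map size L x W unchanged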
S7: perform feature extraction with a multi-scale convolutional neural network containing 13 convolutional layers; the network structure is shown in FIG. 6.
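FIG. 6 is not reproduced in this text, so purely as an illustration of what a 13-layer multi-scale extractor of this kind can look like, here is a sketch in the spirit of SECOND-style backbones; the stage widths, strides and the 3 + 5 + 5 layer split are assumptions, not the patented structure:

    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out, n, stride):
        """n 3x3 convolution layers; the first one downsamples by `stride`."""
        layers = [nn.Conv2d(c_in, c_out, 3, stride, 1), nn.ReLU()]
        for _ in range(n - 1):
            layers += [nn.Conv2d(c_out, c_out, 3, 1, 1), nn.ReLU()]
        return nn.Sequential(*layers)

    class MultiScaleBackbone(nn.Module):
        """Hypothetical 13-conv-layer (3 + 5 + 5) multi-scale extractor."""
        def __init__(self, c_in=64):
            super().__init__()
            self.stage1 = conv_block(c_in, 64, 3, 1)   # full resolution
            self.stage2 = conv_block(64, 128, 5, 2)    # 1/2 resolution
            self.stage3 = conv_block(128, 256, 5, 2)   # 1/4 resolution
            self.up2 = nn.ConvTranspose2d(128, 128, 2, stride=2)
            self.up3 = nn.ConvTranspose2d(256, 128, 4, stride=4)

        def forward(self, x):
            s1 = self.stage1(x)                        # (B, 64, L, W)
            s2 = self.stage2(s1)                       # (B, 128, L/2, W/2)
            s3 = self.stage3(s2)                       # (B, 256, L/4, W/4)
            # upsample coarser scales back to L x W and fuse by concatenation
            return torch.cat([s1, self.up2(s2), self.up3(s3)], dim=1)

Fusing the scales by upsampling the coarser stages back to full resolution and concatenating is the usual way such multi-scale convolutional features are combined before a detection head.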
S8: perform the regression and classification of the detection boxes and the direction classification with a detection-layer design based on the SECOND target detection network, completing the final 3D target detection.
Although exemplary embodiments of the present invention have been described for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions in form and detail can be made without departing from the scope and spirit of the invention as disclosed in the accompanying claims, and all such variants are intended to fall within the scope of the claims; the steps of the claimed product and methods may be combined in any combination. The description of the disclosed embodiments is therefore intended to describe, not to limit, the present invention, whose scope is defined not by the above embodiments but by the claims and their equivalents.

Claims (5)

1. A 3D target detection method based on point cloud data simple coding, characterized by comprising the following steps:
S1: receiving the point cloud data reflected back to the laser radar and storing it in the memory of a computer, wherein the point cloud data consists of a large number of points and, in the laser radar coordinate system, each point is characterized by spatial coordinate values on the X, Y, Z axes and a reflection intensity r;
S2: selecting a target area from the point cloud data as required, and discarding points that are not in the target area;
S3: rasterizing the selected target area into a rasterized space with a three-dimensional grid;
S4: for each grid, randomly selecting m points from the point set in the grid, calculating their geometric information and density information to form the feature vector of the grid, encoding all grids in this way, and thereby encoding the rasterized point cloud data into a 3D feature map;
S5: splicing all grid-coded feature vectors along the height-direction division made during rasterization, so that the 3D feature map is reduced to a 2D feature map;
S6: performing an M × N convolution on the 2D feature map, where M and N each take values from 1 to 3, so that the height information is encoded into the channel information, generating the 2D feature map used by the convolutional neural network;
S7: fully extracting the features of the 2D feature map generated in step S6 with a feature extraction network based on a convolutional neural network;
S8: realizing 3D target detection based on the feature-extracted 2D feature map obtained in step S7.
2. The method according to claim 1, wherein the step S3 specifically includes:
S31: selecting a proper grid size for the selected target area and rasterizing the corresponding three-dimensional space, forming an L × W × H rasterized target area, where L, W, H are the numbers of grids of the target area in the X, Y, Z axis directions respectively;
S32: assigning the points in the selected target area to the corresponding grids according to their spatial coordinate values.
3. The method according to claim 2, wherein the step S4 specifically includes:
S41: for the three-dimensional grids formed after rasterization, letting the point set assigned to each grid be T, and if T is not empty, executing step S42;
S42: randomly selecting m points from the point set T in each grid, and when the total number of points in the grid is less than m, selecting all the points in the grid, to form the point set T' to be coded for the single grid;
S43: extracting the coordinate values of the points in the point set T' for further coding;
S44: for the point set T' to be coded in each grid, calculating the geometric information of T', namely the maximum and minimum coordinate values of the points in the set on each coordinate axis and the maximum and minimum reflection intensity;
S45: calculating the density information of the point set T' to be coded in each grid, namely the mean coordinate value of the points in the set on each coordinate axis, the mean reflection intensity, and the total number of points in T or T';
S46: summarizing the geometric information and density information of each grid, and adopting all or part of the summarized information to form the feature vector of the corresponding coded grid;
S47: encoding each grid containing no points in the rasterized three-dimensional grid as a feature vector of equal length with all values 0.
4. The method according to claim 3, wherein the S5 specifically comprises:
S51: forming, after grid coding, an L × W × H 3D feature map with channel information, the channel information being the encoded feature vector of each grid, whose length is denoted C;
S52: dividing the 3D feature map along the Z-axis direction and splicing the channel information to reduce the dimension of the 3D feature map, forming an L × W 2D feature map whose channel information has length H × C.
5. The method according to claim 4, wherein the step S6 specifically includes:
selecting, for the 2D feature map spliced in step S52, K convolution kernels of size 1 × 1 and, with a stride of 1, encoding the channel information of length H × C into channel information of length K, to generate the 2D feature map for the convolutional neural network.
CN201911306018.3A 2019-12-18 2019-12-18 3D target detection method based on point cloud data simple coding Active CN111079652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911306018.3A CN111079652B (en) 2019-12-18 2019-12-18 3D target detection method based on point cloud data simple coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911306018.3A CN111079652B (en) 2019-12-18 2019-12-18 3D target detection method based on point cloud data simple coding

Publications (2)

Publication Number Publication Date
CN111079652A (en) 2020-04-28
CN111079652B (en) 2022-05-13

Family

ID=70315304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911306018.3A Active CN111079652B (en) 2019-12-18 2019-12-18 3D target detection method based on point cloud data simple coding

Country Status (1)

Country Link
CN (1) CN111079652B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113971221A (en) * 2020-07-22 2022-01-25 上海商汤临港智能科技有限公司 Point cloud data processing method and device, electronic equipment and storage medium
CN113873220A (en) * 2020-12-03 2021-12-31 上海飞机制造有限公司 Deviation analysis method, device, system, equipment and storage medium
CN113986504A (en) * 2021-10-29 2022-01-28 上海商汤临港智能科技有限公司 Point cloud data processing method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109100741A (en) * 2018-06-11 2018-12-28 长安大学 A kind of object detection method based on 3D laser radar and image data
CN109753885A (en) * 2018-12-14 2019-05-14 中国科学院深圳先进技术研究院 A kind of object detection method, device and pedestrian detection method, system
CN109828592A (en) * 2019-04-22 2019-05-31 深兰人工智能芯片研究院(江苏)有限公司 A kind of method and apparatus of detection of obstacles
CN109932730A (en) * 2019-02-22 2019-06-25 东华大学 Laser radar object detection method based on multiple dimensioned monopole three dimensional detection network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109100741A (en) * 2018-06-11 2018-12-28 长安大学 A kind of object detection method based on 3D laser radar and image data
CN109753885A (en) * 2018-12-14 2019-05-14 中国科学院深圳先进技术研究院 A kind of object detection method, device and pedestrian detection method, system
CN109932730A (en) * 2019-02-22 2019-06-25 东华大学 Laser radar object detection method based on multiple dimensioned monopole three dimensional detection network
CN109828592A (en) * 2019-04-22 2019-05-31 深兰人工智能芯片研究院(江苏)有限公司 A kind of method and apparatus of detection of obstacles

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automated Detection of Road Manhole and Sewer Well Covers From Mobile LiDAR Point Clouds; Yongtao Yu et al.; IEEE Geoscience and Remote Sensing Letters; 2014-09-30; Vol. 11, No. 9; full text *
Road traversable area detection based on laser radar (基于激光雷达的道路可通行区域检测); 肖已达 et al.; 《机电一体化》 (Mechatronics); 2013-02-28; full text *

Also Published As

Publication number Publication date
CN111079652A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
Cong et al. Underwater robot sensing technology: A survey
CN111079652B (en) 3D target detection method based on point cloud data simple coding
Zamanakos et al. A comprehensive survey of LIDAR-based 3D object detection methods with deep learning for autonomous driving
Homm et al. Efficient occupancy grid computation on the GPU with lidar and radar for road boundary detection
US8467628B2 (en) Method and system for fast dense stereoscopic ranging
WO2018125928A1 (en) Multi-channel sensor simulation for autonomous control systems
TW202240199A (en) High end imaging radar
KR102372703B1 (en) Learning method and learning device for integrating object detection information acquired through v2v communication from other autonomous vehicle with object detection information generated by present autonomous vehicle, and testing method and testing device using the same
US11810311B2 (en) Two-stage depth estimation machine learning algorithm and spherical warping layer for equi-rectangular projection stereo matching
CN113284163B (en) Three-dimensional target self-adaptive detection method and system based on vehicle-mounted laser radar point cloud
CN113267761B (en) Laser radar target detection and identification method, system and computer readable storage medium
CN112446227A (en) Object detection method, device and equipment
CN113688738B (en) Target identification system and method based on laser radar point cloud data
Ouyang et al. A cgans-based scene reconstruction model using lidar point cloud
CN111209840A (en) 3D target detection method based on multi-sensor data fusion
Xie et al. Inferring depth contours from sidescan sonar using convolutional neural nets
Wang et al. Elevation angle estimation in 2d acoustic images using pseudo front view
Walz et al. Uncertainty depth estimation with gated images for 3D reconstruction
Xie et al. Neural network normal estimation and bathymetry reconstruction from sidescan sonar
CN114155414A (en) Novel unmanned-driving-oriented feature layer data fusion method and system and target detection method
US20220138978A1 (en) Two-stage depth estimation machine learning algorithm and spherical warping layer for equi-rectangular projection stereo matching
CN117237919A (en) Intelligent driving sensing method for truck through multi-sensor fusion detection under cross-mode supervised learning
CN115620263B (en) Intelligent vehicle obstacle detection method based on image fusion of camera and laser radar
CN114973181B (en) Multi-view BEV (beam steering angle) visual angle environment sensing method, device, equipment and storage medium
CN116168384A (en) Point cloud target detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant