CN114140470A - Ground object semantic segmentation method based on helicopter airborne laser radar - Google Patents


Info

Publication number
CN114140470A
Authority
CN
China
Prior art keywords
point cloud
dimensional
helicopter
laser radar
semantic segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111484048.0A
Other languages
Chinese (zh)
Inventor
范锐军 (Fan Ruijun)
陈潇 (Chen Xiao)
付康林 (Fu Kanglin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qunzhou Technology Shanghai Co ltd
Original Assignee
Qunzhou Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qunzhou Technology Shanghai Co ltd filed Critical Qunzhou Technology Shanghai Co ltd
Priority to CN202111484048.0A
Publication of CN114140470A
Legal status: Pending


Classifications

    • G06T 7/10 — Image analysis: segmentation; edge detection
    • G06F 18/23 — Pattern recognition: clustering techniques
    • G06F 18/241 — Pattern recognition: classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045 — Neural networks: combinations of networks
    • G06N 3/08 — Neural networks: learning methods
    • G06T 2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/10032 — Image acquisition modality: satellite or aerial image; remote sensing
    • G06T 2207/10044 — Image acquisition modality: radar image


Abstract

The invention relates to a ground object semantic segmentation method based on a helicopter airborne laser radar, in the technical field of real-time processing of helicopter airborne laser radar data, comprising the following steps: acquiring a real-time point cloud through the laser radar and collecting point cloud data; passing the collected point cloud data through a down-sampling module that reduces the data volume while preserving the point cloud features, yielding effective data; preprocessing the down-sampled effective data by encoding the three-dimensional point cloud into a two-dimensional bird's-eye view; dividing the points into ground points and object points with a deep learning network, and performing obstacle detection only among the object points; dividing the object point set into several isolated point cloud clusters; and classifying the clusters by their point cloud features, computing the circumscribed (bounding) polyhedron of each clustered point cloud belonging to the same target, and thereby distinguishing the objects to be detected. The method can meet real-time and low-power requirements simultaneously, delivers better detection performance, and safeguards the safety requirements of helicopter flight.

Description

Ground object semantic segmentation method based on helicopter airborne laser radar
Technical Field
The invention relates to the technical field of real-time processing of helicopter airborne laser radar data, and in particular to a ground object semantic segmentation method based on a helicopter airborne laser radar.
Background
Laser radar has the advantages of high measurement precision, fine temporal and spatial resolution, and long measurement range. A helicopter can take off, land, and hover, and can perform various complex maneuvers such as pitching and yawing at low altitude and low speed without changing its nose direction; this flexibility makes it well suited to surveying all kinds of complex terrain. Helicopter airborne laser radar is therefore increasingly widely applied in geodetic surveying, forest exploration, urban modeling, disaster assessment, and many other areas.
With the rapid development of artificial intelligence and the helicopter field, scene understanding has become crucial to the safety and effectiveness of automatic machine perception in complex dynamic scenes. Helicopters are generally equipped with various sensors, among which lidar sensors play a particularly important role in understanding the visual environment: a lidar system collects a sparse 3D point cloud that reconstructs the environment of the actual scene and helps automatic systems make decisions and better understand the scene, so semantic understanding of the point cloud scene is crucial to helicopter flight. Research has also found that the ground provides useful information that effectively removes the ambiguity caused by data sparsity, and that the relation between objects and the ground benefits semantic segmentation and prediction. The key research questions are therefore how to effectively segment the point cloud semantics of a helicopter flight scene, how to segment the ground, and how to obtain the relation between objects and the ground.
Existing methods fall into two classes, deep-learning-based and non-deep-learning-based. Deep-learning-based methods have high computing-power demands and, under airborne conditions, struggle to meet real-time and low-power requirements simultaneously; non-deep-learning-based methods have poor detection performance and struggle to guarantee helicopter flight safety.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
The invention aims to provide a ground object semantic segmentation method based on a helicopter airborne laser radar that meets real-time and low-power requirements simultaneously, delivers better detection performance, and safeguards the safety requirements of helicopter flight.
The invention provides a ground object semantic segmentation method based on a helicopter airborne laser radar, comprising the following steps:
S1: acquiring a real-time point cloud through the laser radar and collecting point cloud data, then proceeding to S2;
S2: passing the collected point cloud data through a down-sampling module, which reduces the point cloud data volume while preserving the point cloud features to obtain effective data, then proceeding to step S3;
S3: preprocessing the effective data obtained after down-sampling by encoding the three-dimensional point cloud into a two-dimensional bird's-eye view, then proceeding to step S4;
S4: dividing the points into ground points and object points using a deep learning network, and performing obstacle detection only among the object points, then proceeding to step S5;
S5: dividing the object point set into several isolated point cloud clusters by Euclidean clustering, then proceeding to step S6;
S6: classifying the clusters using the features of the point cloud clusters, computing the circumscribed (bounding) polyhedron of each clustered point cloud belonging to the same target, and classifying the polyhedra by their attributes to obtain the objects to be detected.
Further, step S1 comprises the following steps:
S11: the laser radar sends UDP packets;
S12: the server end receives and unpacks the packets, removes dead points, and performs coordinate system conversion;
S13: point cloud data acquisition is complete.
Further, down-sampling is performed by voxel filtering in step S2.
Further, step S3 comprises the following steps:
S31: encoding the three-dimensional point cloud into a two-dimensional bird's-eye view within the ROI;
S32: encoding the two-dimensional feature map using the maximum height value and the median height of the point cloud falling into each grid cell.
Further, the objects to be detected include buildings, poles, windmills, high-voltage transmission towers, and signal towers.
Further, step S3 further comprises:
establishing an ROI of −128 m < x ≤ 128 m and 0 < z ≤ 256 m in the laser radar coordinate system; converting the point cloud into the pixel coordinate system, where each pixel corresponds to a 0.5 m × 0.5 m cell and its value is taken from the highest point and the median height of the point cloud falling into the same grid cell; the converted three-channel values are (x, y, z) and the converted five-channel values are (x, y, z, yaw, pitch), yielding one five-channel and one three-channel two-dimensional feature map as the input of the network.
Further, yaw and pitch are the two angles that can be obtained from the three-dimensional coordinates.
Further, the five-channel two-dimensional feature map of step S3 is used as the input of the network and passed through a 1-by-1 convolution kernel with 32 output channels to obtain a feature map; the 32-channel feature map of the point cloud's three-dimensional coordinate tensor passes through multiple stages of feature extraction, feature map concatenation, and up-sampling, is fed to the loss function, and outputs the predicted class map, while the 1-channel feature map of the point cloud's three-dimensional coordinate tensor passes through a nearest-neighbor classification algorithm and a conditional random field to obtain the point cloud map.
Further, the backbone specifically comprises the following steps:
the three-channel two-dimensional feature map of step S3 undergoes three padded convolutions: a (3, 3) convolution expands the current backbone input feature map into a feature map with 9 times the original number of channels; that feature map is multiplied element-wise with a second branch; a (1, 1) two-dimensional convolution restores the channel count of the input feature map; and the result, concatenated with the original backbone input feature map, passes through a (3, 3) two-dimensional convolution to complete the backbone.
According to the ground object semantic segmentation method based on the helicopter airborne laser radar, ground-object separation greatly reduces the amount of data to be processed, and accurate ground-object separation greatly reduces the difficulty of subsequent target recognition, which is why a deep learning method is used for this stage. The subsequent bounding-polyhedron estimation and target classification can be performed with non-deep-learning methods under limited computing resources, ensuring real-time performance. The encoding method differs from existing methods: the three-dimensional point cloud is encoded into a two-dimensional bird's-eye view within the ROI, and the two-dimensional feature map is encoded with the maximum height value (radar z) and the median height of the point cloud falling into each grid cell, which keeps the point cloud density stable even far from the sensor and meets the helicopter's requirement for detection precision on distant objects of interest. Considering the large field of view under airborne conditions, structures that enlarge the receptive field, such as a U-shaped structure, are added to the original SqueezeSeg model.
Drawings
Fig. 1 is an algorithm block diagram of a ground object semantic segmentation method based on a helicopter airborne laser radar according to an embodiment of the present invention.
Fig. 2 is a flowchart of the ground object semantic segmentation method based on the helicopter airborne laser radar in fig. 1.
Fig. 3 is a processing module diagram of step S4 of the ground object semantic segmentation method based on the helicopter airborne laser radar in fig. 1.
Fig. 4 is an overall processing module diagram of the ground object semantic segmentation method based on the helicopter airborne laser radar in fig. 1.
Detailed Description
A detailed description of embodiments of the present invention is provided below in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but not to limit its scope.
The terms first, second, third, fourth and the like in the description and in the claims of the present invention are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
Fig. 1 is an algorithm block diagram of a ground object semantic segmentation method based on a helicopter airborne laser radar according to an embodiment of the present invention. Referring to fig. 1, the method for semantic segmentation of ground objects based on an airborne laser radar of a helicopter provided by the embodiment of the present invention includes the following steps:
S1: acquiring a real-time point cloud through the laser radar and collecting point cloud data, then proceeding to S2;
S2: passing the collected point cloud data through a down-sampling module, which reduces the point cloud data volume while preserving the point cloud features to obtain effective data (it should be noted that the down-sampling module reduces processing time), then proceeding to step S3;
S3: preprocessing the effective data obtained after down-sampling by encoding the three-dimensional point cloud into a two-dimensional bird's-eye view, then proceeding to step S4;
S4: dividing the points into ground points and object points (points not on the ground) using a deep learning network, and performing obstacle detection only among the object points, then proceeding to step S5;
S5: dividing the object point set into several isolated point cloud clusters by Euclidean clustering (clustering the non-ground target point clouds), then proceeding to step S6;
S6: classifying the clusters using the features of the point cloud clusters, computing the circumscribed (bounding) polyhedron of each clustered point cloud belonging to the same target, and classifying the polyhedra by their attributes to obtain the objects to be detected. Specifically, the objects to be detected include buildings, poles, windmills, high-voltage transmission towers, signal towers, and the like; a sketch of steps S5 and S6 is given after this list.
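As a concrete illustration of steps S5 and S6, the following Python sketch implements Euclidean clustering as a KD-tree flood fill and reduces the circumscribed polyhedron to an axis-aligned bounding box; the 1 m distance tolerance and the minimum cluster size are illustrative assumptions, not values taken from the patent.

import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, tol=1.0, min_size=10):
    # S5: flood-fill groups of points whose neighbours lie within tol metres.
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = [seed], [seed]
        while queue:
            neighbours = tree.query_ball_point(points[queue.pop()], tol)
            fresh = [n for n in neighbours if n in unvisited]
            unvisited.difference_update(fresh)
            queue.extend(fresh)
            members.extend(fresh)
        if len(members) >= min_size:  # drop tiny, likely-noise clusters
            clusters.append(points[members])
    return clusters

def bounding_box(cluster):
    # S6 (simplified): axis-aligned circumscribed box as (min corner, max corner).
    return cluster.min(axis=0), cluster.max(axis=0)

Attributes of each box, such as height, footprint, and aspect ratio, can then drive the rule-based classification of the detected objects described above.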
According to the ground object semantic segmentation method based on the helicopter airborne laser radar, ground-object separation greatly reduces the amount of data to be processed, and accurate ground-object separation greatly reduces the difficulty of subsequent target recognition, which is why a deep learning method is used for this stage. The subsequent bounding-polyhedron estimation and target classification can be performed with non-deep-learning methods under limited computing resources, ensuring real-time performance. Because computing resources are limited, the deep learning network should not be a point cloud network with excessive computational cost; the design is instead optimized from an image network, so that point cloud data can be processed while real-time performance is guaranteed.
Fig. 2 is a flowchart of the ground object semantic segmentation method based on the helicopter airborne laser radar in fig. 1. Referring to fig. 2, step S1 includes the following steps:
S11: the laser radar sends UDP (User Datagram Protocol) packets;
S12: the server (receiving) end receives and unpacks the packets, removes dead points, and performs coordinate system conversion;
Specifically, dead points include points whose distance or reflection intensity is implausible, as well as invalid points.
S13: point cloud data acquisition is complete; a sketch of this acquisition step is given below.
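The following Python sketch shows one way steps S11 to S13 might be implemented. The port number and packet layout (x, y, z, intensity as little-endian floats) are hypothetical, since real lidar packet formats are vendor-specific, and the dead-point thresholds are illustrative.

import socket
import struct

LIDAR_PORT = 2368    # assumed port; vendor-dependent
POINT_FMT = "<ffff"  # hypothetical record: x, y, z, intensity
POINT_SIZE = struct.calcsize(POINT_FMT)

def receive_points(max_packets=10):
    # S11/S12: receive UDP packets, unpack points, drop dead points.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", LIDAR_PORT))
    points = []
    for _ in range(max_packets):
        data, _addr = sock.recvfrom(65535)
        for off in range(0, len(data) - POINT_SIZE + 1, POINT_SIZE):
            x, y, z, intensity = struct.unpack_from(POINT_FMT, data, off)
            rng = (x * x + y * y + z * z) ** 0.5
            # Dead points: implausible range or reflection intensity.
            if 0.5 < rng < 300.0 and 0.0 < intensity <= 255.0:
                points.append((x, y, z))  # coordinate system conversion would follow
    sock.close()
    return points  # S13: acquisition complete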
Further, in step S2, down-sampling is performed by voxel filtering, which reduces the point cloud data volume while effectively retaining the point cloud features and thereby reduces processing time; a sketch follows.
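A minimal numpy sketch of this voxel-grid down-sampling, assuming each occupied voxel is replaced by the centroid of its points; the 0.5 m leaf size is an illustrative value, not specified by the patent.

import numpy as np

def voxel_filter(points, leaf=0.5):
    # points: (N, 3) array; returns one centroid per occupied voxel.
    voxel_idx = np.floor(points / leaf).astype(np.int64)  # voxel index of each point
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)                         # flatten for bincount
    counts = np.bincount(inverse).astype(float)
    centroids = np.empty((counts.size, 3))
    for d in range(3):                                    # per-axis mean of member points
        centroids[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    return centroids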
Step S3 includes the following steps:
S31: encoding the three-dimensional point cloud into a two-dimensional bird's-eye view within the ROI (region of interest);
S32: encoding the two-dimensional feature map using the maximum height value and the median height of the point cloud falling into each grid cell.
It should be noted that the bird's-eye-view mapping converts the point cloud into two-dimensional feature maps:
First, an ROI (region of interest) is established, for example −128 m < x ≤ 128 m and 0 < z ≤ 256 m in the laser radar coordinate system, and only the point cloud within this range is converted. The point cloud is converted into the pixel coordinate system and a statistical value is generated per cell: each statistical unit (one pixel of the reference image) corresponds to a 0.5 m × 0.5 m cell, and the value of the pixel is taken from the highest point (minimum radar z) and the median height of the point cloud falling into the same grid cell, by analogy with the three channels of a color image, which hold the components of a color space, such as the blue, green, and red components of the BGR color space. The converted three-channel values are (x, y, z) and the converted five-channel values are (x, y, z, yaw, pitch), where yaw and pitch are two angles computed from the three-dimensional coordinates. One five-channel and one three-channel two-dimensional feature map are obtained as the input of the network; a sketch of this encoding is given below.
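A minimal numpy sketch of this encoding under stated assumptions: the y axis is taken as height and z as the forward range (the patent does not fully specify its axis conventions), the highest point in each cell supplies the (x, y, z, yaw, pitch) channels, the median height is kept as a separate one-channel map, and the yaw and pitch definitions are assumed.

import numpy as np

ROI_X = (-128.0, 128.0)  # metres, from the description
ROI_Z = (0.0, 256.0)     # metres, forward range, from the description
CELL = 0.5               # metres per pixel, from the description

def encode_bev(points):
    # points: (N, 3) array of (x, y, z); y assumed to be height.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    m = (x > ROI_X[0]) & (x <= ROI_X[1]) & (z > ROI_Z[0]) & (z <= ROI_Z[1])
    x, y, z = x[m], y[m], z[m]
    W = int((ROI_X[1] - ROI_X[0]) / CELL)  # 512 columns
    H = int((ROI_Z[1] - ROI_Z[0]) / CELL)  # 512 rows
    col = np.clip(((x - ROI_X[0]) / CELL).astype(int), 0, W - 1)
    row = np.clip(((z - ROI_Z[0]) / CELL).astype(int), 0, H - 1)
    yaw = np.arctan2(x, z)                 # horizontal angle (assumed definition)
    pitch = np.arctan2(y, np.hypot(x, z))  # elevation angle (assumed definition)
    five = np.zeros((H, W, 5), dtype=np.float32)  # (x, y, z, yaw, pitch) of highest point
    median = np.zeros((H, W), dtype=np.float32)   # median height per cell
    top = np.full((H, W), -np.inf)
    heights = {}
    for i in range(len(x)):
        r, c = row[i], col[i]
        heights.setdefault((r, c), []).append(y[i])
        if y[i] > top[r, c]:               # keep the highest point's features
            top[r, c] = y[i]
            five[r, c] = (x[i], y[i], z[i], yaw[i], pitch[i])
    for (r, c), hs in heights.items():
        median[r, c] = float(np.median(hs))
    return five, median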
Fig. 4 is an overall processing module diagram of the ground object semantic segmentation method based on the helicopter airborne laser radar in fig. 1. Referring to fig. 4, the five-channel two-dimensional feature map of step S3 is used as the input of the network and passed through a 1-by-1 convolution kernel with 32 output channels to obtain a feature map; the 32-channel feature map of the point cloud's three-dimensional coordinate tensor passes through multiple stages of feature extraction, feature map concatenation, and up-sampling, is fed to the loss function, and outputs the predicted class map, while the 1-channel feature map of the point cloud's three-dimensional coordinate tensor passes through a nearest-neighbor classification algorithm and a conditional random field to obtain the point cloud map. A sketch of such a network skeleton is given below.
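A minimal PyTorch sketch of such a network skeleton: a 1-by-1 stem to 32 channels, a small U-shaped encoder-decoder with skip concatenation, and a per-pixel class head. The depths, widths, and number of classes are assumptions, and the loss computation and conditional-random-field post-processing are omitted.

import torch
import torch.nn as nn

class BevSegNet(nn.Module):
    def __init__(self, in_ch=5, num_classes=6):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, 32, kernel_size=1)  # 1x1 conv to 32 channels
        self.enc1 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(128, 64, 3, padding=1), nn.ReLU())
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, num_classes, 1)        # per-pixel class scores

    def forward(self, x):
        s0 = self.stem(x)
        s1 = self.enc1(s0)
        s2 = self.enc2(s1)
        d1 = self.dec1(torch.cat([self.up1(s2), s1], dim=1))  # U-shaped skip concat
        d2 = self.dec2(torch.cat([self.up2(d1), s0], dim=1))
        return self.head(d2)

# Usage: BevSegNet()(torch.randn(1, 5, 512, 512)) -> logits of shape (1, 6, 512, 512)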
Fig. 3 is a processing module diagram of step S4 of the ground object semantic segmentation method based on the helicopter airborne laser radar in fig. 1. Referring to fig. 3, the backbone (feature extraction) specifically comprises the following steps:
The three-channel two-dimensional feature map of step S3 undergoes three padded convolutions: a (3, 3) convolution expands the current backbone input feature map into a feature map with 9 times the original number of channels; that feature map is multiplied element-wise with a second branch; a (1, 1) two-dimensional convolution restores the channel count of the input feature map; and the result, concatenated with the original backbone input feature map, passes through a (3, 3) two-dimensional convolution to complete the backbone. A speculative sketch of one such block is given below.
Based on the above description, the present invention has the following advantages:
1. A semantic segmentation algorithm based on deep learning realizes ground-object separation for laser point cloud data; it reduces the data volume of subsequent processing, improves the reliability of subsequent processing such as target recognition, and improves the accuracy and robustness of ground-object separation compared with common separation methods based on a prior map;
2. Considering the large lidar field of view under airborne conditions and the need to detect small targets such as cables and people, a U-shaped structure is added to the original SqueezeSeg network, enriching the detail information in the deep layers of the network and improving small-target detection performance;
3. When the laser point cloud is encoded in two dimensions, the conventional encoding based on angular resolution is abandoned in favor of a distance-based encoding, because the point cloud is sparse at long range and the helicopter is more concerned with detection performance on distant targets.
The above description covers only specific embodiments of the present invention, but the scope of the invention is not limited thereto; any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed herein shall be covered by the scope of the invention. The protection scope of the present invention shall therefore be subject to the appended claims.

Claims (9)

1. A ground object semantic segmentation method based on a helicopter airborne laser radar, characterized by comprising the following steps:
S1: acquiring a real-time point cloud through the laser radar and collecting point cloud data, then proceeding to S2;
S2: passing the collected point cloud data through a down-sampling module, which reduces the point cloud data volume while preserving the point cloud features to obtain effective data, then proceeding to step S3;
S3: preprocessing the effective data obtained after down-sampling by encoding the three-dimensional point cloud into a two-dimensional bird's-eye view, then proceeding to step S4;
S4: dividing the points into ground points and object points using a deep learning network, and performing obstacle detection only among the object points, then proceeding to step S5;
S5: dividing the object point set into several isolated point cloud clusters by Euclidean clustering, then proceeding to step S6;
S6: classifying the clusters using the features of the point cloud clusters, computing the circumscribed (bounding) polyhedron of each clustered point cloud belonging to the same target, and classifying the polyhedra by their attributes to obtain the objects to be detected.
2. The ground object semantic segmentation method based on a helicopter airborne laser radar according to claim 1, wherein step S1 comprises the following steps:
S11: the laser radar sends UDP packets;
S12: the server end receives and unpacks the packets, removes dead points, and performs coordinate system conversion;
S13: point cloud data acquisition is complete.
3. The ground object semantic segmentation method based on a helicopter airborne laser radar according to claim 1, wherein in step S2 the down-sampling is performed by voxel filtering.
4. The ground object semantic segmentation method based on a helicopter airborne laser radar according to claim 1, wherein step S3 comprises the following steps:
S31: encoding the three-dimensional point cloud into a two-dimensional bird's-eye view within the ROI;
S32: encoding the two-dimensional feature map using the maximum height value and the median height of the point cloud falling into each grid cell.
5. The ground object semantic segmentation method based on a helicopter airborne laser radar according to claim 1, wherein the objects to be detected comprise buildings, poles, windmills, high-voltage transmission towers, and signal towers.
6. The ground object semantic segmentation method based on a helicopter airborne laser radar according to claim 1, wherein step S3 further comprises:
establishing an ROI of −128 m < x ≤ 128 m and 0 < z ≤ 256 m in the laser radar coordinate system; converting the point cloud into the pixel coordinate system, where each pixel corresponds to a 0.5 m × 0.5 m cell and its value is taken from the highest point and the median height of the point cloud falling into the same grid cell; the converted three-channel values being (x, y, z) and the converted five-channel values being (x, y, z, yaw, pitch), thereby obtaining one five-channel and one three-channel two-dimensional feature map as the input of the network.
7. The method according to claim 6, wherein yaw and pitch are the two angles that can be obtained from the three-dimensional coordinates.
8. The ground object semantic segmentation method based on a helicopter airborne laser radar according to claim 6, wherein the five-channel two-dimensional feature map of step S3 is used as the input of the network and passed through a 1-by-1 convolution kernel with 32 output channels to obtain a feature map; the 32-channel feature map of the point cloud's three-dimensional coordinate tensor passes through multiple stages of feature extraction, feature map concatenation, and up-sampling, is fed to the loss function, and outputs the predicted class map, while the 1-channel feature map of the point cloud's three-dimensional coordinate tensor passes through a nearest-neighbor classification algorithm and a conditional random field to obtain the point cloud map.
9. The ground object semantic segmentation method based on a helicopter airborne laser radar according to claim 8, wherein the backbone specifically comprises the following steps:
the three-channel two-dimensional feature map of step S3 undergoes three padded convolutions: a (3, 3) convolution expands the current backbone input feature map into a feature map with 9 times the original number of channels; that feature map is multiplied element-wise with a second branch; a (1, 1) two-dimensional convolution restores the channel count of the input feature map; and the result, concatenated with the original backbone input feature map, passes through a (3, 3) two-dimensional convolution to complete the backbone.
CN202111484048.0A 2021-12-07 2021-12-07 Ground object semantic segmentation method based on helicopter airborne laser radar Pending CN114140470A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111484048.0A CN114140470A (en) 2021-12-07 2021-12-07 Ground object semantic segmentation method based on helicopter airborne laser radar


Publications (1)

Publication Number Publication Date
CN114140470A 2022-03-04

Family

ID=80384783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111484048.0A Pending CN114140470A (en) 2021-12-07 2021-12-07 Ground object semantic segmentation method based on helicopter airborne laser radar

Country Status (1)

Country Link
CN (1) CN114140470A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507982A (en) * 2019-06-28 2020-08-07 浙江大学 Point cloud semantic segmentation method based on deep learning
WO2021004813A1 (en) * 2019-07-08 2021-01-14 Continental Automotive Gmbh Method and mobile entity for detecting feature points in an image
US10929694B1 (en) * 2020-01-22 2021-02-23 Tsinghua University Lane detection method and system based on vision and lidar multi-level fusion
CN111583337A (en) * 2020-04-25 2020-08-25 华南理工大学 Omnibearing obstacle detection method based on multi-sensor fusion
CN112200248A (en) * 2020-10-13 2021-01-08 北京理工大学 Point cloud semantic segmentation method, system and storage medium under urban road environment based on DBSCAN clustering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chenfeng Xu, Bichen Wu, Zining Wang, Wei Zhan, Peter Vajda, Kurt Keutzer, and Masayoshi Tomizuka: "SqueezeSegV3: Spatially-Adaptive Convolution for Efficient Point-Cloud Segmentation", Computer Vision - ECCV 2020, Part XXVIII, 13 April 2021, pages 1-19 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023193400A1 (en) * 2022-04-06 2023-10-12 合众新能源汽车股份有限公司 Point cloud detection and segmentation method and apparatus, and electronic device
CN115356740A (en) * 2022-08-09 2022-11-18 群周科技(上海)有限公司 Landing positioning method for landing area in airborne environment
CN117292140A (en) * 2023-10-17 2023-12-26 小米汽车科技有限公司 Point cloud data processing method and device, vehicle and storage medium
CN117292140B (en) * 2023-10-17 2024-04-02 小米汽车科技有限公司 Point cloud data processing method and device, vehicle and storage medium

Similar Documents

Publication Publication Date Title
CN111832655B (en) Multi-scale three-dimensional target detection method based on characteristic pyramid network
CN114140470A (en) Ground object semantic segmentation method based on helicopter airborne laser radar
Chen et al. Distribution line pole detection and counting based on YOLO using UAV inspection line video
CN113902897B (en) Training of target detection model, target detection method, device, equipment and medium
CN113610044B (en) 4D millimeter wave three-dimensional target detection method and system based on self-attention mechanism
CN106094569A (en) Multi-sensor Fusion unmanned plane perception with evade analogue system and emulation mode thereof
CN111126184B (en) Post-earthquake building damage detection method based on unmanned aerial vehicle video
CN111008975B (en) Mixed pixel unmixing method and system for space artificial target linear model
CN107479065B (en) Forest gap three-dimensional structure measuring method based on laser radar
CN114140471A (en) Ground target semantic segmentation method based on helicopter airborne laser radar
CN112288667B (en) Three-dimensional target detection method based on fusion of laser radar and camera
CN110717496B (en) Complex scene tree detection method based on neural network
Awrangjeb et al. Rule-based segmentation of LIDAR point cloud for automatic extraction of building roof planes
CN113269147B (en) Three-dimensional detection method and system based on space and shape, and storage and processing device
CN117274749B (en) Fused 3D target detection method based on 4D millimeter wave radar and image
CN112630160A (en) Unmanned aerial vehicle track planning soil humidity monitoring method and system based on image acquisition and readable storage medium
CN114038193A (en) Intelligent traffic flow data statistical method and system based on unmanned aerial vehicle and multi-target tracking
CN117808689A (en) Depth complement method based on fusion of millimeter wave radar and camera
CN111458691B (en) Building information extraction method and device and computer equipment
CN114359754B (en) Unmanned aerial vehicle power inspection laser point cloud real-time power transmission wire extraction method
CN111337932A (en) Power grid infrastructure construction checking method based on airborne laser radar system
Li et al. Vehicle object detection based on rgb-camera and radar sensor fusion
CN111812670B (en) Single photon laser radar space transformation noise judgment and filtering method and device
CN112613437B (en) Method for identifying illegal buildings
WO2021179583A1 (en) Detection method and detection device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination