CN112731436B - Multi-mode data fusion travelable region detection method based on point cloud up-sampling - Google Patents


Info

Publication number
CN112731436B
Authority
CN
China
Prior art keywords
point cloud
image
pixel
detection
sampling
Prior art date
Legal status
Active
Application number
CN202011501003.5A
Other languages
Chinese (zh)
Other versions
CN112731436A (en)
Inventor
金晓
沈会良
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN202011501003.5A
Publication of CN112731436A
Application granted
Publication of CN112731436B


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/006: Theoretical aspects
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48: Details of systems according to group G01S17/00
    • G01S7/4802: Details of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-mode data fusion travelable region detection method based on point cloud up-sampling, which comprises two main parts: adaptive up-sampling of the spatial point cloud, and travelable-region detection by multi-mode data fusion. The camera and the laser radar are registered through a joint calibration algorithm, the point cloud is projected onto the image plane to obtain a sparse point cloud image, edge intensity information is computed in a local window around each pixel, and a point cloud up-sampling scheme is selected adaptively to obtain a dense point cloud image; feature extraction and cross fusion are then performed on the dense point cloud image and the RGB image to realize fast detection of the travelable region. The method achieves fast and accurate detection and segmentation of the travelable region.

Description

Multi-mode data fusion travelable region detection method based on point cloud up-sampling
Technical Field
The invention relates to a multi-mode data fusion travelable region detection method based on point cloud up-sampling, which comprises two main parts: adaptive up-sampling of the spatial point cloud, and travelable-region detection by multi-mode data fusion.
Background
Depending on the type of sensor used, current travelable-region detection algorithms are mainly camera-based or lidar-based. The camera has the advantages of low cost, high frame rate and high resolution, but it is easily disturbed by factors such as weather and therefore lacks robustness. The laser radar, on the other hand, acquires data as a three-dimensional point cloud; although its resolution is lower and its cost higher, it offers high three-dimensional measurement accuracy and strong resistance to interference, so it is widely applied in unmanned systems. To cope with the sparsity of the point cloud, some existing methods perform up-sampling with joint bilateral filtering, for example by estimating weights in a local window to obtain dense spatial information, but most of them suffer from blurred edge recovery and insufficient preservation of detail.
As the accuracy requirements of travelable-region detection continue to rise, detection with a single sensor can be reliable in some scenes but has inherent limitations. To obtain better detection results, fusion methods based on images and point clouds have therefore been emerging continuously.
Zhang Y et al. ("Fusion of LiDAR and Camera by Scanning in LiDAR Imagery and Image-Guided Diffusion for Urban Road Detection", 2018: 579-584) propose a conventional camera and lidar fusion method. After a preliminary screening of the point cloud, the discrete points of the travelable region are determined by row and column scanning, and the image is then used as a guide to achieve pixel-level segmentation of the road area. Its drawback is that the image information is not fully exploited during detection, so it is not suited to poorly structured road scenes.
Disclosure of Invention
To overcome the above defects, the technical problem to be solved by the invention is to provide a spatial point cloud up-sampling method that adapts to edge intensity and strengthens the preservation of edge and detail information.
Accordingly, another object of the invention is to provide a travelable-region detection framework that fully fuses point cloud and image features.
For travelable-region detection on an intelligent vehicle, the technical solution mainly comprises the following steps: adaptive up-sampling of the sparse point cloud based on pixel edge intensity is completed first; the synchronized RGB image and dense point cloud image are then taken as input for feature extraction and fusion, and the detection result is output.
The invention is realized by the following technical scheme:
the method comprises the steps of calibrating a camera and a laser radar through a joint calibration algorithm, projecting the point cloud to an image plane to obtain a sparse point cloud image, calculating edge intensity information by utilizing a pixel local window, and adaptively selecting a point cloud up-sampling scheme to obtain a dense point cloud image; and carrying out feature extraction and cross fusion on the obtained dense point cloud image and the RGB image to realize quick detection of the drivable area.
In the above technical solution, further, edge intensity information is computed in a local window around each pixel of the sparse point cloud image, so that pixels are divided into non-edge and edge regions and adaptive up-sampling is completed accordingly. Computing edge intensity information with a pixel local window specifically means: for each pixel, the edge intensity is computed from the point cloud distances within the pixel local window according to the following formula; when the edge intensity is greater than a specified threshold τ the pixel is regarded as lying in an edge region, otherwise in a non-edge region:
where σ denotes the standard deviation operation, d̄ denotes the average point cloud distance within the window, and λ is a fixed parameter. The pixel local window is a neighborhood window centered on the pixel, and the edge intensity represents the likelihood that the pixel lies on an edge.
Further, the adaptive selection of the point cloud up-sampling scheme is specifically as follows: for pixels in non-edge regions, a spatial Gaussian kernel within the neighborhood window is sufficient to complete the computation; for edge pixels, relying on spatial position alone tends to blur the recovered edges, so color information is introduced: initial weights of all points are first computed from color and spatial-position Gaussian kernels; on this basis all points in the local window are divided into foreground and background points according to the average depth of the point cloud, the number of points and the sum of weights of the two classes are counted, the weight of each point is adjusted accordingly, and finally the spatial position information of the pixel to be computed is estimated. Foreground points are points whose depth is smaller than the average depth, and background points are points whose depth is greater than or equal to the average depth.
As another improvement of the invention, feature extraction and cross fusion are performed on the dense point cloud image and the RGB image as follows: the synchronized dense point cloud image and RGB image are taken as input, and feature extraction and cross fusion are carried out by a multi-layer convolutional network; the network combines dilated convolution with a pyramid pooling module, so that the receptive field grows rapidly and multi-scale context information is aggregated. A loss function that focuses on the detection results of difficult-to-detect regions and non-road regions is adopted, which improves detection accuracy while ensuring driving safety.
The beneficial effects of the invention are as follows:
Compared with the traditional joint bilateral filtering up-sampling algorithm, the edge-intensity-based adaptive point cloud up-sampling method recovers scene detail more reliably and improves accuracy. Meanwhile, the RGB image and dense point cloud image fusion adopted by the invention effectively fuses the features of multi-mode data, combines the advantages of the two sensors, and achieves fast and accurate detection and segmentation of the travelable region. The multi-layer convolutional network enables rapid growth of the receptive field and aggregation of multi-scale information; in addition, the invention adopts a loss function that focuses on difficult-to-detect regions and non-road regions, so the detection result of the travelable region is output accurately and reliably and the road region is detected and segmented quickly.
Drawings
The following describes the embodiments of the present invention in further detail with reference to the drawings.
FIG. 1 is a flow chart of a multi-modal data fusion travelable region detection method based on point cloud upsampling;
FIG. 2 (a) is a sparse point cloud image, and (b) is the edge-region representation of the same scene;
FIG. 3 (a) is the joint bilateral filtering up-sampling result, and (b) is the up-sampling result of the method of the invention;
FIG. 4 compares the travelable-region detection results of three networks with the corresponding scene image (Image) and ground-truth label (Label): a detection network with image input only (RGB), a detection network with dense point cloud input only (Lidar), and the multi-mode data fusion detection network of the invention (Fusion);
FIG. 5 is a block diagram of the multi-layer convolutional network of the invention.
Detailed Description
As shown in fig. 1, a specific embodiment of the multi-mode data fusion travelable-region detection method based on point cloud up-sampling provided by the invention is as follows:
1. Camera intrinsic calibration and joint camera-lidar extrinsic calibration are carried out as follows.
1.1 The positions of the camera and the laser radar are fixed, and point cloud and image data are acquired synchronously based on a hardware trigger mechanism;
1.2 The camera intrinsics are obtained from monocular calibration, and at the same time the plane equation of the calibration board in each frame is obtained in both the camera and lidar coordinate systems, denoted a_{c,i} and a_{l,i} respectively, where i denotes the frame index, c the camera coordinate system, and l the lidar coordinate system. With θ denoting the normal vector of the calibration-board plane, X a spatial point on the plane, and d the distance from the coordinate-system origin to the plane, the plane constraints are
a_{c,i}: θ_{c,i}·X + d_{c,i} = 0
a_{l,i}: θ_{l,i}·X + d_{l,i} = 0
1.3 The following optimization problem is constructed and solved for the rotation matrix R and the translation vector t, where L denotes the number of points on the plane in each frame and num is the total number of frames.
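The optimization equation itself appears in the patent only as an image; as a hedged illustration of the constraint described in 1.2-1.3, the sketch below estimates R and t by minimizing the point-to-plane residuals of the lidar plane points transformed into the camera frame. The SciPy-based formulation, the function names, and the rotation-vector parameterization are assumptions of this sketch, not the patent's exact equation.

```python
# Illustrative sketch (assumption): estimate R, t by minimizing
# sum_i sum_j (theta_{c,i} . (R x_{ij} + t) + d_{c,i})^2, i.e. the distance of
# lidar calibration-plane points, transformed into the camera frame, to the
# plane fitted in the camera frame.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def residuals(params, planes_cam, points_lidar):
    """params = [rx, ry, rz, tx, ty, tz]; planes_cam[i] = (theta_c_i, d_c_i);
    points_lidar[i] = (L_i, 3) lidar points on the calibration plane of frame i."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    res = []
    for (theta_c, d_c), pts in zip(planes_cam, points_lidar):
        pts_cam = pts @ R.T + t              # transform lidar points to the camera frame
        res.append(pts_cam @ theta_c + d_c)  # signed point-to-plane distances
    return np.concatenate(res)


def calibrate_extrinsics(planes_cam, points_lidar):
    x0 = np.zeros(6)                         # start from identity rotation, zero translation
    sol = least_squares(residuals, x0, args=(planes_cam, points_lidar))
    R = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    return R, sol.x[3:]
```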
2. According to the joint calibration result, the laser point cloud is projected onto the image plane to obtain an initial sparse point cloud image. For each pixel, the edge intensity T is computed from the point cloud distances within the local window according to the following formula; when the edge intensity is greater than a specified threshold τ (chosen as required, e.g., 1.1), the pixel is regarded as lying in an edge region.
where σ denotes the standard deviation operation, d̄ denotes the average point cloud distance within the window, and λ is a fixed parameter, here set to 3. Fig. 2 (a) and (b) show a sparse point cloud image and its corresponding edge-image representation, respectively.
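A minimal sketch of step 2 under stated assumptions: a pinhole camera with intrinsic matrix K and the lidar-to-camera extrinsics (R, t) from step 1 are assumed, and because the exact edge-intensity formula is shown only as an image, the combination λ·σ(d)/d̄ used below is just one plausible reading of the quantities it is said to involve.

```python
# Sketch of step 2 (hedged): project lidar points into the image to form a sparse
# depth map, then score each pixel by the spread of point-cloud distances in a
# local window.  T = lam * std / mean below is an assumed form, to be compared
# against the threshold tau.
import numpy as np


def project_to_sparse_depth(points, K, R, t, h, w):
    """points: (N, 3) lidar points; K: 3x3 intrinsics; R, t: lidar-to-camera extrinsics."""
    cam = points @ R.T + t
    cam = cam[cam[:, 2] > 0]                    # keep points in front of the camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective division
    depth = np.zeros((h, w), dtype=np.float32)  # 0 marks pixels with no point
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[valid], u[valid]] = cam[valid, 2]   # later points overwrite earlier ones
    return depth


def edge_intensity(depth, q, win=7, lam=3.0):
    """Edge score at pixel q = (row, col) from the valid depths in a (win x win) window."""
    y, x = q
    r = win // 2
    patch = depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
    d = patch[patch > 0]
    if d.size < 2:
        return 0.0
    return lam * d.std() / d.mean()             # assumed combination of sigma, mean, lambda
```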
3. According to the edge intensity information, each pixel in the image is classified as lying in a non-edge or edge region, and the corresponding point cloud up-sampling is completed accordingly, densifying the sparse point cloud into a dense point cloud image.
3.1 If the pixel q lies in a non-edge region, the weighted result is computed directly with a spatial Gaussian kernel over its neighborhood N(q), which avoids unsmooth point cloud reconstruction caused by excessive color differences.
3.2 If q lies in an edge region, to avoid over-blurring of the recovered edge, processing follows the idea of joint bilateral filtering up-sampling: an initial weight g(p) is first assigned to each point using the similarity of color and spatial position, where s denotes the summation calculation used to balance the spatial and color differences and I denotes the pixel values of the RGB image, specifically as follows
On this basis, considering the spatial distribution of the point cloud within the local window, the points are classified into foreground and background according to depth information, foreground points being those whose depth is smaller than the average depth and background points those whose depth is greater than or equal to it; c denotes the class (F or B) of a neighborhood point, m_c and n_c denote the number of points and the sum of weights of the two classes, and t_q denotes the edge intensity of the current pixel. The weight adjustment factor of each point is computed per class, as a whole as follows
m_c = |c|,
The spatial position information corresponding to the current pixel is then computed from the adjusted weights, as follows
In this step, the estimated quantity denotes the spatial position information of the pixel to be computed, d_p denotes the known spatial points within the neighborhood, K denotes a normalization factor, and σ_r and σ_I denote the standard deviations of the spatial domain and the color domain, respectively.
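A minimal sketch of step 3 for a single pixel q, under stated assumptions: the exact expression for g(p) and the per-class weight adjustment factors appear in the patent only as images, so the Gaussian forms and the simple per-class rebalancing below are illustrative stand-ins.

```python
# Sketch of step 3 (hedged): adaptive up-sampling of one pixel q. Non-edge pixels use a
# spatial Gaussian kernel only; edge pixels start from joint spatial/color weights g(p),
# split the neighbourhood into foreground/background by the mean depth, and rebalance
# the two classes before the weighted average.  The rescaling factor is illustrative.
import numpy as np


def upsample_pixel(q, neigh_xy, neigh_depth, rgb, sigma_r=3.0, sigma_i=10.0,
                   is_edge=False):
    """q: (row, col); neigh_xy: (M, 2) integer pixel coords of known points near q;
    neigh_depth: (M,) depths of those points; rgb: (H, W, 3) image."""
    d2 = np.sum((neigh_xy - np.asarray(q)) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma_r ** 2))                       # spatial Gaussian kernel
    if is_edge:
        dI = rgb[q[0], q[1]].astype(float) - rgb[neigh_xy[:, 0], neigh_xy[:, 1]].astype(float)
        w = w * np.exp(-np.sum(dI ** 2, axis=1) / (2 * sigma_i ** 2))  # g(p): add color term
        fg = neigh_depth < neigh_depth.mean()                  # foreground: closer than mean
        for mask in (fg, ~fg):                                 # rebalance the two classes
            if mask.any():
                w[mask] = w[mask] * mask.sum() / (w[mask].sum() + 1e-8)  # illustrative factor
    return float(np.sum(w * neigh_depth) / (np.sum(w) + 1e-8)) # weighted depth estimate
```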
4. The RGB image and the dense point cloud image obtained in step 3 are input simultaneously as two three-channel inputs to build the multi-mode data fusion travelable-region detection network (i.e., the multi-layer convolutional network). As shown in fig. 5, the network adopts a dual-encoder, single-decoder structure (the two encoders have the same structure but do not share parameters). The RGB image and the dense point cloud image serve as the two original inputs; at each level, the two feature maps are cross-fused through a 1×1 convolution and the result is used as the input of the next convolution layer. The encoder output is fed into a pyramid pooling module to obtain the final feature map; the decoder then restores the resolution, a Sigmoid function computes the probability that each pixel belongs to the travelable region, and a pixel is judged to belong to the travelable region when this probability exceeds a set threshold. The network combines dilated convolution with pyramid pooling, which rapidly enlarges the receptive field and aggregates multi-scale context information.
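An illustrative sketch of the network in step 4: the layer count, channel widths, dilation rates, and the way the fused map is fed back to both branches are assumptions of this sketch, not the patent's exact configuration; it only demonstrates the two structural ideas described above, dual encoders cross-fused by 1×1 convolutions and a pyramid pooling module ahead of the decoder (here a simple bilinear upsampling stands in for the decoder).

```python
# Sketch (hedged) of a dual-encoder fusion network with dilated convolutions,
# 1x1 cross-fusion per level, and a pyramid pooling module.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(cin, cout, dilation=1):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, stride=2, padding=1),        # downsample by 2
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True))


class FusionNet(nn.Module):
    def __init__(self, widths=(32, 64, 128), ppm_bins=(1, 2, 3, 6)):
        super().__init__()
        chans = [3] + list(widths)
        self.enc_rgb = nn.ModuleList(conv_block(chans[i], chans[i + 1], 2 ** i)
                                     for i in range(len(widths)))
        self.enc_pc = nn.ModuleList(conv_block(chans[i], chans[i + 1], 2 ** i)
                                    for i in range(len(widths)))
        self.fuse = nn.ModuleList(nn.Conv2d(2 * c, c, 1) for c in widths)  # 1x1 cross fusion
        self.ppm = nn.ModuleList(nn.Sequential(nn.AdaptiveAvgPool2d(b),
                                               nn.Conv2d(widths[-1], widths[-1] // 4, 1))
                                 for b in ppm_bins)
        self.head = nn.Conv2d(widths[-1] + len(ppm_bins) * (widths[-1] // 4), 1, 1)

    def forward(self, rgb, pc):
        x, y = rgb, pc
        for enc_r, enc_p, fuse in zip(self.enc_rgb, self.enc_pc, self.fuse):
            x, y = enc_r(x), enc_p(y)
            fused = fuse(torch.cat([x, y], dim=1))   # cross-fuse same-level features
            x, y = fused, fused                      # fused map feeds both next stages (assumed)
        feats = [x] + [F.interpolate(p(x), size=x.shape[2:], mode='bilinear',
                                     align_corners=False) for p in self.ppm]
        logits = self.head(torch.cat(feats, dim=1))
        logits = F.interpolate(logits, size=rgb.shape[2:], mode='bilinear',
                               align_corners=False)  # stand-in for the learned decoder
        return torch.sigmoid(logits)                 # per-pixel travelable-region probability
```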
In the supervised learning process, the loss function is designed as follows; it focuses on the detection results of difficult-to-detect regions and non-road regions, improving detection accuracy while ensuring driving safety.
where y = 1 and y = 0 denote positive and negative samples respectively, a positive sample being a road region and a negative sample a non-road region; a difficult-to-detect region is a region that is hard to detect, i.e., one whose detection result tends toward non-road for positive samples and toward road for negative samples; y' denotes the predicted probability; α and γ are fixed constants, here both set to 2.
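The loss expression itself is shown only as an image; its description (hard-to-detect pixels emphasized, α and γ fixed constants, both 2) matches a focal-style binary loss, so the following is a hedged reconstruction rather than the patent's exact formula.

```python
# Hedged reconstruction of the loss in step 4: a focal-style binary loss that
# down-weights easy pixels and emphasizes hard-to-detect regions.
# alpha = gamma = 2 follows the description above.
import torch


def focal_style_loss(p, y, alpha=2.0, gamma=2.0, eps=1e-7):
    """p: predicted travelable-region probability per pixel; y: ground truth (1 road, 0 non-road)."""
    p = p.clamp(eps, 1 - eps)
    pos = -alpha * (1 - p) ** gamma * torch.log(p)      # road pixels predicted as non-road
    neg = -alpha * p ** gamma * torch.log(1 - p)        # non-road pixels predicted as road
    return (y * pos + (1 - y) * neg).mean()
```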
The decoder restores the resolution of the feature map, the Sigmoid layer computes the probability that each pixel belongs to the road, and a pixel is judged to belong to the travelable region when this probability exceeds a set threshold (e.g., 0.5).
Example 1
This example compares the performance of the joint bilateral filtering up-sampling algorithm (JBU) with the edge-intensity-based adaptive up-sampling method of the invention. The sparse point cloud image is obtained by 5× down-sampling of the depth ground-truth image, and the up-sampling results of the two methods are compared. Fig. 3 (a) and (b) show the JBU up-sampling result and the result of the method of the invention, respectively. The method of the invention better prevents edge blurring while reducing the reconstruction error.
Example 2
This example compares, on the KITTI dataset, the travelable-region detection performance of the multi-mode data fusion network with that of a network using image data only and a network using point cloud data only. The detection results of the three networks are shown in fig. 4. It can be seen intuitively that the multi-mode data fusion travelable-region detection method further improves the accuracy of road detection, largely avoids false detection of vehicles, and improves the reliability of boundary detection.
The above detailed description of the embodiments and advantages is merely illustrative of preferred embodiments of the invention; any modifications, additions, substitutions, and equivalents made without departing from the scope of the invention shall fall within its protection.

Claims (3)

1. A multi-mode data fusion travelable region detection method based on point cloud up-sampling, characterized in that the method calibrates a camera and a laser radar through a joint calibration algorithm, projects the point cloud onto an image plane to obtain a sparse point cloud image, computes edge intensity information using a pixel local window, and adaptively selects a point cloud up-sampling scheme to obtain a dense point cloud image; feature extraction and cross fusion are performed on the obtained dense point cloud image and an RGB image to realize fast detection of the travelable region;
the edge intensity information is calculated by using the pixel local window, and specifically comprises the following steps: for each pixel, calculating edge intensity information by using a point cloud distance in a pixel local window according to the following formula, wherein when the edge intensity information is larger than a specified threshold tau, the pixel is considered to be in an edge region, otherwise, the pixel is considered to be in a non-edge region:
wherein sigma represents the standard deviation calculation,representing the average distance of point clouds in a window, wherein lambda is a fixed parameter;
the self-adaptive selection point cloud up-sampling scheme specifically comprises the following steps: for the pixels in the non-edge area, directly calculating a weighted result by using a space Gaussian kernel in a local window of the pixels; for the pixels of the edge area, firstly, the weights of all points in a local window are calculated by using the space and the color Gaussian kernel singly; secondly, dividing the point cloud into two types of foreground points and background points according to the average depth of the point cloud in the window, counting the number, the weight and the sum of the two types of points in the local window, and adjusting the weight of each point according to the number and the weight; finally, each point is weighted in the local window by using the updated weight, so that the spatial position information calculation of the pixel to be calculated is completed.
2. The multi-mode data fusion travelable region detection method based on point cloud up-sampling according to claim 1, characterized in that feature extraction and cross fusion are performed on the obtained dense point cloud image and the RGB image, specifically: the RGB image and the dense point cloud image are taken as input, feature extraction and cross fusion are carried out by a multi-layer convolutional network, the loss function focuses on the detection results of difficult-to-detect regions and non-road regions, and the detection probability of the travelable region is output;
the loss function is as follows:
where y = 1 and y = 0 denote positive and negative samples respectively, a positive sample being a road region and a negative sample a non-road region; a difficult-to-detect region is a region that is hard to detect, i.e., one whose detection result tends toward non-road for positive samples and toward road for negative samples; y' denotes the predicted probability of the road region; α and γ are fixed constants.
3. The multi-mode data fusion travelable region detection method based on point cloud up-sampling according to claim 2, characterized in that the multi-layer convolutional network adopts a dual-encoder, single-decoder structure, the two encoders having the same structure but not sharing parameters; the RGB image and the dense point cloud image serve as the original inputs; the two feature maps output by the encoders at the same level are cross-fused by a 1×1 convolution, the fusion result is used as the input of the next convolution layer, and the down-sampled feature map obtained by the dual encoder is fed into a pyramid pooling module to obtain the final feature map output;
the decoder restores the resolution of the pyramid pooling module output, a Sigmoid function computes the probability that each pixel belongs to the travelable region, and a pixel is judged to belong to the travelable region when this probability exceeds a set threshold.
CN202011501003.5A 2020-12-17 2020-12-17 Multi-mode data fusion travelable region detection method based on point cloud up-sampling Active CN112731436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011501003.5A CN112731436B (en) 2020-12-17 2020-12-17 Multi-mode data fusion travelable region detection method based on point cloud up-sampling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011501003.5A CN112731436B (en) 2020-12-17 2020-12-17 Multi-mode data fusion travelable region detection method based on point cloud up-sampling

Publications (2)

Publication Number Publication Date
CN112731436A CN112731436A (en) 2021-04-30
CN112731436B (en) 2024-03-19

Family

ID=75603282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011501003.5A Active CN112731436B (en) 2020-12-17 2020-12-17 Multi-mode data fusion travelable region detection method based on point cloud up-sampling

Country Status (1)

Country Link
CN (1) CN112731436B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569803A (en) * 2021-08-12 2021-10-29 中国矿业大学(北京) Multi-mode data fusion lane target detection method and system based on multi-scale convolution
CN113850832A (en) * 2021-09-01 2021-12-28 的卢技术有限公司 Drivable region segmentation method
CN113945947B (en) * 2021-10-08 2024-08-06 南京理工大学 Method for detecting passable area of multi-line laser radar point cloud data
CN114677315B (en) 2022-04-11 2022-11-29 探维科技(北京)有限公司 Image fusion method, device, equipment and medium based on image and laser point cloud
CN116416586B (en) * 2022-12-19 2024-04-02 香港中文大学(深圳) Map element sensing method, terminal and storage medium based on RGB point cloud
CN116343159B (en) * 2023-05-24 2023-08-01 之江实验室 Unstructured scene passable region detection method, device and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012248004A (en) * 2011-05-27 2012-12-13 Toshiba Corp Image processing system, image recognition device and method
WO2015010451A1 (en) * 2013-07-22 2015-01-29 浙江大学 Method for road detection from one image
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
CN109405824A (en) * 2018-09-05 2019-03-01 武汉契友科技股份有限公司 A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile
CN109543600A (en) * 2018-11-21 2019-03-29 成都信息工程大学 A kind of realization drivable region detection method and system and application
CN110320504A (en) * 2019-07-29 2019-10-11 浙江大学 A kind of unstructured road detection method based on laser radar point cloud statistics geometrical model
CN110378196A (en) * 2019-05-29 2019-10-25 电子科技大学 A kind of road vision detection method of combination laser point cloud data
CN110705342A (en) * 2019-08-20 2020-01-17 上海阅面网络科技有限公司 Lane line segmentation detection method and device
CN110827397A (en) * 2019-11-01 2020-02-21 浙江大学 Texture fusion method for real-time three-dimensional reconstruction of RGB-D camera
CN111274976A (en) * 2020-01-22 2020-06-12 清华大学 Lane detection method and system based on multi-level fusion of vision and laser radar
CN111986164A (en) * 2020-07-31 2020-11-24 河海大学 Road crack detection method based on multi-source Unet + Attention network migration
CN112069870A (en) * 2020-07-14 2020-12-11 广州杰赛科技股份有限公司 Image processing method and device suitable for vehicle identification

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7983446B2 (en) * 2003-07-18 2011-07-19 Lockheed Martin Corporation Method and apparatus for automatic object identification

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012248004A (en) * 2011-05-27 2012-12-13 Toshiba Corp Image processing system, image recognition device and method
WO2015010451A1 (en) * 2013-07-22 2015-01-29 浙江大学 Method for road detection from one image
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
CN109405824A (en) * 2018-09-05 2019-03-01 武汉契友科技股份有限公司 A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile
CN109543600A (en) * 2018-11-21 2019-03-29 成都信息工程大学 A kind of realization drivable region detection method and system and application
CN110378196A (en) * 2019-05-29 2019-10-25 电子科技大学 A kind of road vision detection method of combination laser point cloud data
CN110320504A (en) * 2019-07-29 2019-10-11 浙江大学 A kind of unstructured road detection method based on laser radar point cloud statistics geometrical model
CN110705342A (en) * 2019-08-20 2020-01-17 上海阅面网络科技有限公司 Lane line segmentation detection method and device
CN110827397A (en) * 2019-11-01 2020-02-21 浙江大学 Texture fusion method for real-time three-dimensional reconstruction of RGB-D camera
CN111274976A (en) * 2020-01-22 2020-06-12 清华大学 Lane detection method and system based on multi-level fusion of vision and laser radar
CN112069870A (en) * 2020-07-14 2020-12-11 广州杰赛科技股份有限公司 Image processing method and device suitable for vehicle identification
CN111986164A (en) * 2020-07-31 2020-11-24 河海大学 Road crack detection method based on multi-source Unet + Attention network migration

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
A least-square-based approach to improve the accuracy of laser ranging; J. Xiao; 2014 International Conference on Multisensor Fusion and Information Integration for Intelligent Systems (MFI), Beijing, China, 2014; full text *
Feature enhancing aerial lidar point cloud refinement; Zhenzhen Gao; Proceedings of SPIE; full text *
LIDAR–camera fusion for road detection using fully convolutional neural networks; Luca Caltagirone; Robotics and Autonomous Systems; full text *
Building recognition in high-resolution remote sensing images based on deep learning; Song Tingqiang; Li Jixu; Zhang Xinye; Computer Engineering and Applications; 2020-08-31 (No. 08); full text *
Vehicle target detection method based on fusion of lidar point cloud and image; Hu Yuanzhi; Liu Junsheng; He Jia; Xiao Hang; Song Jia; Journal of Automotive Safety and Energy (No. 04); full text *
Research on joint calibration of lidar and camera based on the point cloud center; Kang Guohua; Zhang Qi; Zhang Han; Xu Weizheng; Zhang Wenhao; Chinese Journal of Scientific Instrument (No. 12); full text *
Road image boundary extraction method based on statistical testing; Tang Guowei, Wang Dong, Liu Xiande, Li Yongshu, He Mingge; Journal of Daqing Petroleum Institute (No. 03); full text *
Road segmentation model based on fused hierarchical conditional random fields; Yang Fei; Robot; full text *
Research on road surface recognition for service robots; Shao Shuai; Industrial Control Computer; full text *
Obstacle detection in grass by fusing range and color information; Xiang Zhiyu; Wang Wei; Opto-Electronic Engineering (No. 03); full text *

Also Published As

Publication number Publication date
CN112731436A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN112731436B (en) Multi-mode data fusion travelable region detection method based on point cloud up-sampling
CN111274976B (en) Lane detection method and system based on multi-level fusion of vision and laser radar
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN111563415B (en) Binocular vision-based three-dimensional target detection system and method
CN108416292B (en) Unmanned aerial vehicle aerial image road extraction method based on deep learning
CN110728658A (en) High-resolution remote sensing image weak target detection method based on deep learning
CN117094914B (en) Smart city road monitoring system based on computer vision
CN112505684A (en) Vehicle multi-target tracking method based on radar vision fusion under road side view angle in severe environment
CN112215306A (en) Target detection method based on fusion of monocular vision and millimeter wave radar
CN111738071B (en) Inverse perspective transformation method based on motion change of monocular camera
CN108830131B (en) Deep learning-based traffic target detection and ranging method
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
CN116612468A (en) Three-dimensional target detection method based on multi-mode fusion and depth attention mechanism
CN112613392A (en) Lane line detection method, device and system based on semantic segmentation and storage medium
CN116486368A (en) Multi-mode fusion three-dimensional target robust detection method based on automatic driving scene
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN117111055A (en) Vehicle state sensing method based on thunder fusion
CN118155149B (en) Intelligent monitoring system for smart city roads
CN115880658A (en) Automobile lane departure early warning method and system under night scene
CN114814827A (en) Pedestrian classification method and system based on 4D millimeter wave radar and vision fusion
CN117789146A (en) Visual detection method for vehicle road running under automatic driving scene
CN110992304B (en) Two-dimensional image depth measurement method and application thereof in vehicle safety monitoring
CN112052768A (en) Urban illegal parking detection method and device based on unmanned aerial vehicle and storage medium
CN115082897A (en) Monocular vision 3D vehicle target real-time detection method for improving SMOKE
CN114565764A (en) Port panorama sensing system based on ship instance segmentation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant