CN112731436A - Multi-mode data fusion travelable area detection method based on point cloud up-sampling - Google Patents

Multi-mode data fusion travelable area detection method based on point cloud up-sampling

Info

Publication number
CN112731436A
CN112731436A
Authority
CN
China
Prior art keywords
point cloud
area
detection
pixel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011501003.5A
Other languages
Chinese (zh)
Other versions
CN112731436B (en)
Inventor
金晓 (Jin Xiao)
沈会良 (Shen Huiliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202011501003.5A priority Critical patent/CN112731436B/en
Publication of CN112731436A publication Critical patent/CN112731436A/en
Application granted granted Critical
Publication of CN112731436B publication Critical patent/CN112731436B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/006 - Theoretical aspects
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-modal data fusion travelable-area detection method based on point cloud upsampling, which mainly comprises two parts: adaptive upsampling of the spatial point cloud, and travelable-area detection by multi-modal data fusion. A camera and a laser radar are registered by a joint calibration algorithm, the point cloud is projected onto the image plane to obtain a sparse point cloud image, edge intensity information is computed within a local window around each pixel, and a point cloud upsampling scheme is selected adaptively to obtain a dense point cloud image; feature extraction and cross fusion are then performed on the dense point cloud image and the RGB image to achieve fast detection of the travelable area. The method enables fast and accurate detection and segmentation of the travelable area.

Description

Multi-mode data fusion travelable area detection method based on point cloud up-sampling
Technical Field
The invention relates to a multi-modal data fusion travelable-area detection method based on point cloud upsampling, which mainly comprises two parts: adaptive upsampling of the spatial point cloud, and travelable-area detection by multi-modal data fusion.
Background
Depending on the type of sensor used, current algorithms for detecting travelable areas fall into two main categories: camera-based and lidar-based. Cameras offer low cost, high frame rate and high resolution, but are easily disturbed by factors such as weather, so their robustness is limited. Lidar, on the other hand, acquires data as three-dimensional point clouds; although its resolution is lower and its cost higher, it provides high three-dimensional measurement accuracy and strong resistance to interference, and is therefore widely applied in unmanned systems. To cope with the sparsity of the point cloud, some existing methods, such as joint bilateral filtering upsampling, perform weighted estimation within a local window to obtain dense spatial information, but most of them suffer from blurred edge recovery and insufficient preservation of detail.
As the accuracy requirements of travelable-area detection algorithms keep rising, detection with a single sensor, although reliable in some scenes, still has clear limitations. To obtain better detection results, fusion methods combining images and point clouds have continued to emerge.
Zhang Y et al. propose a conventional camera and LiDAR fusion method in the literature [Fusion of LiDAR and Camera by Scanning in LiDAR Image and Image-Guided Diffusion for an Urban Road Detection [J]. 2018: 579-584]. On the basis of a preliminary screening of the point cloud, the method determines the discrete point cloud of the travelable area using a row-and-column scanning idea, and then uses the image as guidance to achieve pixel-level segmentation of the road area. Its drawbacks are that image information is not fully exploited during detection and that it is poorly suited to road scenes with a low degree of structure.
Disclosure of Invention
In order to overcome the above defects, the technical problem to be solved by the present invention is to provide an edge-intensity-adaptive spatial point cloud upsampling method that better preserves edge and detail information.
Accordingly, another object of the present invention is to provide a travelable-area detection framework that can fully integrate point cloud and image features.
For travelable-area detection on intelligent vehicles, the invention mainly comprises the following steps: complete adaptive upsampling of the sparse point cloud based on pixel edge intensity; then take the synchronized RGB image and the dense point cloud image as input, perform feature extraction and fusion, and output the detection result.
The invention is realized by adopting the following technical scheme:
a multi-mode data fusion travelable area detection method based on point cloud upsampling comprises the steps of calibrating a camera and a laser radar through a joint calibration algorithm, projecting point cloud to an image plane to obtain a sparse point cloud picture, calculating edge intensity information by utilizing a pixel local window, and adaptively selecting a point cloud upsampling scheme to obtain a dense point cloud picture; and performing feature extraction and cross fusion on the obtained dense point cloud picture and the RGB image to realize rapid detection of the travelable area.
In the above technical solution, further, the edge intensity information may be computed within a local window around each pixel of the sparse point cloud image, dividing pixels into non-edge and edge regions and thereby completing the adaptive upsampling. Computing the edge intensity information with a pixel local window means, specifically: for each pixel, the edge intensity information is computed from the point-cloud distances within its local window according to the formula below; when the edge intensity exceeds a specified threshold τ the pixel is considered to lie in an edge region, otherwise in a non-edge region:
[edge-intensity formula, shown as an image in the original]
where σ denotes the standard-deviation operation, the averaged quantity (shown as a symbol image in the original) is the mean distance of the point-cloud points in the window, and λ is a fixed parameter. The pixel local window is a neighborhood window centered on the pixel, and the edge intensity information characterizes the likelihood that the pixel lies on an edge.
Furthermore, the adaptive selection of the point cloud upsampling scheme is specifically as follows. For pixels in a non-edge region, the estimate can be obtained well using only a spatial Gaussian kernel within the neighborhood window. For edge pixels, relying on spatial position alone blurs the recovered edges, so color information is introduced: an initial weight for each point is computed from Gaussian kernels on color and spatial position; each point in the local window is then classified as a foreground point or a background point according to the mean depth of the point cloud; the number of points and the sum of weights in each class are counted and used to adjust each point's weight; and finally the spatial position information of the pixel to be computed is estimated. Foreground points are those whose depth is smaller than the mean depth, and background points are those whose depth is greater than or equal to the mean depth.
As another improvement of the invention, the feature extraction and cross fusion of the dense point cloud image and the RGB image is specifically as follows: the synchronized dense point cloud image and RGB image are taken as input, feature extraction and cross fusion are performed by a multi-layer convolutional network, and the network is combined with dilated (atrous) convolution and a pyramid pooling module, so that the receptive field can be enlarged quickly and multi-scale context information can be aggregated. A loss function that emphasizes hard-to-detect regions and non-road regions is adopted, improving detection accuracy and ensuring vehicle driving safety.
The invention has the beneficial effects that:
Compared with the traditional joint bilateral filtering upsampling algorithm, the edge-intensity-based adaptive point cloud upsampling method recovers scene detail more reliably and improves accuracy. Meanwhile, the fusion of the RGB image and the dense point cloud image adopted by the invention effectively fuses the features of multi-modal data, combines the advantages of the two sensors, and achieves fast and accurate detection and segmentation of the travelable area. The multi-layer convolutional network enlarges the receptive field quickly and aggregates multi-scale information; in addition, the invention adopts a loss function that emphasizes hard-to-detect regions and non-road regions, so the travelable-area detection result can be output accurately and reliably, realizing fast detection and segmentation of the road area.
Drawings
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method for multi-modal data fusion travelable region detection based on point cloud upsampling;
FIG. 2(a) is a sparse point cloud image and (b) is a representation of the corresponding edge region of the scene;
FIG. 3(a) is the result of upsampling by joint bilateral filtering, (b) is the result of upsampling by the method of the present invention;
FIG. 4 compares the travelable-area detection results of three networks together with the corresponding scene image (Image) and ground-truth map (Label); the three networks are: the detection network RGB taking only the image as input, the detection network Lidar taking only the dense point cloud as input, and the multi-modal data fusion detection network Fusion of the invention;
fig. 5 is a diagram of a multi-layer convolutional network structure of the present invention.
Detailed Description
As shown in fig. 1, the specific implementation of the method for detecting a multi-modal data fusion travelable area based on point cloud upsampling provided by the present invention is as follows:
1. Camera intrinsic calibration and joint camera-lidar extrinsic calibration, specifically as follows.
1.1 fixing the positions of a camera and a laser radar, and synchronously acquiring point cloud and image data based on a hard trigger mechanism;
1.2 Obtain the camera intrinsic parameters by monocular calibration, and at the same time obtain the plane equations of the calibration board in the camera and lidar coordinate systems for each frame, denoted a_{c,i} and a_{l,i} respectively, where i is the frame index, c denotes the camera coordinate system and l denotes the lidar coordinate system. Let θ denote the normal vector of the calibration-board plane, X a spatial point on the plane, and d the distance from the coordinate-system origin to the plane; the plane constraints are
a_{c,i}: θ_{c,i}·X + d_{c,i} = 0
a_{l,i}: θ_{l,i}·X + d_{l,i} = 0
1.3 The following optimization is constructed to solve for the rotation matrix R and the translation vector t, where L is the number of points on the plane in each frame and num is the total number of frames:
[joint-calibration optimization formula, shown as an image in the original: a least-squares objective over the num frames and the L calibration-board points per frame, minimized with respect to R and t]
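As an illustration of step 1.3, the following Python sketch solves for R and t by nonlinear least squares. Since the optimization formula is reproduced only as an image in the original, the objective is assumed here to be the sum of squared point-to-plane distances of the calibration-board lidar points transformed into the camera frame; the function and variable names are hypothetical.

```python
# Illustrative sketch only: the exact objective in the patent is shown as an image, so a
# point-to-plane least-squares formulation is assumed. Names such as board_points_lidar,
# plane_normals_cam and plane_offsets_cam are hypothetical.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def calibrate_extrinsics(board_points_lidar, plane_normals_cam, plane_offsets_cam):
    """Estimate rotation R and translation t mapping lidar points into the camera frame.

    board_points_lidar : list of (L_i, 3) arrays, calibration-board points per frame (lidar frame)
    plane_normals_cam  : (num, 3) array, board plane normals theta_{c,i} in the camera frame
    plane_offsets_cam  : (num,)  array, board plane offsets d_{c,i} in the camera frame
    """

    def residuals(params):
        rot = Rotation.from_rotvec(params[:3])       # rotation parameterized as an axis-angle vector
        t = params[3:]
        res = []
        for pts, n, d in zip(board_points_lidar, plane_normals_cam, plane_offsets_cam):
            pts_cam = rot.apply(pts) + t             # transform lidar points into the camera frame
            res.append(pts_cam @ n + d)              # signed point-to-plane distances (should be ~0)
        return np.concatenate(res)

    sol = least_squares(residuals, x0=np.zeros(6))   # jointly solve for R (3 dof) and t (3 dof)
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```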
2. Project the laser point cloud onto the image plane according to the joint calibration result to obtain the initial sparse point cloud image. For each pixel, the edge intensity information T is computed from the point-cloud distances within its local window according to the following formula; when the edge intensity exceeds a specified threshold τ (chosen as needed, e.g. 1.1), the pixel is considered to lie in an edge region:
[edge-intensity formula, shown as an image in the original]
where σ denotes the standard-deviation operation, the averaged quantity (shown as a symbol image in the original) is the mean distance of the point-cloud points in the window, and λ is a fixed parameter, set to 3 here. Fig. 2(a) and (b) show a sparse point cloud image and its corresponding edge map, respectively.
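A minimal sketch of step 2 is given below. Because the exact edge-intensity expression appears only as a formula image, the way σ, the mean window distance and λ are combined here is an assumption; the function name and the simple looped implementation are illustrative only.

```python
# Minimal sketch of the per-pixel edge-intensity test of step 2. The exact formula is shown
# only as an image in the original; here T is assumed to be lambda times the standard deviation
# of the valid point-cloud distances in the local window, normalized by their mean.
import numpy as np


def edge_mask(sparse_depth, window=5, lam=3.0, tau=1.1):
    """Return a boolean map marking pixels whose local depth variation exceeds tau."""
    h, w = sparse_depth.shape
    r = window // 2
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            patch = sparse_depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            d = patch[patch > 0]                   # only projected lidar points carry a depth value
            if d.size < 2:
                continue
            t = lam * np.std(d) / np.mean(d)       # assumed form of the edge intensity T
            mask[y, x] = t > tau
    return mask
```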
3. According to the edge intensity information, classify each pixel in the image as non-edge or edge, perform the corresponding point cloud upsampling, and densify the sparse point cloud to obtain a dense point cloud image.
3.1 If pixel q lies in a non-edge region, the weighted estimate is computed directly with a spatial Gaussian kernel over the neighborhood N(q), avoiding non-smooth point cloud reconstruction caused by large color differences:
[non-edge weighting formula, shown as an image in the original]
3.2 If q lies in an edge region, processing follows the idea of joint bilateral filtering upsampling so that the recovered edges are not over-blurred. First, an initial weight g(p) is assigned to each point using the similarity of color and spatial position, where s balances the spatial and color differences and I denotes the pixel value of the RGB image:
[initial-weight formula, shown as an image in the original]
On this basis, the spatial distribution of the point cloud within the local window is taken into account: the points are divided into foreground and background points according to depth information, denoted F and B respectively, where foreground points have depth smaller than the mean depth and background points have depth greater than or equal to the mean depth. Let c denote the class (F or B) of a neighborhood point, let m and n denote the number of points and the sum of weights of each class, and let t_q denote the edge intensity of the current pixel; a weight adjustment factor is computed for each point by class:
m_c = |c|,
[weight-sum and adjustment-factor formulas, shown as images in the original]
The spatial position information is then computed as
[weighted-estimation formula, shown as an image in the original]
In this step, the quantity being estimated is the spatial position information of the pixel to be computed, d_p denotes a known spatial point in the neighborhood, K is a normalization factor, and σ_r and σ_I are the standard deviations of the spatial domain and the color domain, respectively.
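The following sketch illustrates steps 3.1 and 3.2 for a single pixel. Because the weighting formulas are reproduced only as images, standard spatial and color Gaussian kernels are assumed, and the foreground/background weight adjustment is written in the spirit of the description rather than as the exact patented formula; all names and parameter values are hypothetical.

```python
# Sketch of the adaptive upsampling step for one pixel q. The Gaussian-kernel forms and the
# class-based re-weighting below are assumptions made in the spirit of steps 3.1 and 3.2.
import numpy as np


def upsample_pixel(q, neighbors, rgb, is_edge, edge_strength, sigma_s=3.0, sigma_i=10.0):
    """Estimate the depth at pixel q from known lidar points inside its local window.

    q         : (row, col) of the pixel to fill
    neighbors : list of ((row, col), depth) for projected lidar points inside the window
    rgb       : H x W x 3 image, used for the color kernel at edge pixels
    """
    pos = np.array([p for p, _ in neighbors], dtype=float)
    depth = np.array([d for _, d in neighbors], dtype=float)
    w = np.exp(-np.sum((pos - np.array(q, dtype=float)) ** 2, axis=1) / (2 * sigma_s ** 2))

    if is_edge:
        rows, cols = zip(*[p for p, _ in neighbors])
        color_diff = rgb[list(rows), list(cols)].astype(float) - rgb[q].astype(float)
        w_color = np.exp(-np.sum(color_diff ** 2, axis=1) / (2 * sigma_i ** 2))
        w = w * w_color                             # initial joint-bilateral weights g(p)

        fg = depth < depth.mean()                   # foreground: closer than the mean window depth
        for cls in (fg, ~fg):                       # adjust weights per class (F and B)
            if cls.any():
                # assumed adjustment: scale by the class's average weight, modulated by edge strength
                w[cls] *= (w[cls].sum() / cls.sum()) ** edge_strength

    return float(np.sum(w * depth) / np.sum(w))     # normalized weighted depth estimate
```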
4. Take the RGB image and the dense point cloud image obtained in step 3 as two three-channel inputs and construct the multi-modal data fusion travelable-area detection network (i.e. the multi-layer convolutional network). As shown in fig. 5, the network adopts a dual-encoder, single-decoder structure (the two encoders have the same structure but do not share parameters). The RGB image and the dense point cloud image are the original inputs of the two encoders; at each layer, the two feature maps of the same level are cross-fused by 1 × 1 convolution, and the fused result is fed to the next convolutional layer. The encoder output is passed to a pyramid pooling module to obtain the final feature map; the decoder restores the resolution of the pyramid pooling output, a Sigmoid function computes the probability that each pixel belongs to the travelable area, and a pixel is judged to belong to the travelable area when this probability exceeds a set threshold. The network combines dilated (atrous) convolution with the pyramid pooling module, enlarging the receptive field quickly and aggregating multi-scale context information.
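A schematic PyTorch sketch of the dual-encoder, single-decoder layout described in step 4 is given below. The number of stages, channel widths, dilation rates and pyramid pooling bin sizes are illustrative assumptions, not the exact configuration of fig. 5.

```python
# Schematic sketch of the dual-encoder / single-decoder fusion network. Layer counts, channel
# widths and pooling bin sizes are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossFusionStage(nn.Module):
    """One encoder stage per modality plus 1x1-convolution cross fusion of the two feature maps."""

    def __init__(self, c_in, c_out, dilation=1):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, stride=2, padding=dilation, dilation=dilation),
                nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
        self.rgb_branch, self.pc_branch = branch(), branch()   # same structure, unshared parameters
        self.fuse_rgb = nn.Conv2d(2 * c_out, c_out, 1)         # 1x1 cross fusion into the RGB stream
        self.fuse_pc = nn.Conv2d(2 * c_out, c_out, 1)          # 1x1 cross fusion into the point-cloud stream

    def forward(self, rgb, pc):
        r, p = self.rgb_branch(rgb), self.pc_branch(pc)
        cat = torch.cat([r, p], dim=1)
        return self.fuse_rgb(cat), self.fuse_pc(cat)


class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList([
            CrossFusionStage(3, 32), CrossFusionStage(32, 64),
            CrossFusionStage(64, 128, dilation=2)])             # dilated conv enlarges the receptive field
        self.ppm_pools = [1, 2, 4]                               # pyramid pooling bin sizes (assumed)
        self.ppm_convs = nn.ModuleList([nn.Conv2d(128, 32, 1) for _ in self.ppm_pools])
        self.decoder = nn.Sequential(
            nn.Conv2d(128 + 32 * len(self.ppm_pools), 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1))                                 # per-pixel road logit

    def forward(self, rgb, pc):
        r, p = rgb, pc
        for stage in self.stages:
            r, p = stage(r, p)
        feat = r + p
        pooled = [F.interpolate(conv(F.adaptive_avg_pool2d(feat, s)), size=feat.shape[2:],
                                mode='bilinear', align_corners=False)
                  for conv, s in zip(self.ppm_convs, self.ppm_pools)]
        out = self.decoder(torch.cat([feat] + pooled, dim=1))
        out = F.interpolate(out, size=rgb.shape[2:], mode='bilinear', align_corners=False)
        return torch.sigmoid(out)                                # probability of travelable area per pixel
```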
During supervised learning, the loss function is designed as follows; it emphasizes the detection results in hard-to-detect regions and non-road regions, improving detection accuracy and ensuring vehicle driving safety:
[loss-function formula, shown as an image in the original]
where y = 1 and y = 0 denote positive and negative samples respectively, a positive sample being a road region and a negative sample a non-road region; a hard-to-detect region is one that is difficult to detect, i.e. a positive sample whose detection result tends toward non-road, or a negative sample whose detection result tends toward road; y' is the detection probability, and α and γ are fixed constants, both set to 2.
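As a hedged illustration of this loss, the sketch below uses a focal-style form with α = γ = 2, placing the extra weight on the non-road term to reflect the emphasis on non-road regions; the exact expression in the patent is shown only as a formula image, so this form is an assumption.

```python
# Hedged sketch of a focal-style loss consistent with the description above; the exact patented
# expression is not reproduced here, and the placement of alpha is an assumption.
import torch


def travelable_area_loss(pred, target, alpha=2.0, gamma=2.0, eps=1e-7):
    """pred: per-pixel probability y' of being road; target: 1 for road pixels, 0 for non-road."""
    pred = pred.clamp(eps, 1.0 - eps)
    pos = (1.0 - pred) ** gamma * torch.log(pred)          # road pixels mispredicted as non-road weigh more
    neg = alpha * pred ** gamma * torch.log(1.0 - pred)    # non-road pixels mispredicted as road weigh even more
    return -(target * pos + (1.0 - target) * neg).mean()
```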
The decoder restores the resolution of the feature map, a Sigmoid layer computes the probability that each pixel belongs to the road, and a pixel is judged to belong to the travelable area when this probability exceeds a set threshold (e.g. 0.5).
Example 1
This embodiment compares the performance of the joint bilateral filtering upsampling algorithm (JBU) with the edge-intensity-based adaptive upsampling method of the invention. The sparse point cloud image is obtained by downsampling the ground-truth depth image by a factor of 5, and the upsampling results of the two methods are compared. Fig. 3(a) and (b) show the upsampling result of JBU and of the method of the invention, respectively. The method of the invention better prevents edge blurring while reducing the reconstruction error.
Example 2
This embodiment uses the KITTI dataset to compare the travelable-area detection performance of a network using only image data, a network using only point cloud data, and the multi-modal data fusion network. The detection results of the three networks are shown in fig. 4. It can be seen intuitively that the multi-modal fusion detection method further improves the accuracy of road detection, largely avoids false detections on vehicles, and improves the reliability of boundary detection.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (5)

1. A multi-mode data fusion travelable area detection method based on point cloud upsampling is characterized in that a camera and a laser radar are calibrated through a joint calibration algorithm, a point cloud is projected to an image plane to obtain a sparse point cloud picture, edge intensity information is calculated by utilizing a pixel local window, a point cloud upsampling scheme is selected in a self-adaptive mode, and a dense point cloud picture is obtained; and performing feature extraction and cross fusion on the obtained dense point cloud picture and the RGB image to realize rapid detection of the travelable area.
2. The method for detecting the multi-modal data fusion travelable region based on point cloud upsampling as claimed in claim 1, wherein the calculating of the edge intensity information by using the pixel local window is specifically as follows: for each pixel, calculating edge intensity information according to the following formula by using the point cloud distance in the pixel local window, and when the edge intensity information is greater than a specified threshold value tau, considering the pixel to be in an edge area, otherwise, considering the pixel to be in a non-edge area:
[edge-intensity formula, shown as an image in the original]
where σ denotes the standard-deviation operation, the averaged quantity (shown as a symbol image in the original) is the mean distance of the point-cloud points in the window, and λ is a fixed parameter.
3. The method for detecting the multi-modal data fusion travelable area based on point cloud upsampling as claimed in claim 2, wherein the adaptively selected point cloud upsampling scheme is specifically as follows: for the non-edge area pixels, directly utilizing a spatial Gaussian kernel to calculate a weighting result in a local window of the non-edge area pixels; for the edge area pixels, firstly, the weights of all points in a local window are calculated independently by jointly using space and color Gaussian kernels; secondly, dividing the point cloud into a foreground point and a background point according to the average depth of the point cloud in the window, counting the number and the weight sum of the two types of points in the local window, and adjusting the weight of each point; and finally, weighting each point in the local window by using the updated weight to complete the calculation of the spatial position information of the pixel to be calculated.
4. The method for detecting the multi-modal data fusion travelable area based on point cloud upsampling as claimed in claim 1, wherein the obtained dense point cloud image and the RGB image are subjected to feature extraction and cross fusion, specifically: simultaneously taking the RGB image and the dense point cloud image as input, utilizing a multilayer convolution network to carry out feature extraction and cross fusion, giving emphasis to detection results of a difficult detection area and a non-road area by a loss function, and outputting detection probability of a drivable area;
the loss function is as follows:
[loss-function formula, shown as an image in the original]
wherein y = 1 and y = 0 denote positive and negative samples respectively, a positive sample being a road region and a negative sample a non-road region; a hard-to-detect region is one that is difficult to detect: for a positive sample the detection result tends toward non-road, and for a negative sample it tends toward road; y' denotes the probability of being judged a road region, and α and γ are fixed constants.
5. The method for detecting the multi-modal data fusion travelable area based on point cloud upsampling as claimed in claim 4, wherein the multi-layer convolution network structure adopts a double-encoder and single-decoder structure, the two encoders have the same structure but do not share parameters, the RGB image and the dense point cloud image are respectively used as original input, for the two feature maps output by the same layer of encoder, 1 x 1 convolution is used for cross fusion, the fusion result is used as the input of the next layer of convolution, and the downsampled feature map obtained by the double encoders is input into the pyramid pooling module to obtain the final feature map output;
and recovering the resolution of the output result of the pyramid pooling module through a decoder, calculating the probability that each pixel belongs to the travelable area by using a Sigmoid function, and judging that the pixel belongs to the travelable area when the probability is greater than a set threshold.
CN202011501003.5A 2020-12-17 2020-12-17 Multi-mode data fusion travelable region detection method based on point cloud up-sampling Active CN112731436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011501003.5A CN112731436B (en) 2020-12-17 2020-12-17 Multi-mode data fusion travelable region detection method based on point cloud up-sampling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011501003.5A CN112731436B (en) 2020-12-17 2020-12-17 Multi-mode data fusion travelable region detection method based on point cloud up-sampling

Publications (2)

Publication Number Publication Date
CN112731436A true CN112731436A (en) 2021-04-30
CN112731436B CN112731436B (en) 2024-03-19

Family

ID=75603282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011501003.5A Active CN112731436B (en) 2020-12-17 2020-12-17 Multi-mode data fusion travelable region detection method based on point cloud up-sampling

Country Status (1)

Country Link
CN (1) CN112731436B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569803A (en) * 2021-08-12 2021-10-29 中国矿业大学(北京) Multi-mode data fusion lane target detection method and system based on multi-scale convolution
CN113945947A (en) * 2021-10-08 2022-01-18 南京理工大学 Method for detecting passable area of multi-line laser radar point cloud data
CN114677315A (en) * 2022-04-11 2022-06-28 探维科技(北京)有限公司 Image fusion method, device, equipment and medium based on image and laser point cloud
CN116343159A (en) * 2023-05-24 2023-06-27 之江实验室 Unstructured scene passable region detection method, device and storage medium
CN116416586A (en) * 2022-12-19 2023-07-11 香港中文大学(深圳) Map element sensing method, terminal and storage medium based on RGB point cloud

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050013486A1 (en) * 2003-07-18 2005-01-20 Lockheed Martin Corporation Method and apparatus for automatic object identification
JP2012248004A (en) * 2011-05-27 2012-12-13 Toshiba Corp Image processing system, image recognition device and method
WO2015010451A1 (en) * 2013-07-22 2015-01-29 浙江大学 Method for road detection from one image
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
CN109405824A (en) * 2018-09-05 2019-03-01 武汉契友科技股份有限公司 A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile
CN109543600A (en) * 2018-11-21 2019-03-29 成都信息工程大学 A kind of realization drivable region detection method and system and application
CN110378196A (en) * 2019-05-29 2019-10-25 电子科技大学 A kind of road vision detection method of combination laser point cloud data
CN110320504A (en) * 2019-07-29 2019-10-11 浙江大学 A kind of unstructured road detection method based on laser radar point cloud statistics geometrical model
CN110705342A (en) * 2019-08-20 2020-01-17 上海阅面网络科技有限公司 Lane line segmentation detection method and device
CN110827397A (en) * 2019-11-01 2020-02-21 浙江大学 Texture fusion method for real-time three-dimensional reconstruction of RGB-D camera
CN111274976A (en) * 2020-01-22 2020-06-12 清华大学 Lane detection method and system based on multi-level fusion of vision and laser radar
CN112069870A (en) * 2020-07-14 2020-12-11 广州杰赛科技股份有限公司 Image processing method and device suitable for vehicle identification
CN111986164A (en) * 2020-07-31 2020-11-24 河海大学 Road crack detection method based on multi-source Unet + Attention network migration

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
J. Xiao: "A least-square-based approach to improve the accuracy of laser ranging", 2014 International Conference on Multisensor Fusion and Information Integration for Intelligent Systems (MFI), Beijing, China, 2014 *
Luca Caltagirone: "LIDAR-camera fusion for road detection using fully convolutional neural networks", Robotics and Autonomous Systems *
Zhenzhen Gao: "Feature enhancing aerial lidar point cloud refinement", Proceedings of SPIE *
Tang Guowei, Wang Dong, Liu Xiande, Li Yongshu, He Mingge: "Road image boundary extraction method based on statistical testing" (in Chinese), Journal of Daqing Petroleum Institute, no. 03
Song Tingqiang; Li Jixu; Zhang Xinye: "Building recognition in high-resolution remote sensing images based on deep learning" (in Chinese), Computer Engineering and Applications, no. 08, 31 August 2020 (2020-08-31) *
Kang Guohua; Zhang Qi; Zhang Han; Xu Weizheng; Zhang Wenhao: "Research on joint calibration of lidar and camera based on point cloud centers" (in Chinese), Chinese Journal of Scientific Instrument, no. 12
Yang Fei: "A road segmentation model based on fused hierarchical conditional random fields" (in Chinese), Robot
Hu Yuanzhi; Liu Junsheng; He Jia; Xiao Hang; Song Jia: "Vehicle target detection method based on the fusion of lidar point clouds and images" (in Chinese), Journal of Automotive Safety and Energy, no. 04
Shao Shuai: "Research on road surface recognition for service robots" (in Chinese), Industrial Control Computer
Xiang Zhiyu; Wang Wei: "Obstacle detection in grass by fusing range and color information" (in Chinese), Opto-Electronic Engineering, no. 03

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569803A (en) * 2021-08-12 2021-10-29 中国矿业大学(北京) Multi-mode data fusion lane target detection method and system based on multi-scale convolution
CN113945947A (en) * 2021-10-08 2022-01-18 南京理工大学 Method for detecting passable area of multi-line laser radar point cloud data
CN114677315A (en) * 2022-04-11 2022-06-28 探维科技(北京)有限公司 Image fusion method, device, equipment and medium based on image and laser point cloud
WO2023197351A1 (en) * 2022-04-11 2023-10-19 探维科技(北京)有限公司 Image fusion method and apparatus based on image and laser point cloud, device, and medium
US11954835B2 (en) 2022-04-11 2024-04-09 Tanway Technology (beijing) Co., Ltd. Methods, devices, apparatuses, and media for image fusion utilizing images and LiDAR point clouds
CN116416586A (en) * 2022-12-19 2023-07-11 香港中文大学(深圳) Map element sensing method, terminal and storage medium based on RGB point cloud
CN116416586B (en) * 2022-12-19 2024-04-02 香港中文大学(深圳) Map element sensing method, terminal and storage medium based on RGB point cloud
CN116343159A (en) * 2023-05-24 2023-06-27 之江实验室 Unstructured scene passable region detection method, device and storage medium

Also Published As

Publication number Publication date
CN112731436B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN112731436B (en) Multi-mode data fusion travelable region detection method based on point cloud up-sampling
CN111274976B (en) Lane detection method and system based on multi-level fusion of vision and laser radar
CN111563415B (en) Binocular vision-based three-dimensional target detection system and method
CN107274445B (en) Image depth estimation method and system
CN112505684B (en) Multi-target tracking method for radar vision fusion under side view angle of severe environment road
CN108416292B (en) Unmanned aerial vehicle aerial image road extraction method based on deep learning
WO2023273375A1 (en) Lane line detection method combined with image enhancement and deep convolutional neural network
CN110807384A (en) Small target detection method and system under low visibility
CN113095277B (en) Unmanned aerial vehicle aerial photography vehicle detection method based on target space distribution characteristics
US20230394829A1 (en) Methods, systems, and computer-readable storage mediums for detecting a state of a signal light
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
CN111738071B (en) Inverse perspective transformation method based on motion change of monocular camera
CN116612468A (en) Three-dimensional target detection method based on multi-mode fusion and depth attention mechanism
CN116486368A (en) Multi-mode fusion three-dimensional target robust detection method based on automatic driving scene
CN112613392A (en) Lane line detection method, device and system based on semantic segmentation and storage medium
CN117095368A (en) Traffic small target detection method based on YOLOV5 fusion multi-target feature enhanced network and attention mechanism
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN115880658A (en) Automobile lane departure early warning method and system under night scene
CN110910497B (en) Method and system for realizing augmented reality map
CN117197438A (en) Target detection method based on visual saliency
CN116403186A (en) Automatic driving three-dimensional target detection method based on FPN Swin Transformer and Pointernet++
CN115965531A (en) Model training method, image generation method, device, equipment and storage medium
WO2022193132A1 (en) Image detection method and apparatus, and electronic device
CN114612999A (en) Target behavior classification method, storage medium and terminal
CN114372944B (en) Multi-mode and multi-scale fused candidate region generation method and related device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant