CN107167811B - Road drivable-area detection method based on the fusion of monocular vision and laser radar - Google Patents

Road drivable-area detection method based on the fusion of monocular vision and laser radar

Info

Publication number
CN107167811B
CN107167811B (application CN201710283453.3A)
Authority
CN
China
Prior art keywords
pixel
point
super
road
feature
Prior art date
Legal status
Active
Application number
CN201710283453.3A
Other languages
Chinese (zh)
Other versions
CN107167811A (en)
Inventor
郑南宁
余思雨
刘子熠
Current Assignee
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN201710283453.3A
Publication of CN107167811A
Application granted
Publication of CN107167811B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a road drivable-area detection method based on the fusion of monocular vision and laser radar, belonging to the field of intelligent transportation. Existing road-detection methods for unmanned vehicles are mainly based on monocular vision, stereo vision, laser sensors, or multi-sensor fusion, and suffer from drawbacks such as poor robustness to illumination, complex stereo matching, sparse laser data, and the low efficiency of global fusion. Some supervised methods achieve good precision, but their training process is complex and they generalize poorly. The proposed method fuses super-pixels with point-cloud data so that the machine learns the road area from the features by itself, and the road information obtained from each feature is fused within a Bayesian framework to determine the final area. The method requires neither strong prior assumptions nor a complex training process; it offers superior generalization and robustness, is extremely fast and highly accurate, and is therefore easy to deploy in practical applications.

Description

Road drivable-area detection method based on the fusion of monocular vision and laser radar
Technical field
The invention belongs to the field of intelligent transportation and relates to a road drivable-area detection method based on the fusion of monocular vision and laser radar.
Background art
In recent years, road detection has been an important topic in unmanned-driving research. The road-detection methods in wide use today include monocular-vision methods, stereo-vision methods, laser-radar methods, and fusion-based methods. Monocular-vision methods consider only the visual information of the scene and are easily affected by illumination and weather conditions; stereo-vision methods spend enormous time on three-dimensional reconstruction and are unsuitable for practical use; laser-radar methods suffer from the sparsity of point-cloud data. Road-detection methods that fuse pixel information with depth information make full use of the texture and color information about the scene provided by the camera, while the depth information of the laser radar compensates for the lack of environmental robustness of visual information; they also overcome the low efficiency of non-fused methods, which makes real-time operation and practical deployment difficult. Fusion-based road detection has therefore rapidly become the first choice for road detection in unmanned vehicles. It is an optimized road-detection approach developed on the basis of monocular vision, laser-radar methods, and sensor fusion, and is widely applied in practical engineering, especially in unmanned-vehicle driving.
Road detection for unmanned vehicles can also be divided into supervised and unsupervised methods. Because of the diversity of road-surface information, the complexity of scene information, and the variability of illumination and weather conditions, unmanned vehicles place very high demands on the robustness and generalization of road-detection methods; unsupervised road detection is therefore an important topic in unmanned-driving research. On the one hand, unsupervised road detection needs neither large amounts of labeled data nor a time-consuming training process; it learns the road information automatically from the extracted features and offers high generalization ability. On the other hand, real-world traffic scenes are complex and changeable, and it is impossible to provide training samples for every scene an unmanned vehicle may encounter. Supervised methods run a large risk when they meet driving scenes that differ greatly from the training samples, whereas unsupervised road detection is robust to nearly all scenes and is well suited to practical unmanned driving.
Summary of the invention
The purpose of the present invention is to provide a road drivable-area detection method based on the fusion of monocular vision and laser radar.
In order to achieve the above objectives, the invention adopts the following technical scheme.
First, super-pixels are fused with the laser point-cloud data: the point cloud is projected, according to the laser calibration parameters, onto the picture after super-pixel segmentation. The super-pixel method makes full use of the texture features of the scene, greatly narrows the range within which the road area must be located, and greatly improves the efficiency of the algorithm. Second, the spatial relationships of the points are found by triangulation; an undirected graph is built from the resulting spatial triangles, the normal vector of every point is computed, and obstacle points are classified according to the undirected graph. Then a new feature (ray), based on the method of minimum filtering, is defined to find the initial candidate region of the road area, further narrowing the detection range and significantly improving efficiency. Another new feature (level) quantifies the drivable degree of each point from the viewpoint of depth information, making efficient use of depth. In addition, the fusion method uses an unsupervised fusion method, namely a self-learning Bayesian framework, to fuse the probabilistic information of the candidate road area learned from each feature (the color feature, level feature, normal feature, and strength feature); this is efficient and highly robust.
The fusion of super-pixels with the laser point-cloud data comprises the following specific steps:
The picture acquired by the camera is segmented into N super-pixels using an existing edge-aware variant of simple linear iterative clustering. Each super-pixel p_c = (x_c, y_c, z_c, 1)^T contains several pixels, where x_c, y_c, z_c denote the average position, in the camera coordinate system, of all pixels in the super-pixel; the RGB of these pixels is likewise unified to the average RGB of all pixels in the super-pixel. Existing calibration techniques are then used to project every point p_l = (x_l, y_l, z_l, 1)^T obtained by the laser radar onto the picture after super-pixel segmentation, yielding the point set P = {P_i}, where P_i = (x_i, y_i, z_i, u_i, v_i); here x_i, y_i, z_i denote the position of the point in the laser coordinate system and (u_i, v_i) its position in the corresponding camera coordinate system. Finally, a concurrent constraint is applied: only the laser points projected near super-pixel edges are retained.
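As an illustrative sketch of the projection step (not the patent's code): the rotation R, translation T, and intrinsic matrix K below stand in for the calibration described here, assuming a generic pinhole camera, and the function name is hypothetical.

```python
import numpy as np

def project_lidar_to_image(points_xyz, R, T, K, img_w, img_h):
    """Return the fused point set with rows (x, y, z, u, v).

    points_xyz: Nx3 LiDAR points; R: 3x3 rotation; T: 3x1 translation
    (laser frame -> camera frame); K: 3x3 camera intrinsics.
    """
    cam = R @ points_xyz.T + T            # laser frame -> camera frame, 3xN
    in_front = cam[2] > 0.0               # keep points with positive depth
    uvw = K @ cam[:, in_front]            # pinhole projection
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    ok = (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    return np.hstack([points_xyz[in_front][ok],
                      np.stack([u[ok], v[ok]], axis=1)])
```

The concurrent constraint would then be enforced by discarding rows whose (u, v) falls farther than a few pixels from a super-pixel boundary.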
The new feature (ray), defined by the method based on minimum filtering, finds the initial candidate region of the road area; the specific steps are as follows:
First, the initial candidate region of the road area is defined as R_alt, where S_i denotes the set of all pixels contained in super-pixel S_i. I_DRM is defined as the "direction ray map", and the (u_i, v_i) coordinates of the point set P_i = (x_i, y_i, z_i, u_i, v_i) are transformed into polar coordinates whose origin is the center point (P_base) of the bottom row of the picture. Then P^(h) = {P_i^(h)} is a proper subset of the (u_i, v_i) of the point set, where P_i^(h) indicates that the i-th point belongs to angle h, and O^(h) denotes the set of obstacle points in P^(h); the computation is described below and in the flow chart.
Second, to solve the problem of laser-ray leakage, the I_DRM obtained above is processed with the minimum-filtering method to obtain the expected I_DRM, finally yielding R_alt.
The new feature (level) is defined; the specific steps are as follows:
The level feature of every point P_i^(h) is defined as L(P_i^(h)); the algorithm is given in the flow chart, and, combined with the super-pixels, the level feature L(S_i) of super-pixel S_i is obtained:
The fusion using the self-learning Bayesian framework comprises the following specific steps:
First, combined with the initial candidate region R_alt, the probabilities of the candidate region are learned without supervision from each of the four features.
For a super-pixel S_i in the initial candidate region R_alt, the RGB values of every pixel P_i = (x_i, y_i, z_i, u_i, v_i) contained in S_i have been unified, and the color feature is self-learned with Gaussian parameters μ_c and σ_c², the illumination-invariant angle being taken as θ = 45°. The level feature L(S_i) of super-pixel S_i is self-learned with Gaussian parameters μ_l and σ_l². The normal feature N(S_i) of super-pixel S_i is self-learned with Gaussian parameters μ_n and σ_n². With Sg(S_i) defined as the number of rays crossing super-pixel S_i, the strength feature Sg(S_i) of super-pixel S_i is self-learned.
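A plausible form of this Gaussian self-learning, consistent with the parameters named above (a reconstruction rather than the patent's verbatim formulas; μ_k and σ_k² are fitted on the initial candidate region R_alt), is:

$$p(\mathrm{Obs}_k \mid S_i = R) = \exp\!\left(-\frac{(f_k(S_i) - \mu_k)^2}{2\sigma_k^2}\right), \quad k \in \{c, l, n, s\},$$

where f_c, f_l, f_n, f_s denote the unified color, the level feature L(S_i), the normal feature N(S_i), and the strength feature Sg(S_i), respectively.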
Finally, a Bayesian framework is established to fuse the four features; the formula is as follows:
where p(S_i = R | Obs) denotes the probability that super-pixel S_i belongs to the road area, and Obs denotes the observation based on these four features.
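Under the usual naive-Bayes independence assumption across the four observations (an assumption; the patent's own formula is not reproduced here), the fusion would take the form:

$$p(S_i = R \mid \mathrm{Obs}) \;\propto\; p(S_i = R) \prod_{k \in \{c,\,l,\,n,\,s\}} p(\mathrm{Obs}_k \mid S_i = R),$$

with the prior p(S_i = R) supplied by membership in the initial candidate region R_alt.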
The beneficial effects of the present invention are embodied in:
First, traditional fusion methods use global fusion, which greatly limits their practicality and computational efficiency; the present invention fuses super-pixels with the laser point-cloud data, which greatly reduces the candidate range of the road area and significantly improves efficiency. Second, the new feature (ray) proposed by the invention finds the initial candidate region of the road area, further narrowing the detection range and again significantly improving efficiency. Third, the proposed level feature quantifies the drivable degree of each point from the viewpoint of depth information, overcoming the sparsity of the depth data, using it effectively, and contributing greatly to accuracy. Fourth, the proposed strength feature quantifies the fusion relationship between super-pixels and depth information, fully accounting for the near-large/far-small property of visual information and again contributing greatly to accuracy. Fifth, the self-learning Bayesian framework fuses the probabilistic information of the candidate road area learned from each feature; it is efficient and highly robust. The algorithm therefore has important research significance and broad engineering application value.
Brief description of the drawings
Fig. 1 is the functional block diagram of the road drivable-area detection method based on the fusion of monocular vision and laser radar;
Fig. 2 is the flow chart of the algorithm that obtains the ray feature;
Fig. 3 shows the initial candidate region without minimum filtering of ray leakage (bottom) and with it (top);
Fig. 4 is the flow chart of the algorithm that obtains the level feature;
Fig. 5 is the candidate road-area probability distribution obtained by self-learning the color feature;
Fig. 6 is the candidate road-area probability distribution obtained by self-learning the level feature;
Fig. 7 is the candidate road-area probability distribution obtained by self-learning the normal feature;
Fig. 8 is the candidate road-area probability distribution obtained by self-learning the strength feature;
Fig. 9 is the probability distribution of the final area obtained by the self-learning Bayesian fusion.
Specific embodiment
Referring to Fig. 1, the picture acquired by the camera is segmented into N super-pixels using an existing edge-aware variant of simple linear iterative clustering. Each super-pixel p_c = (x_c, y_c, z_c, 1)^T contains several pixels, where x_c, y_c, z_c denote the average position, in the camera coordinate system, of all pixels in the super-pixel, and the RGB of these pixels is unified to the average RGB of all pixels in the super-pixel. Existing calibration techniques then provide the rotation matrix R and the translation matrix T, from which the transformation matrix is obtained according to formula (1):
Using the rotation matrix R and the translation matrix T, the transformation relationship between the two coordinate systems is established, as in formula (2):
Each point p_l = (x_l, y_l, z_l, 1)^T obtained by the laser radar is projected onto the picture after super-pixel segmentation, as in formula (3):
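For a standard calibrated camera-laser pair, formulas (1)-(3) would take the following form, where K denotes the camera intrinsic matrix and s a projective scale (assumed names; a reconstruction rather than the patent's verbatim equations):

$$\tilde{T} = \begin{bmatrix} R & T \\ \mathbf{0}^{\mathsf T} & 1 \end{bmatrix}, \qquad p_c = \tilde{T}\,p_l, \qquad s\begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = K \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}.$$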
This yields the point set P = {P_i}, where P_i = (x_i, y_i, z_i, u_i, v_i); x_i, y_i, z_i denote the position of the point in the laser coordinate system and (u_i, v_i) its position in the corresponding camera coordinate system. Finally, only the laser points near super-pixel edges are retained.
Obstacle points are classified using the fused data, giving the mapping ob(P_i): ob(P_i) = 1 indicates that P_i is an obstacle point, and 0 otherwise. Delaunay triangulation is applied to the (u_i, v_i) coordinates of the points P_i, producing numerous spatial triangles and an undirected graph G = (P, E), where E is the set of edges between related nodes P_i. Edges (P_i, P_j) whose Euclidean distance in the (u_i, v_i) coordinate system fails formula (4) are rejected:
||P_i − P_j|| < ε    (4)
The set of points connected to P_i is defined as Nb(P_i), so the surface of the related spatial triangles is {(u_j, v_j) | j = i or P_j ∈ Nb(P_i)}. The normal vector of every spatial triangle is computed; clearly, the flatter and closer to the ground the spatial triangles around P_i are, the more likely P_i is a non-obstacle point. The average of the normal vectors of the spatial triangles around P_i is taken as the normal vector n(P_i) of P_i. Formula (5) gives the judgment for ob(P_i):
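A minimal sketch of this triangulation-and-normals step using SciPy; the upward-normal flatness test and the thresholds normal_z_min and eps are illustrative stand-ins for formulas (4) and (5), which are not reproduced here.

```python
import numpy as np
from scipy.spatial import Delaunay

def classify_obstacles(points, normal_z_min=0.85, eps=30.0):
    """points: Nx5 fused rows (x, y, z, u, v). Returns (ob, normals):
    ob[i] = 1 flags point i as an obstacle, per-point averaged normals."""
    uv, xyz = points[:, 3:5], points[:, 0:3]
    tri = Delaunay(uv)                       # triangulate in image space
    normal_sum = np.zeros((len(points), 3))
    for simplex in tri.simplices:
        e = uv[simplex]
        # reject triangles with an over-long image-space edge (formula (4))
        if max(np.linalg.norm(e[0] - e[1]), np.linalg.norm(e[1] - e[2]),
               np.linalg.norm(e[0] - e[2])) >= eps:
            continue
        a, b, c = xyz[simplex]
        n = np.cross(b - a, c - a)           # 3D triangle normal
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue
        n /= norm
        if n[2] < 0:                         # orient normals upward
            n = -n
        normal_sum[simplex] += n             # accumulate per vertex
    lengths = np.linalg.norm(normal_sum, axis=1, keepdims=True)
    normals = normal_sum / np.maximum(lengths, 1e-9)
    ob = (normals[:, 2] < normal_z_min).astype(int)  # tilted -> obstacle
    return ob, normals
```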
The new feature (ray), based on the method of minimum filtering, finds the initial candidate region R_alt of the road area. First, the "direction ray map" I_DRM is obtained according to the algorithm flow chart shown in Fig. 2, where O^(h) denotes the set of obstacle points in P^(h) computed by the obstacle-classification method of the previous step, and P_i^(h) indicates that P_i belongs to angle h. The algorithm transforms the (u_i, v_i) coordinates of P_i = (x_i, y_i, z_i, u_i, v_i) into polar coordinates whose origin is the center point (P_base) of the bottom row of the picture; then P^(h) = {P_i^(h)} is a proper subset of the (u_i, v_i) of the point set. Second, as shown in Fig. 3, the sparsity of the laser data makes it necessary to handle ray leakage; this method innovatively processes I_DRM with a minimum filter to obtain the expected I_DRM. Combined with the super-pixel segmentation, the initial candidate region of the road area is defined as the set R_alt of super-pixels selected by the direction ray map, where S_i denotes the set of all pixels contained in super-pixel S_i; the final super-pixel merge yields R_alt.
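The ray-feature construction can be sketched as follows, assuming H angular bins around P_base, one "drivable radius" per bin terminated at the nearest obstacle point, and SciPy's one-dimensional minimum filter in place of the patent's minimum filtering; the bin count and filter width are illustrative.

```python
import numpy as np
from scipy.ndimage import minimum_filter1d

def direction_ray_map(points_uv, ob, p_base, H=180, filter_size=5):
    """Drivable radius per angular bin ('direction ray map').

    points_uv: Nx2 (u, v) pixel coordinates of fused laser points;
    ob: length-N 0/1 obstacle flags; p_base: (u, v) of the center of
    the image's bottom row. Returns length-H filtered radii.
    """
    d = points_uv - np.asarray(p_base, dtype=float)
    r = np.hypot(d[:, 0], d[:, 1])             # polar radius
    theta = np.arctan2(-d[:, 1], d[:, 0])      # image v grows downward
    bins = np.clip(((theta / np.pi) * H).astype(int), 0, H - 1)
    radius = np.full(H, np.inf)                # no obstacle -> unbounded ray
    for b, rr, o in zip(bins, r, ob):
        if o == 1 and rr < radius[b]:
            radius[b] = rr                     # nearest obstacle per ray
    # minimum filtering across neighboring rays suppresses leakage from
    # angular bins that contain no laser returns
    return minimum_filter1d(radius, size=filter_size, mode='nearest')
```

A super-pixel would then be admitted to R_alt when it lies inside the region swept out by these filtered rays.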
The level feature is defined. Fig. 4 gives the algorithm for computing the level feature L(P_i^(h)) of every point P_i^(h) belonging to angle h, expressed by formula (6); combined with the super-pixels, the level feature L(S_i) of super-pixel S_i is obtained:
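A sketch of the level computation of Fig. 4, following the step-by-step form given in claim 1; that points within each angular bin are ordered by increasing polar radius, and that the super-pixel feature averages the per-point values, are assumptions.

```python
import numpy as np

def level_features(z_by_ray, ob_by_ray):
    """z_by_ray[h]: heights of the points of angular bin h, ordered by
    increasing polar radius; ob_by_ray[h]: matching 0/1 obstacle flags.
    Returns per-point level features L(P_i^(h)) for each bin."""
    levels = []
    for z, ob in zip(z_by_ray, ob_by_ray):
        L = np.zeros(len(z))
        for i in range(1, len(z)):
            if ob[i] == 1:
                # accumulate the height step from the previous point
                L[i] = L[i - 1] + abs(z[i] - z[i - 1])
            else:
                L[i] = L[i - 1]
        levels.append(L)
    return levels

def superpixel_level(levels_flat, sp_index, n_superpixels):
    """Average the per-point level feature over each super-pixel."""
    sums = np.zeros(n_superpixels)
    counts = np.zeros(n_superpixels)
    np.add.at(sums, sp_index, levels_flat)
    np.add.at(counts, sp_index, 1.0)
    return sums / np.maximum(counts, 1.0)
```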
As in Fig. 5, the drivable-degree probability of the candidate road area is obtained from the color feature. For a super-pixel S_i in the initial candidate region R_alt, the RGB values of every pixel P_i = (x_i, y_i, z_i, u_i, v_i) contained in S_i have been unified. Since RGB color space is not robust to illumination and weather conditions, a color-space conversion is applied to transform the original image I in RGB space into the image I_log in an illumination-invariant color space, as in formula (7):
where I_log(u, v) is the pixel value of I_log at coordinates (u, v), I_R, I_G, I_B denote the RGB values of I, and θ denotes the invariant angle orthogonal to the direction of illumination variation. Formula (8) then self-learns the color feature with Gaussian parameters μ_c and σ_c² to obtain the drivable-degree probability of the candidate road area.
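Formula (7) is consistent with the common log-chromaticity illuminant-invariant form below (an assumed reconstruction matching the quantities I_R, I_G, I_B, and θ named here, with θ = 45° as given in the summary):

$$I_{\log}(u, v) = \cos\theta \cdot \log\frac{I_R(u, v)}{I_G(u, v)} + \sin\theta \cdot \log\frac{I_B(u, v)}{I_G(u, v)}.$$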
As in Fig. 6, the drivable-degree probability of the candidate road area is obtained from the level feature. Formula (9) self-learns the level feature L(S_i) of super-pixel S_i with Gaussian parameters μ_l and σ_l²:
As in Fig. 7, the drivable-degree probability of the candidate road area is obtained from the normal feature. The normal feature N(S_i) of super-pixel S_i in R_alt is computed as the minimum height coordinate of the normal vectors of the points in S_i, as in formula (10):
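Read this way, formula (10) would be the following (a reconstruction from the wording above, with n_z(P) denoting the height component of the normal vector of point P):

$$N(S_i) = \min_{P \in S_i} n_z(P).$$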
Formula (11) self-learns the normal feature N(S_i) of super-pixel S_i with Gaussian parameters μ_n and σ_n²:
As in Fig. 8, the drivable-degree probability of the candidate road area is obtained from the strength feature. Sg(S_i) is the number of rays crossing super-pixel S_i; the strength feature Sg(S_i) of super-pixel S_i is self-learned as in formula (12):
Finally, the Bayesian framework is established to fuse the four features; the self-learning Bayesian fusion yields the final-area probability distribution shown in Fig. 9, as in formula (13):
where p(S_i = R | Obs) denotes the probability that super-pixel S_i belongs to the road area, and Obs denotes the observation based on these four features; as can be seen from Fig. 9, the method completes the road-detection task well.
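Putting the self-learning and fusion together, one per-super-pixel sketch could read as follows; the Gaussian likelihoods fitted on R_alt, the two-level prior, and all names are assumptions standing in for formulas (8)-(13).

```python
import numpy as np

def gaussian_likelihood(values, candidate_mask):
    """Fit mu, sigma^2 on the candidate region, score every super-pixel."""
    mu = values[candidate_mask].mean()
    var = values[candidate_mask].var() + 1e-9
    return np.exp(-(values - mu) ** 2 / (2.0 * var))

def fuse_road_probability(color, level, normal, strength, candidate_mask):
    """color/level/normal/strength: per-super-pixel feature arrays;
    candidate_mask: boolean mask of the initial region R_alt.
    Returns p(S_i = R | Obs) under a naive-Bayes combination."""
    likelihood = np.ones(len(color))
    for feat in (color, level, normal, strength):
        likelihood *= gaussian_likelihood(feat, candidate_mask)
    prior = np.where(candidate_mask, 0.9, 0.1)   # illustrative prior
    post = prior * likelihood
    return post / post.max()                      # normalize for display
```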
To demonstrate the advantages of this method, it was tested on three datasets with different environments from the ROAD-KITTI benchmark: marked urban (Urban Marked, UM), multiply marked urban (Urban Multiple Marked, UMM), and unmarked urban (Urban Unmarked, UU). Six indices were analyzed: maximum F-measure (Max F-measure, MaxF), average precision (Average Precision, AP), precision (Precision, PRE), recall (Recall, REC), false positive rate (False Positive Rate, FPR), and false negative rate (False Negative Rate, FNR). The method was compared with MixedCRF, the method that has so far achieved the best results on the ROAD-KITTI benchmark, and with the laser-based fusion method RES3D-Velo; the comparison results are shown in Tables 1-4.
Table 1 compares this method (Ours Test) with MixedCRF and RES3D-Velo on the UM dataset.
Table 2 compares this method (Ours Test) with MixedCRF and RES3D-Velo on the UMM dataset.
Table 3 compares this method (Ours Test) with MixedCRF and RES3D-Velo on the UU dataset.
Table 4 compares the averages of the results of this method (Ours Test), MixedCRF, and RES3D-Velo on the URBAN dataset (UM, UMM, and UU considered as a whole).
MixedCRF is a method that requires training; this method obtains comparable precision without any training and achieves the highest precision on the AP index, illustrating its superiority.
To show the superiority of the self-learning Bayesian-framework fusion used by this method, the same three ROAD-KITTI datasets (UM, UMM, and UU) and the same six indices (MaxF, AP, PRE, REC, FPR, and FNR) were used to compare the precision of the single initial candidate region obtained from the ray feature (Initial), the color feature (Color), the strength feature (Strength), the level and normal features (Normal), and the Bayesian-framework fusion (Fusion); the comparison results are shown in Tables 5-8.
Table 5 compares the single initial candidate region from the ray feature (Initial), the color feature (Color), the strength feature (Strength), the level and normal features (Normal), and the Bayesian fusion (Fusion) on the UM training set (BEV).
Table 6 gives the same comparison on the UMM training set (BEV).
Table 7 gives the same comparison on the UU training set (BEV).
Table 8 gives the same comparison for the averages of the results over the UM, UMM, and UU datasets (URBAN training set, BEV).
Tables 4 and 8 show that the road drivable-area detection method based on the fusion of monocular vision and laser radar obtains the current highest AP, which is the most important index for evaluating detection methods, and also achieves good advantages on the other indices; the method is therefore suitable for practical application.
The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, but the specific embodiments of the invention must not be regarded as limited to them. For those of ordinary skill in the art to which the invention belongs, several simple deductions or substitutions may be made without departing from the inventive concept, and all of these shall be regarded as falling within the scope of patent protection determined by the submitted claims.

Claims (2)

1. A road drivable-area detection method based on the fusion of monocular vision and laser radar, characterized in that:
First, super-pixels are fused with the laser point-cloud data: the point cloud is projected, according to the laser calibration parameters, onto the picture after super-pixel segmentation;
The fusion of super-pixels with the laser point-cloud data comprises the following specific steps:
The picture acquired by the camera is segmented into N super-pixels by linear iterative clustering. Each super-pixel p_c = (x_c, y_c, z_c, 1)^T contains several pixels, where x_c, y_c, z_c denote the average position, in the camera coordinate system, of all pixels in the super-pixel; the RGB of these pixels is unified to the average RGB of all pixels in the super-pixel. Calibration techniques are then used to project every point p_l = (x_l, y_l, z_l, 1)^T obtained by the laser radar onto the picture after super-pixel segmentation. During the fusion of the super-pixels with the laser point-cloud data, a concurrent constraint is proposed; the concurrent constraint means that only the laser points projected near super-pixel edges are retained during the fusion. This finally yields the point set P = {P_i}, where P_i = (x_i, y_i, z_i, u_i, v_i); x_i, y_i, z_i denote the position of the point in the laser coordinate system and (u_i, v_i) its position in the corresponding camera coordinate system.
Secondly, the spatial relationships of the points are found by triangulation; an undirected graph is built from the resulting spatial triangles, the normal vector of every point is computed, and obstacle points are classified according to the undirected graph;
Then, the initial candidate region of the road area is found using the method based on minimum filtering, further narrowing the detection range of the road area. The drivable degree of each point is quantified from the viewpoint of depth information by defining the new feature, level. In addition, the fusion method uses an unsupervised fusion method, namely a self-learning Bayesian framework, to fuse the probabilistic information of the candidate road area learned from each feature, that is, the color feature, level feature, normal feature, and strength feature. The level feature expresses the drivable degree of the corresponding point, and its computation proceeds as follows:
1) For the i-th point in P^(h), initialize the level feature of all points of angle h: L(P_i^(h)) = 0, where P^(h) = {P_i^(h)} is a proper subset of the point set P and P_i^(h) indicates that the i-th point belongs to angle h;
2) For the i-th point in P^(h), define ob(P_i) as the flatness of the surface around point P_i: ob(P_i) = 1 indicates that the point is an obstacle point, and 0 otherwise. If ob(P_i^(h)) = 1, then, starting from the initialized L(P_i^(h)), keep updating L(P_i^(h)) by adding to its previous value the height difference of the two adjacent points P_i^(h) and P_{i-1}^(h), i.e. |z_i^(h) − z_{i-1}^(h)|;
3) If i ≤ N^(h), return to 2);
4) If h = H, terminate; otherwise, return to 1).
Combined with the super-pixels, the level feature L(S_i) of super-pixel S_i is then obtained:
where S_i is the set of all pixels contained in super-pixel S_i.
The fusion using the self-learning Bayesian framework comprises the following specific steps. Self-learning is carried out with four features: the color feature, level feature, normal feature, and strength feature. First, combined with the initial candidate region R_alt, the probability of the candidate region is learned without supervision from each of these four features;
For a super-pixel S_i in the initial candidate region R_alt, the RGB values of every pixel P_i = (x_i, y_i, z_i, u_i, v_i) contained in S_i have been unified, and the color feature is self-learned with Gaussian parameters μ_c and σ_c². The level feature L(S_i) of super-pixel S_i is self-learned with Gaussian parameters μ_l and σ_l². The normal feature N(S_i) of super-pixel S_i is self-learned with Gaussian parameters μ_n and σ_n². With Sg(S_i) defined as the number of rays crossing super-pixel S_i, the strength feature Sg(S_i) of super-pixel S_i is self-learned. Finally, a Bayesian framework is established to fuse the four features, where p(S_i = R | Obs) denotes the probability that super-pixel S_i belongs to the road area and Obs denotes the observation based on these four features.
2. The road drivable-area detection method based on the fusion of monocular vision and laser radar according to claim 1, characterized in that the method based on minimum filtering defines and finds the initial candidate region of the road area; the specific steps are as follows:
First, the initial candidate region of the road area is defined as R_alt. I_DRM is defined as the "direction ray map", and the (u_i, v_i) coordinates of the point set P_i = (x_i, y_i, z_i, u_i, v_i) are transformed into polar coordinates whose origin is the center point P_base of the bottom row of the picture; then P^(h) = {P_i^(h)} is a proper subset of the point set, where P_i^(h) indicates that the i-th point belongs to angle h, and O^(h) denotes the set of obstacle points in P^(h). The computation proceeds as follows:
1) Initialize I_DRM as an all-zero matrix of the same size as the original input picture;
2) For the set P^(h) of all points of angle h, find the obstacle point set O^(h) therein;
3) If O^(h) is non-empty, construct the corresponding container from it; otherwise, use the default container;
4) Merge the container into I_DRM;
5) If h = H, terminate; otherwise, return to 2).
Second, to deal with the problem of "ray leakage", subsequent processing with the method of minimum filtering is carried out to obtain the expected I_DRM, finally yielding R_alt.
CN201710283453.3A 2017-04-26 2017-04-26 Road drivable-area detection method based on the fusion of monocular vision and laser radar Active CN107167811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710283453.3A CN107167811B (en) 2017-04-26 2017-04-26 Road drivable-area detection method based on the fusion of monocular vision and laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710283453.3A CN107167811B (en) 2017-04-26 2017-04-26 Road drivable-area detection method based on the fusion of monocular vision and laser radar

Publications (2)

Publication Number Publication Date
CN107167811A CN107167811A (en) 2017-09-15
CN107167811B 2019-05-03

Family

ID=59813240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710283453.3A Active CN107167811B (en) 2017-04-26 2017-04-26 Road drivable-area detection method based on the fusion of monocular vision and laser radar

Country Status (1)

Country Link
CN (1) CN107167811B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992850B (en) * 2017-12-20 2020-01-14 大连理工大学 Outdoor scene three-dimensional color point cloud classification method
CN108519773B (en) * 2018-03-07 2020-01-14 西安交通大学 Path planning method for unmanned vehicle in structured environment
CN108509918B (en) * 2018-04-03 2021-01-08 中国人民解放军国防科技大学 Target detection and tracking method fusing laser point cloud and image
CN108932475B (en) * 2018-05-31 2021-11-16 中国科学院西安光学精密机械研究所 Three-dimensional target identification system and method based on laser radar and monocular vision
CN110738223B (en) * 2018-07-18 2022-04-08 宇通客车股份有限公司 Point cloud data clustering method and device of laser radar
FR3085022B1 (en) * 2018-08-16 2020-10-16 Psa Automobiles Sa PROCESS FOR DETERMINING A TRUST INDEX ASSOCIATED WITH AN OBJECT DETECTED BY A SENSOR IN THE ENVIRONMENT OF A MOTOR VEHICLE.
CN109358335B (en) * 2018-09-11 2023-05-09 北京理工大学 Range finder combining solid-state area array laser radar and double CCD cameras
CN109239727B (en) * 2018-09-11 2022-08-05 北京理工大学 Distance measurement method combining solid-state area array laser radar and double CCD cameras
CN109444911B (en) * 2018-10-18 2023-05-05 哈尔滨工程大学 Unmanned ship water surface target detection, identification and positioning method based on monocular camera and laser radar information fusion
DK180774B1 (en) * 2018-10-29 2022-03-04 Motional Ad Llc Automatic annotation of environmental features in a map during navigation of a vehicle
CN109543600A (en) * 2018-11-21 2019-03-29 成都信息工程大学 A kind of realization drivable region detection method and system and application
CN109858460B (en) * 2019-02-20 2022-06-10 重庆邮电大学 Lane line detection method based on three-dimensional laser radar
CN109696173A (en) * 2019-02-20 2019-04-30 苏州风图智能科技有限公司 A kind of car body air navigation aid and device
CN109917419B (en) * 2019-04-12 2021-04-13 中山大学 Depth filling dense system and method based on laser radar and image
CN110378196B (en) * 2019-05-29 2022-08-02 电子科技大学 Road visual detection method combining laser point cloud data
CN110488320B (en) * 2019-08-23 2023-02-03 南京邮电大学 Method for detecting vehicle distance by using stereoscopic vision
CN110781720B (en) * 2019-09-05 2022-08-19 国网江苏省电力有限公司 Object identification method based on image processing and multi-sensor fusion
CN111582280B (en) * 2020-05-11 2023-10-17 吉林省森祥科技有限公司 Data deep fusion image segmentation method for multispectral rescue robot
CN111898687B (en) * 2020-08-03 2021-07-02 成都信息工程大学 Radar reflectivity data fusion method based on Delaunay triangulation
CN112633326B (en) * 2020-11-30 2022-04-29 电子科技大学 Unmanned aerial vehicle target detection method based on Bayesian multi-source fusion
CN112749662B (en) * 2021-01-14 2022-08-05 东南大学 Method for extracting travelable area in unstructured environment based on laser radar
CN113284163B (en) * 2021-05-12 2023-04-07 西安交通大学 Three-dimensional target self-adaptive detection method and system based on vehicle-mounted laser radar point cloud
CN115984583B (en) * 2022-12-30 2024-02-02 广州沃芽科技有限公司 Data processing method, apparatus, computer device, storage medium, and program product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103645480A (en) * 2013-12-04 2014-03-19 北京理工大学 Geographic and geomorphic characteristic construction method based on laser radar and image data fusion
CN105989334A (en) * 2015-02-12 2016-10-05 中国科学院西安光学精密机械研究所 Monocular vision-based road detection method
CN106529417A (en) * 2016-10-17 2017-03-22 北海益生源农贸有限责任公司 Visual and laser data integrated road detection method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6753876B2 (en) * 2001-12-21 2004-06-22 General Electric Company Method for high dynamic range image construction based on multiple images with multiple illumination intensities
CN103760569B (en) * 2013-12-31 2016-03-30 西安交通大学 A kind of drivable region detection method based on laser radar
CN104569998B (en) * 2015-01-27 2017-06-20 长春理工大学 The detection method and device in the vehicle safe driving region based on laser radar

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103645480A (en) * 2013-12-04 2014-03-19 北京理工大学 Geographic and geomorphic characteristic construction method based on laser radar and image data fusion
CN105989334A (en) * 2015-02-12 2016-10-05 中国科学院西安光学精密机械研究所 Monocular vision-based road detection method
CN106529417A (en) * 2016-10-17 2017-03-22 北海益生源农贸有限责任公司 Visual and laser data integrated road detection method

Also Published As

Publication number Publication date
CN107167811A (en) 2017-09-15

Similar Documents

Publication Publication Date Title
CN107167811B (en) Road drivable-area detection method based on the fusion of monocular vision and laser radar
Caraffi et al. Off-road path and obstacle detection using decision networks and stereo vision
Zhou et al. Self‐supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain
Sun et al. Aerial 3D building detection and modeling from airborne LiDAR point clouds
Kühnl et al. Monocular road segmentation using slow feature analysis
CN110956651A (en) Terrain semantic perception method based on fusion of vision and vibrotactile sense
Alaba et al. Deep learning-based image 3-d object detection for autonomous driving
CN107491071B (en) Intelligent multi-robot cooperative mapping system and method thereof
CN103632167B (en) Monocular vision space recognition method under class ground gravitational field environment
CN114724120B (en) Vehicle target detection method and system based on radar vision semantic segmentation adaptive fusion
CN104850850A (en) Binocular stereoscopic vision image feature extraction method combining shape and color
CN110675415B (en) Road ponding area detection method based on deep learning enhanced example segmentation
CN105869178A (en) Method for unsupervised segmentation of complex targets from dynamic scene based on multi-scale combination feature convex optimization
Gao et al. Fine-grained off-road semantic segmentation and mapping via contrastive learning
CN106529417A (en) Visual and laser data integrated road detection method
CN111914615A (en) Fire-fighting area passability analysis system based on stereoscopic vision
CN103198475A (en) Full-focus synthetic aperture perspective imaging method based on multilevel iteration visualization optimization
CN114782729A (en) Real-time target detection method based on laser radar and vision fusion
Wang et al. Multi-cue road boundary detection using stereo vision
Yan et al. Sparse semantic map building and relocalization for UGV using 3D point clouds in outdoor environments
CN116597122A (en) Data labeling method, device, electronic equipment and storage medium
Laupheimer et al. The importance of radiometric feature quality for semantic mesh segmentation
Sanberg et al. Extending the stixel world with online self-supervised color modeling for road-versus-obstacle segmentation
CN105160324B (en) A kind of vehicle checking method based on space of components relationship
CN103646397A (en) Real-time synthetic aperture perspective imaging method based on multi-source data fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant