CN103279748B - Road detection method based on SIFT-COF feature optical flow - Google Patents
Road detection method based on SIFT-COF feature optical flow - Download PDF / Info
- Publication number
- CN103279748B CN103279748B CN201310218603.4A CN201310218603A CN103279748B CN 103279748 B CN103279748 B CN 103279748B CN 201310218603 A CN201310218603 A CN 201310218603A CN 103279748 B CN103279748 B CN 103279748B
- Authority
- CN
- China
- Prior art keywords
- feature
- sift
- optical flow
- road
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a road detection method based on SIFT-COF feature optical flow. First, a frame of image of the road is captured and divided into road and non-road regions. Each non-road region is defined as an ROI, and the features inside it are extracted using a hierarchical structure that combines SIFT features with Harris corner features. Matching features between frames forms a feature optical flow, and the position of each feature is judged from the optical-flow computation; finally, traversable and non-traversable areas are determined. The invention establishes a hierarchical structure of SIFT features and Harris corner features, so that the two feature types complement each other: the scale invariance of SIFT features is combined with the uniform spatial distribution of Harris corners. Comparative optical-flow detection experiments on different types of unstructured road demonstrate the effectiveness of the method.
Description
Technical field
The present invention relates to the field of autonomous navigation for intelligent vehicles, and specifically to a SIFT-COF feature optical flow detection method for improving the accuracy of road detection.
Background technology
On unstructured roads, most of the road area shares similar features, which differ from those of non-road regions. Combining road-area segmentation with road-boundary detection can therefore improve the accuracy and robustness of road detection. However, because road materials vary (e.g. gravel, earth, or mixed sand) and ambient illumination changes, local patches of the road may exhibit features that differ from the rest of the road surface. In practice this appears as non-road regions mixed into the road area, so it is necessary to determine whether these regions are actually road or obstacles. Combining non-visual sensor data with vision data can mitigate this problem, but has its own difficulties: unlike a vision sensor, which in theory can see to an unlimited distance, a non-visual sensor may fail to detect an obstacle because of its detection angle (e.g. a single-line laser), or may only detect nearby obstacles because of its limited range. An expensive 64-line laser radar can detect a medium-sized vehicle obstacle at about 100 m, but its price far exceeds the cost of an ordinary vehicle. With these considerations, this work focuses on road detection based on monocular vision, aiming to provide a low-cost solution for the autonomous navigation of intelligent vehicles.
Summary of the invention
The technical problem to be solved by this invention is to provide a road detection method based on SIFT-COF feature optical flow that is both highly accurate and cheap to implement.
A road detection method based on SIFT-COF feature optical flow according to the present invention comprises the following steps:
1) Capture a frame of image of the road ahead of the vehicle with a sensor, and divide the frame into road and non-road regions;
2) Define each non-road region mixed into the road area as a region of interest (ROI);
3) Extract the features inside each ROI and form a feature optical flow by matching features between frames; from the optical-flow computation, judge whether each feature lies in the road plane or outside it;
4) Judge from the vehicle's motion whether each feature was static at the previous moment; a feature region that is static and lies in the road plane is regarded as traversable, otherwise as non-traversable.
Wherein, the feature extraction procedure in step 3) is as follows: for the SIFT feature at any point in the frame, a rectangular coordinate system x_4'o_4'y_4' is established with the dominant orientation of that point as the reference axis, and the positions of all Harris corners in the image are computed in this coordinate system. In this way each SIFT feature point in the image corresponds to one coordinate system, and each Harris corner feature is converted into coordinates in the different coordinate systems, establishing the hierarchical feature structure.
The computation of the feature optical flow in step 3) comprises the following steps:
1) SIFT-Harris compound feature construction: a compound feature is formed from SIFT features and Harris corner features, so that the advantages of the two feature types are combined and their respective shortcomings compensated.
2) Clustering of SIFT features: the SIFT features are clustered; with the match-count threshold between the two images set to 3, two matched sets of SIFT features are found.
3) Matching of Harris corners: for a corner feature in one image, its coordinates are expressed in polar form relative to a SIFT feature; the matching points satisfying the conditions are found and voted for. To reduce computation, voting is performed per SIFT feature group, and after each voting round the Harris corner features that satisfy the matching condition (vote count greater than 3) are removed from their respective feature sets, until all votes are complete.
4) The starting positions of all matched features in the historical frame are mapped into the current image, and each matched Harris corner and SIFT feature pair is connected, forming the feature optical flow in the current image. Assuming the camera remains fixed during motion, any point in the image plane corresponds one-to-one with a point of the road plane in the world coordinate system. From the historical and current positions of each flow vector, its displacement and direction in the world coordinate system can be computed. Since the vehicle's velocity and heading are easily obtained, and a static object in the environment has a displacement equal in magnitude and opposite in direction to that of the vehicle body, moving objects and pedestrians in the environment can be detected.
Beneficial effects of the present invention:
The present invention establishes a hierarchical structure of SIFT features and Harris corner features and proposes a SIFT-COF feature optical flow detection method that analyses the frames of the road ahead captured during navigation and judges whether each region of interest is traversable. The method lets SIFT features and Harris corners complement each other, exploiting the scale invariance of SIFT features and the uniform spatial distribution of Harris corners. Comparative optical-flow detection experiments on different types of unstructured road demonstrate the effectiveness of the method.
Description of the drawings
Fig. 1 is a schematic diagram of the SIFT-Harris hierarchical compound feature of the present invention, wherein:
Fig. 1(a) and Fig. 1(b) show two frames of a sequence, and Fig. 1(c) and Fig. 1(d) show the hierarchical feature structures built by the present invention for Fig. 1(a) and Fig. 1(b) respectively.
Fig. 2 is a schematic diagram of the clustering of SIFT features.
Embodiment
In road environments, the biggest problem traditional optical-flow computation encounters is matching. The uniform grey-level distribution of the road surface means that a window in one frame often has several candidate windows corresponding to it in the next frame, and the window closest in grey level may not be the correct match. The present invention extracts visual features that can be detected repeatedly across images and computes relative displacement by matching them, which is clearly more adaptable and robust than traditional optical-flow methods.
1. SIFT-Harris compound feature construction
Given the application environment of intelligent vehicles, feature selection usually considers the following factors: first, features should be distributed uniformly over the image, to guarantee a reasonably uniform optical flow; second, features should be easy to extract and match, and not easily affected by ambient noise; finally, features should be scale- and affine-invariant, so that as an object is approached the change of scale does not cause mismatches.
Considering several common feature representations: Harris corner features are uniformly distributed and robust to illumination changes, but sensitive to changes of scale; scale and affine changes can shift the positions of Harris corners or even cause corner features to be lost. SIFT features are robust to scale and affine changes and, being extrema of difference-of-Gaussian images, are more stable than Harris corners; the multi-dimensional SIFT descriptor makes features easier to match, and compared with Harris corners its distinctiveness guarantees a higher matching accuracy. The drawback of SIFT features is their sparse distribution, which may not guarantee optical-flow computation within a given ROI. Based on this analysis, SIFT features and Harris corner features are combined into a compound feature that has the advantages of both and compensates for their respective shortcomings.
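To make the trade-off concrete, here is a minimal numpy sketch of the Harris corner response half of the compound feature (the function names and the simple 3x3 box window are my illustrative choices, not from the patent; the SIFT half would come from a standard detector such as OpenCV's SIFT and is not re-implemented here):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris response R = det(M) - k*trace(M)^2 per pixel, where M is the
    2x2 structure tensor of image gradients, here smoothed with a 3x3 box
    window for brevity instead of a Gaussian."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)                 # gradients along rows, cols
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):                              # 3x3 box filter, edge-padded
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

def harris_corners(img, n=50):
    """(row, col) positions of the n strongest Harris responses."""
    R = harris_response(img)
    top = np.argsort(R.ravel())[::-1][:n]
    return np.column_stack(np.unravel_index(top, R.shape))
```

On a synthetic white square the strongest responses cluster at the four corners, while edges (one dominant gradient direction) score near zero or negative, which illustrates the uniform, corner-anchored distribution the patent relies on.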
The hierarchical structure of the SIFT-Harris compound feature is shown in Fig. 1: Fig. 1(a) and Fig. 1(b) show two frames I_a and I_b of a sequence, where s_i (s_i') denote SIFT features and c_j (c_j') denote Harris corner features. For any SIFT feature in the image (take s_4' as an example), a coordinate system x_4'o_4'y_4' is established with the dominant orientation of s_4' as the reference axis, and the positions of all Harris corners c_j' in the image can then be computed in this coordinate system. In this way each s_i' in the image corresponds to one coordinate system, while the coordinates of each c_j' can be computed in the different coordinate systems; the hierarchical feature structure is established in this manner (as shown in Fig. 1(c) and Fig. 1(d)). Although a Harris corner itself is sensitive to scaling, the relative position of a Harris corner with respect to other corners or SIFT features remains fairly stable during the motion of an intelligent vehicle. Scale and affine transformations of objects in the image may shift Harris corners locally, but compared with the positional relationships (distance, angle) to the other features these shifts are negligible; such transformations may also cause some local Harris corners to be lost, but these remain a minority compared with the stably detected corners. Moreover, for a real-time system (10 Hz) the scale and affine changes of image features between consecutive frames are almost negligible, so the hierarchical design of this method is reasonable.
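The per-keypoint coordinate systems described above can be sketched as follows. This is an illustrative numpy fragment under my own naming: every Harris corner is expressed in a polar frame anchored at each SIFT keypoint and rotated to its dominant orientation, which is what makes the relative coordinates stable under rotation:

```python
import numpy as np

def corner_polar_coords(sift_pts, sift_angles, corners):
    """Express every Harris corner in a polar frame per SIFT keypoint.

    sift_pts:    (S, 2) SIFT keypoint (x, y) positions
    sift_angles: (S,)   dominant orientations in radians
    corners:     (C, 2) Harris corner (x, y) positions
    Returns (S, C, 2): (radius, angle) of each corner in each frame.
    """
    d = corners[None, :, :] - sift_pts[:, None, :]       # (S, C, 2) offsets
    r = np.hypot(d[..., 0], d[..., 1])                   # radii
    theta = np.arctan2(d[..., 1], d[..., 0]) - sift_angles[:, None]
    theta = np.mod(theta + np.pi, 2 * np.pi) - np.pi     # wrap to [-pi, pi)
    return np.stack([r, theta], axis=-1)
```

Because the frame rotates with the keypoint's orientation, rotating the whole scene leaves these coordinates unchanged, matching the stability argument above.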
2. Feature optical flow computation
The computation of the feature optical flow depends on correct matching of the SIFT features and Harris corners across the image sequence, and the prerequisite for correct matching is that both feature types share the same motion pattern, i.e. the relative motion between the two kinds of features is almost zero. Since no prior knowledge of the vehicle's environment is available, the background, road and targets in the image cannot be distinguished against a pre-stored feature database of cars, tricycles, motorcycles, pedestrians, trees or rocks; classification can only be based on the motion patterns of the SIFT features.
1) Clustering of SIFT features
Because SIFT features are invariant to ambient illumination, rotation, scale and affine transformation, feature clusters with different motion patterns are often separated naturally, without considering the concrete motion of each feature. In practice, matching often yields several possible results, corresponding to multiple hypotheses about the image features; for the overall matching of two images, the result with the largest number of matches is usually chosen as the final matching scheme. With the match-count threshold between the two images set to 3, two matched groups of SIFT features can be found in Fig. 2. The first group consists of the 4 pairwise-matched SIFT features (s_1, s_2'), (s_2, s_4'), (s_4, s_6') and (s_8, s_9'); the second group consists of the 3 pairwise-matched SIFT features (s_5, s_1'), (s_6, s_3') and (s_7, s_6'). Clearly, the corners c_1 and c_1' keep an unchanged relative position with respect to the second group of SIFT features, but their position with respect to the first group changes over time and does not satisfy the matching condition; likewise, the corners c_2 and c_2' keep an unchanged positional relationship with the first group of SIFT features, while the second group contains more than 3 matching features. Let (s_{m(i,j)}, s_{n(i,j)}'), i = 1, 2, ..., m_j, j = 1, 2, ..., n denote all matched SIFT features, where m(i,j) and n(i,j) are the indices of the i-th matched pair of the j-th SIFT feature class.
2) Matching of Harris corners
For a corner feature c_i in image I_a, suppose its coordinates are expressed in polar form (r, θ) relative to a SIFT feature s_j, and that a corner feature c_k' in image I_b is expressed in polar form (r', θ') relative to a SIFT feature s_m'. When s_j and s_m' match (suppose this feature pair belongs to the n-th group), and if |r - r'| and |θ - θ'| are both below their thresholds, then 1 is added to the vote V_i cast for c_i by this SIFT feature group. The thresholds allow for the position offsets of the SIFT and Harris corner features caused by system noise; when the votes satisfy the matching condition, the corners c_i and c_k' are identified as a match. With the simplest direct matching procedure, for an arbitrary SIFT matching pair of the r groups, the polar coordinates of the m corners of image I_a and the n corners of image I_b are first computed relative to the corresponding SIFT features; voting on whether any two corners match then requires m·n computations, and m·n·r computations in total. Since the number of SIFT features is small, and the number of SIFT feature clusters smaller still, the computational complexity of Harris corner matching is O(n²).
To reduce computation, voting is performed per SIFT feature group; after each voting round, the Harris corner features that satisfy the matching condition (vote count greater than 3) are removed from their respective feature sets, until all votes are complete.
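A sketch of the per-group voting scheme follows. The array layout and tolerance names are my assumptions; the acceptance rule of more than 3 votes is the patent's:

```python
import numpy as np

def match_corners_by_voting(polar_a, polar_b, r_tol=2.0, t_tol=0.1,
                            min_votes=4):
    """Vote-based Harris corner matching, sketched after the patent.

    polar_a: (S, Ca, 2) polar coords of frame-A corners relative to the
             S matched SIFT frames; polar_b: (S, Cb, 2) likewise for
             frame B, row s of each belonging to one matched SIFT pair.
    A corner pair (i, k) receives one vote from each SIFT pair under
    which its radius and angle agree within the tolerances; pairs with
    more than 3 votes (min_votes=4) are accepted.
    """
    S, Ca, _ = polar_a.shape
    Cb = polar_b.shape[1]
    votes = np.zeros((Ca, Cb), int)
    for s in range(S):
        dr = np.abs(polar_a[s, :, None, 0] - polar_b[s, None, :, 0])
        dt = np.abs(polar_a[s, :, None, 1] - polar_b[s, None, :, 1])
        votes += (dr < r_tol) & (dt < t_tol)
    return [(i, k) for i in range(Ca) for k in range(Cb)
            if votes[i, k] >= min_votes]
```

In a full implementation, accepted corners would be removed from their feature sets after each round, as the text describes, shrinking later rounds.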
The starting positions of all matched features in the historical frame are mapped into the current image, and each matched Harris corner and SIFT feature pair is connected, forming the feature optical flow in the current image. Assuming the camera remains fixed during motion, any point in the image plane corresponds one-to-one with a point of the road plane in the world coordinate system. From the historical and current positions of each flow vector, its displacement and direction in the world coordinate system can be computed. Since the vehicle's velocity and heading are easily obtained, and a static object in the environment has a displacement equal in magnitude and opposite in direction to that of the vehicle body, moving objects and pedestrians in the environment can be detected.
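The static/moving test at the end of the flow computation reduces to comparing each flow vector with the vehicle's own displacement. A minimal sketch, assuming the world positions of the flow endpoints have already been recovered via the fixed-camera, road-plane assumption (that recovery step is not implemented here):

```python
import numpy as np

def classify_flow(world_prev, world_curr, vehicle_disp, tol=0.2):
    """Label each flow vector as static (True) or moving (False).

    With the camera fixed on the vehicle and features on the road plane,
    a static point's displacement relative to the vehicle is equal in
    magnitude and opposite in direction to the vehicle's own motion.
    world_prev, world_curr: (N, 2) world positions of flow endpoints
    relative to the vehicle; vehicle_disp: the vehicle's (2,) displacement
    over the frame interval.
    """
    flow = world_curr - world_prev                     # apparent motion
    residual = np.linalg.norm(flow + vehicle_disp, axis=1)
    return residual < tol                              # near zero: static
```

Features failing the test are the moving objects and pedestrians the patent singles out; the tolerance absorbs measurement noise and is an assumed parameter.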
Claims (1)
1. A road detection method based on SIFT-COF feature optical flow, characterized by comprising the following steps:
1) Capture a frame of image of the road ahead of the vehicle with a sensor, and divide the frame into road and non-road regions;
2) Define each non-road region mixed into the road area as a region of interest (ROI);
3) Extract the features inside each ROI and form a feature optical flow by matching features between frames; from the optical-flow computation, judge whether each feature lies in the road plane or outside it;
The feature extraction procedure in the ROI is as follows: for the SIFT feature at any point in the frame, a rectangular coordinate system x_4'o_4'y_4' is established with the dominant orientation of that point as the reference axis, and the positions of all Harris corners in the image are computed in this coordinate system; in this way each SIFT feature point in the image corresponds to one coordinate system, and each Harris corner feature is converted into coordinates in the different coordinate systems, establishing the hierarchical feature structure;
The computation of the feature optical flow comprises the following steps:
3.1) SIFT-Harris compound feature construction: a compound feature is formed from SIFT features and Harris corner features, so that the advantages of the two feature types are combined and their respective shortcomings compensated;
3.2) Clustering of SIFT features: the SIFT features are clustered; with the match-count threshold between the two images set to 3, two matched sets of SIFT features are found;
3.3) Matching of Harris corners: for a corner feature in one image, its coordinates are expressed in polar form relative to a SIFT feature, the matching points satisfying the conditions are found, and votes are cast; to reduce computation, voting is performed per SIFT feature group, and after each voting round the Harris corner features that satisfy the matching condition, i.e. whose vote count is greater than 3, are removed from their respective feature sets, until all votes are complete;
3.4) The starting positions of all matched features in the historical frame are mapped into the current image, and each matched Harris corner and SIFT feature pair is connected, forming the feature optical flow in the current image; assuming the camera remains fixed during motion, any point in the image plane corresponds one-to-one with a point of the road plane in the world coordinate system; from the historical and current positions of each flow vector, its displacement and direction in the world coordinate system can be computed; since the vehicle's velocity and heading are easily obtained, and a static object in the environment has a displacement equal in magnitude and opposite in direction to that of the vehicle body, moving objects and pedestrians in the environment can be detected;
4) Judge from the vehicle's motion whether each feature was static at the previous moment; a feature region that is static and lies in the road plane is regarded as traversable, otherwise as non-traversable.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310218603.4A CN103279748B (en) | 2013-06-04 | 2013-06-04 | Road detection method based on SIFT-COF feature optical flow |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103279748A CN103279748A (en) | 2013-09-04 |
CN103279748B true CN103279748B (en) | 2016-04-20 |
Family
ID=49062263
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310218603.4A Active CN103279748B (en) | Road detection method based on SIFT-COF feature optical flow | 2013-06-04 | 2013-06-04 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103279748B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9595096B2 (en) * | 2014-03-10 | 2017-03-14 | The Boeing Company | Composite inspection and structural check of multiple layers |
CN104700072B (en) * | 2015-02-06 | 2018-01-19 | 中国科学院合肥物质科学研究院 | Recognition methods based on lane line historical frames |
CN105825523B (en) * | 2016-03-11 | 2019-07-19 | 南京航空航天大学 | A kind of quick mutative scale runway tracking towards fixed-wing UAV Landing |
CN110288050B (en) * | 2019-07-02 | 2021-09-17 | 广东工业大学 | Hyperspectral and LiDar image automatic registration method based on clustering and optical flow method |
CN112734817A (en) * | 2021-01-15 | 2021-04-30 | 北京眸星科技有限公司 | Image registration method |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441076A (en) * | 2008-12-29 | 2009-05-27 | 东软集团股份有限公司 | Method and device for detecting barrier |
Non-Patent Citations (4)
Title |
---|
Dominant plane detection from optical flow for robot navigation; Naoya Ohnishi et al.; Pattern Recognition Letters; 2006; full text *
Vehicle detection based on feature-point optical flow; Xu Lei et al.; Journal of Chongqing University of Technology (Natural Science); Sept. 2011, No. 9; pp. 6-10 *
Image tracking system and its application to vehicle safety; Yang Zhijie; Master's thesis, National Chiao Tung University; 2004; pp. 19-23 *
Algorithm for determining regions of interest in vehicle video detection; Xu Guoyan et al.; Journal of Beijing University of Aeronautics and Astronautics; July 2010, Vol. 36, No. 7; pp. 781-784 *
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C14 | Grant of patent or utility model | |
| GR01 | Patent grant | |
2021-03-10 | TR01 | Transfer of patent right | Effective date of registration: 2021-03-10. Address after: 210049 Sanbao Science and Technology Park, 10 Ma Qun Road, Qixia District, Nanjing, Jiangsu. Patentee after: JIANGSU INTELLITRAINS Co.,Ltd. Address before: 210049 Sanbao Science and Technology Park, 10 Ma Qun Road, Qixia District, Nanjing, Jiangsu. Patentee before: NANJING SAMPLE TECHNOLOGY Co.,Ltd. |