CN106570495A - Road detection method under complex environment


Info

Publication number
CN106570495A
CN106570495A (application CN201611031981.1A)
Authority
CN
China
Prior art keywords
image
road
texture
vehicle
images
Prior art date
Legal status
Withdrawn
Application number
CN201611031981.1A
Other languages
Chinese (zh)
Inventor
陈锡清
Current Assignee
Nanning Haofa Technology Co Ltd
Original Assignee
Nanning Haofa Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanning Haofa Technology Co Ltd filed Critical Nanning Haofa Technology Co Ltd
Priority to CN201611031981.1A
Publication of CN106570495A
Legal status: Withdrawn


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 — Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a road detection method for complex environments. The method comprises the following steps: S1, acquiring road-surface information of the travelled road in real time with a camera mounted in the vehicle, splitting the captured driving video into a sequence of images frame by frame, and selecting images for pre-processing according to a selection rule; S2, segmenting the pre-processed images by image texture into several different regions with similar texture and structure; S3, within the road regions of similar texture, selecting two symmetrical texture areas ahead of the vehicle's driving direction, performing low-rank decomposition, and obtaining the low-rank texture directions of the road regions and their extension lines; S4, merging the images to finally obtain the vanishing point of the image; S5, in parallel with step S2, performing saliency detection on the images to obtain the salient targets of the images; S6, combining the detected vanishing point to determine the driving direction of the vehicle and its trend of change.

Description

A road detection method for complex environments
Technical field
The present invention relates in particular to a road detection method for complex environments.
Background technology
With the wide application of science and technology and the continuous development of the social economy, high-precision electronic equipment is constantly being developed and updated, and falling prices have made its uses ever broader. Against this background of rapid development, people increasingly value comfort, speed and safety when travelling. More and more people therefore pay attention to the intelligence of vehicles, hoping to obtain more helpful systems that make travel convenient and avoid the harm caused by accidents.
An intelligent vehicle system requires many technologies, such as video acquisition, road detection and data processing. As a basic core technology of intelligent driving systems, road detection has become more important as research deepens, and its study has become an important theme in computer vision, with major applications in automatic driving, vehicle collision warning and pedestrian detection. The main purpose of road detection is to search the environment around the vehicle for regions that the vehicle can pass through, including the driving direction on the road and the passable road region. Because unmanned driving in real outdoor environments is affected by many environmental factors, the application of road detection methods is severely restricted, and many algorithms perform poorly under complex environments. To obtain better detection results, more advanced and practical methods need to be continually explored.
Summary of the invention
The technical problem to be solved by the present invention is to provide a road detection method for complex environments.
The road detection method for complex environments comprises the following steps (a minimal end-to-end sketch follows this list):
S1: Road-surface information of the travelled road is collected in real time by a camera mounted in the vehicle; the captured driving video is split into a sequence of images frame by frame, and images are selected for pre-processing according to a selection rule;
S2: The pre-processed images are segmented by image texture into several different regions with similar texture and structure;
S3: Within the road regions of similar texture, two symmetrical texture areas ahead of the vehicle's driving direction are selected and low-rank decomposition is performed to obtain the low-rank texture directions of the road regions and their extension lines;
S4: The images are merged to finally obtain the vanishing point of the image;
S5: In parallel with step S2, saliency detection is performed on the images to obtain the salient targets of the images;
S6: Combining the vanishing point detected in step S4, the driving direction of the vehicle and its trend of change are determined.
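To make the flow of S1–S6 concrete, a minimal end-to-end sketch is given below. It is only an illustration: the frame-selection rule (every N-th frame), the histogram-equalisation pre-processing and all helper names (segment_by_texture, detect_vanishing_point, detect_saliency, decide_direction) are assumptions introduced here, not definitions from the patent; the helpers are sketched further below.

```python
# Hypothetical end-to-end sketch of steps S1-S6; every helper is a placeholder
# for the corresponding step described in the text and is sketched later.
import cv2

def process_driving_video(video_path, frame_step=10):
    cap = cv2.VideoCapture(video_path)
    decisions = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % frame_step == 0:                         # S1: simple selection rule
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            gray = cv2.equalizeHist(gray)                 # S1: assumed pre-processing
            regions = segment_by_texture(gray)            # S2: texture segmentation
            vp = detect_vanishing_point(gray, regions)    # S3-S4: vanishing point
            sal = detect_saliency(frame)                  # S5: saliency map
            decisions.append(decide_direction(vp, sal, frame.shape[1]))  # S6
        idx += 1
    cap.release()
    return decisions
```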
Further, the image texture segmentation method is as follows:
S2-1: First, the original image $I_0^{0}$ is subjected to an L-level wavelet decomposition (implemented, for example, in Matlab):
$I_{l+1}^{0} = H_c H_r I_l^{0}$
$I_{l+1}^{1} = G_c H_r I_l^{0}$
$I_{l+1}^{2} = H_c G_r I_l^{0}$
$I_{l+1}^{3} = G_c G_r I_l^{0}$
where H and G denote the low-pass and high-pass filters, the subscripts r and c denote filtering along rows and columns, and $l \in \{1, 2, \ldots, L\}$;
The mean of a sub-band over a region R of N pixels is
$\mu = \frac{1}{N}\sum_{(x,y)\in R} I(x,y)$
and the variance is
$V = \frac{1}{N}\sum_{(x,y)\in R}\bigl(I(x,y)-\mu\bigr)^{2}$.
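By way of illustration, the sub-band statistics of S2-1 could be computed with PyWavelets roughly as follows; the 'haar' wavelet and the window size w are assumptions made only for this sketch, not values prescribed by the method.

```python
# Sketch of S2-1: L-level wavelet decomposition and windowed mean/variance
# features per sub-band (wavelet choice and window size are assumed).
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def wavelet_texture_features(gray, levels=2, wavelet="haar", w=9):
    gray = gray.astype(np.float64)
    features = []
    approx = gray
    for _ in range(levels):
        approx, (horiz, vert, diag) = pywt.dwt2(approx, wavelet)
        for band in (approx, horiz, vert, diag):
            mean = uniform_filter(band, size=w)                      # windowed mean
            var = uniform_filter(band * band, size=w) - mean * mean  # windowed variance
            features.append((mean, np.maximum(var, 0.0)))
    return features  # one (mean, variance) pair per sub-band and level
```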
S2-2: The texture features of each dimension obtained above are then smoothed over quadrant sub-windows. Using the mean and variance formulas, at decomposition level l the mean and variance of the four w × w neighbourhood sub-window image regions centred on (x, y) are obtained, and the minimum variance is
$V_{\min}^{k}(x,y) = \min\{\,V_{l,k}^{e}(x,y),\ e = 1, 2, 3, 4\,\}$;
S2-3: Finally, k-means clustering is applied to these features using a kd-tree data structure:
1) First, a binary kd-tree is built to store the multidimensional clustering data. An L-level wavelet decomposition is applied to the original image; when L = 2, a 4-dimensional feature vector is formed at each position of the decomposed images of each level, and the images of the second decomposition level are expanded by a factor of 2 horizontally and vertically, so that an 8-dimensional feature vector is formed.
2) Next, new cluster centres are computed. With a fixed number of clusters k (k ≥ 2) and initial cluster centres $Z^{0}$, initialise n = 1; for each node u, the centre of the cluster-centre set $Z^{n-1}$ formed at step n−1 that is nearest to the midpoint of the cell is
$Z^{*} = \min\{\,\|Z_i^{n-1} - (C_{\min} + C_{\max})/2\|^{2}\,\},\quad i = 1, 2, \ldots, k,$
where C is assumed to be the smallest box containing all the data, $C_{\min}$ is the minimum of C along the splitting dimension and $C_{\max}$ is the maximum of C along the splitting dimension. If $\|z - C\| > \|Z^{*} - C\|$ for $z \in Z^{n-1}\setminus\{Z^{*}\}$, then z is filtered out, and the new cluster centre is
$Z_i^{n} = Z_i^{n}.LS \,/\, Z_i^{n}.count,\quad i = 1, 2, \ldots, k,$
where $Z_i^{n}.LS$ and $Z_i^{n}.count$ denote, respectively, the linear sum of the data points nearest to the i-th cluster centre and the number of those data points. The filtering process is iterated until the cluster centres no longer change, finally yielding a well-segmented texture image.
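The clustering of S2-3 can be sketched as follows. For brevity the kd-tree is built on the current centres and used only for nearest-centre assignment, which captures the k-means update (LS/count) but not the full filtering algorithm over a kd-tree of the data described above; it should be read as an illustration under that simplification.

```python
# Sketch of S2-3: k-means over per-pixel texture feature vectors, with a
# kd-tree used for nearest-centre assignment (simplified filtering step).
import numpy as np
from scipy.spatial import cKDTree

def kmeans_texture(features, k=4, iters=50, seed=0):
    """features: (n_pixels, n_dims) array of per-pixel texture features."""
    rng = np.random.default_rng(seed)
    centres = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        labels = cKDTree(centres).query(features)[1]   # nearest centre per point
        new_centres = np.array([
            features[labels == i].mean(axis=0)         # LS / count for cluster i
            if np.any(labels == i) else centres[i]
            for i in range(k)
        ])
        if np.allclose(new_centres, centres):          # centres no longer change
            break
        centres = new_centres
    return labels, centres
```

Reshaping the labels back to the image grid then gives the regions of similar texture used in step S3.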
Further, the saliency detection method is as follows:
S5-1: First, the mean pixel value of the image is computed in Lab colour space;
S5-2: A Gaussian blur is then applied to the image to remove texture detail and noise. The DoG filter is
$DoG(x,y) = \frac{1}{2\pi}\left[\frac{1}{\sigma_1^{2}} e^{-\frac{x^{2}+y^{2}}{2\sigma_1^{2}}} - \frac{1}{\sigma_2^{2}} e^{-\frac{x^{2}+y^{2}}{2\sigma_2^{2}}}\right] = G(x,y,\sigma_1) - G(x,y,\sigma_2)$
where $\sigma_1$ and $\sigma_2$ are the Gaussian standard deviations;
S5-3: Finally, the salient region of the image is obtained using the formula $S(x,y) = \|I_v - I_{whc}(x,y)\|$, where S is the saliency map, $I_v$ is the image's mean colour-and-luminance feature vector (from S5-1), $I_{whc}$ is the corresponding Gaussian-blurred image pixel vector, and $\|\cdot\|$ is the $L_2$ norm.
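The S5-1 to S5-3 computation is close in spirit to frequency-tuned saliency; the sketch below uses a single Gaussian blur in place of an explicit DoG pair, and the kernel size is an assumption made for the sketch.

```python
# Sketch of S5: per-pixel saliency as the L2 distance between the image's
# mean Lab vector and a Gaussian-blurred Lab image.
import cv2
import numpy as np

def detect_saliency(bgr_image):
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB).astype(np.float64)
    mean_vec = lab.reshape(-1, 3).mean(axis=0)         # S5-1: mean Lab vector
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)         # S5-2: remove texture/noise
    sal = np.linalg.norm(blurred - mean_vec, axis=2)   # S5-3: ||I_v - I_whc(x, y)||
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)
```

Thresholding the normalised map yields the salient targets (vehicles, pedestrians and other obstacles ahead) referred to in step S6.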
Texture segmentation is a means of region segmentation and clustering using texture features. It mainly divides an image into a set of finite regions with relatively uniform texture, and involves texture feature extraction, texture boundary processing and texture feature classification. The purpose of texture segmentation is to serve computer vision in image processing and image understanding. After texture segmentation, several texture blocks are selected within the same or similar texture regions of the image; the texture direction can then be obtained by feature extraction, which provides a more convenient and faster way of finding the vanishing point.
Texture direction is a basic feature of texture images. It is a regional concept: an isolated pixel has no direction, and only the statistics over a sufficiently large neighbourhood can determine the directionality of a texture. Texture directionality has strong practical significance; it can, for example, predict the direction of change of satellite cloud images or the direction in which geological erosion occurs.
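A common way to estimate such a statistical direction over a neighbourhood is the gradient structure tensor; the sketch below uses this standard technique purely as an illustration (the patent itself derives the direction from a low-rank decomposition of the selected texture region).

```python
# Sketch: dominant orientation of a texture region via the structure tensor
# (an illustrative standard technique, not the patent's low-rank decomposition).
import cv2
import numpy as np

def dominant_orientation(gray_region):
    gx = cv2.Sobel(gray_region, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_region, cv2.CV_64F, 0, 1, ksize=3)
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    theta_grad = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)  # dominant gradient direction
    return theta_grad + np.pi / 2.0                       # texture runs perpendicular to it
```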
The vanishing point: according to the principle of perspective projection, parallel lines in three-dimensional space, when projected onto the image plane, intersect at a point, namely the vanishing point. The vanishing point contains the direction information of the lines; by analysing the vanishing point, the three-dimensional structure and orientation information of the scene can be effectively obtained, which helps in understanding the scene.
Every image has different textures. Because of the particular nature of roads, both structured and unstructured roads show a certain directionality, which objectively provides the basis for vanishing-point research, and texture direction offers a simpler and faster way of studying and finding the vanishing point of an image. To obtain the vanishing point, the two images containing the low-rank texture directions are merged into a single image in which the vanishing point can be found: the low-rank texture directions of the two selected regions are computed, the extension line of each region's texture direction is found, and after the two images are merged the extension lines intersect at a point, which is the vanishing point.
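Once the two extension lines are known, the vanishing point is simply their intersection. A small sketch of this geometric step follows; it assumes each line is described by a point inside its texture region together with its direction angle (for example, from the orientation estimate above), and a full detect_vanishing_point would first select the two regions, estimate their directions and then call this helper.

```python
# Sketch of S3-S4: vanishing point as the intersection of the two
# texture-direction extension lines (each line given as point + angle).
import numpy as np

def intersect_direction_lines(p1, theta1, p2, theta2):
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    a = np.column_stack([d1, -d2])                 # solve p1 + t*d1 = p2 + s*d2
    if abs(np.linalg.det(a)) < 1e-9:
        return None                                # lines (nearly) parallel
    t, _ = np.linalg.solve(a, np.asarray(p2, float) - np.asarray(p1, float))
    return tuple(np.asarray(p1, float) + t * d1)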
The visually salient object of an image is what can be found quickly among the large amount of visual information in the image; it is the object that is "interesting" and "meaningful" to the visual system. Selective visual attention exists because the visual system cannot process massive amounts of visual information in parallel, nor does it need to process all information indiscriminately.
While driving, the vehicles, pedestrians and other obstacles ahead are the targets people pay the most attention to. In the images extracted from the video, the salient objects are exactly the targets that must be noted and avoided during driving.
The beneficial effects of the invention are as follows:
The texture direction is used as the basis for detecting the road vanishing point: within the same texture region, the textures have similar low-rank texture directions, and these texture directions serve as the basis for judging the vanishing point. Changes and movement of the vanishing point between images determine whether the driving direction of the vehicle has changed. The salient objects in the image are treated as obstacles, and together with the vanishing-point direction it is determined whether the driving vehicle needs to change direction. By combining the change of the vanishing-point direction with the salient objects on the road during driving, the driving direction of the vehicle is adjusted reasonably, ensuring stable vehicle operation.
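Purely as an illustration of how step S6 could combine these two cues, the sketch below compares the horizontal drift of the vanishing point against the image centre and checks for a salient obstacle near the vanishing-point direction; the thresholds and the decision rule itself are assumptions, not values given by the patent.

```python
# Hypothetical sketch of S6: combine vanishing-point drift and salient
# obstacles to decide whether the driving direction should change.
def decide_direction(vp, saliency, image_width, drift_thresh=0.05, sal_thresh=0.6):
    if vp is None:
        return "no_estimate"
    offset = (vp[0] - image_width / 2.0) / image_width   # normalised horizontal drift
    cx = int(round(vp[0]))
    lo, hi = max(cx - 20, 0), min(cx + 20, saliency.shape[1])
    obstacle_ahead = hi > lo and saliency[:, lo:hi].max() > sal_thresh
    if obstacle_ahead:
        return "avoid_obstacle"
    if offset > drift_thresh:
        return "trend_right"
    if offset < -drift_thresh:
        return "trend_left"
    return "keep_straight"
```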
Specific embodiment
The present invention is further elaborated below with specific examples, which are not to be taken as limiting the invention.
The road detection method for complex environments comprises the following steps:
S1: Road-surface information of the travelled road is collected in real time by a camera mounted in the vehicle; the captured driving video is split into a sequence of images frame by frame, and images are selected for pre-processing according to a selection rule;
S2: The pre-processed images are segmented by image texture into several different regions with similar texture and structure;
S3: Within the road regions of similar texture, two symmetrical texture areas ahead of the vehicle's driving direction are selected and low-rank decomposition is performed to obtain the low-rank texture directions of the road regions and their extension lines;
S4: The images are merged to finally obtain the vanishing point of the image;
S5: In parallel with step S2, saliency detection is performed on the images to obtain the salient targets of the images;
S6: Combining the vanishing point detected in step S4, the driving direction of the vehicle and its trend of change are determined.
The image texture segmentation method is as follows:
S2-1: First, the original image $I_0^{0}$ is subjected to an L-level wavelet decomposition (implemented, for example, in Matlab):
$I_{l+1}^{0} = H_c H_r I_l^{0}$
$I_{l+1}^{1} = G_c H_r I_l^{0}$
$I_{l+1}^{2} = H_c G_r I_l^{0}$
$I_{l+1}^{3} = G_c G_r I_l^{0}$
where H and G denote the low-pass and high-pass filters, the subscripts r and c denote filtering along rows and columns, and $l \in \{1, 2, \ldots, L\}$;
The mean of a sub-band over a region R of N pixels is
$\mu = \frac{1}{N}\sum_{(x,y)\in R} I(x,y)$
and the variance is
$V = \frac{1}{N}\sum_{(x,y)\in R}\bigl(I(x,y)-\mu\bigr)^{2}$.
S2-2: The texture features of each dimension obtained above are then smoothed over quadrant sub-windows. Using the mean and variance formulas, at decomposition level l the mean and variance of the four w × w neighbourhood sub-window image regions centred on (x, y) are obtained, and the minimum variance is
$V_{\min}^{k}(x,y) = \min\{\,V_{l,k}^{e}(x,y),\ e = 1, 2, 3, 4\,\}$;
S2-3: Finally, k-means clustering is applied to these features using a kd-tree data structure:
1) First, a binary kd-tree is built to store the multidimensional clustering data. An L-level wavelet decomposition is applied to the original image; when L = 2, a 4-dimensional feature vector is formed at each position of the decomposed images of each level, and the images of the second decomposition level are expanded by a factor of 2 horizontally and vertically, so that an 8-dimensional feature vector is formed.
2) Next, new cluster centres are computed. With a fixed number of clusters k (k ≥ 2) and initial cluster centres $Z^{0}$, initialise n = 1; for each node u, the centre of the cluster-centre set $Z^{n-1}$ formed at step n−1 that is nearest to the midpoint of the cell is
$Z^{*} = \min\{\,\|Z_i^{n-1} - (C_{\min} + C_{\max})/2\|^{2}\,\},\quad i = 1, 2, \ldots, k,$
where C is assumed to be the smallest box containing all the data, $C_{\min}$ is the minimum of C along the splitting dimension and $C_{\max}$ is the maximum of C along the splitting dimension. If $\|z - C\| > \|Z^{*} - C\|$ for $z \in Z^{n-1}\setminus\{Z^{*}\}$, then z is filtered out, and the new cluster centre is
$Z_i^{n} = Z_i^{n}.LS \,/\, Z_i^{n}.count,\quad i = 1, 2, \ldots, k,$
where $Z_i^{n}.LS$ and $Z_i^{n}.count$ denote, respectively, the linear sum of the data points nearest to the i-th cluster centre and the number of those data points. The filtering process is iterated until the cluster centres no longer change, finally yielding a well-segmented texture image.
The saliency detection method is as follows:
S5-1: First, the mean pixel value of the image is computed in Lab colour space;
S5-2: A Gaussian blur is then applied to the image to remove texture detail and noise. The DoG filter is
$DoG(x,y) = \frac{1}{2\pi}\left[\frac{1}{\sigma_1^{2}} e^{-\frac{x^{2}+y^{2}}{2\sigma_1^{2}}} - \frac{1}{\sigma_2^{2}} e^{-\frac{x^{2}+y^{2}}{2\sigma_2^{2}}}\right] = G(x,y,\sigma_1) - G(x,y,\sigma_2)$
where $\sigma_1$ and $\sigma_2$ are the Gaussian standard deviations;
S5-3: Finally, the salient region of the image is obtained using the formula $S(x,y) = \|I_v - I_{whc}(x,y)\|$, where S is the saliency map, $I_v$ is the image's mean colour-and-luminance feature vector (from S5-1), $I_{whc}$ is the corresponding Gaussian-blurred image pixel vector, and $\|\cdot\|$ is the $L_2$ norm.

Claims (3)

1. A road detection method for complex environments, characterised in that it comprises the following steps:
S1: Road-surface information of the travelled road is collected in real time by a camera mounted in the vehicle; the captured driving video is split into a sequence of images frame by frame, and images are selected for pre-processing according to a selection rule;
S2: The pre-processed images are segmented by image texture into several different regions with similar texture and structure;
S3: Within the road regions of similar texture, two symmetrical texture areas ahead of the vehicle's driving direction are selected and low-rank decomposition is performed to obtain the low-rank texture directions of the road regions and their extension lines;
S4: The images are merged to finally obtain the vanishing point of the image;
S5: In parallel with step S2, saliency detection is performed on the images to obtain the salient targets of the images;
S6: Combining the vanishing point detected in step S4, the driving direction of the vehicle and its trend of change are determined.
2. The road detection method according to claim 1, characterised in that the image texture segmentation method is as follows:
S2-1: First, the original image $I_0^{0}$ is subjected to an L-level wavelet decomposition (implemented, for example, in Matlab):
$I_{l+1}^{0} = H_c H_r I_l^{0}$
$I_{l+1}^{1} = G_c H_r I_l^{0}$
$I_{l+1}^{2} = H_c G_r I_l^{0}$
$I_{l+1}^{3} = G_c G_r I_l^{0}$
where H and G denote the low-pass and high-pass filters, the subscripts r and c denote filtering along rows and columns, and $l \in \{1, 2, \ldots, L\}$;
The mean of a sub-band over a region R of N pixels is
$\mu = \frac{1}{N}\sum_{(x,y)\in R} I(x,y)$
and the variance is
$V = \frac{1}{N}\sum_{(x,y)\in R}\bigl(I(x,y)-\mu\bigr)^{2}$.
S2-2: The texture features of each dimension obtained above are then smoothed over quadrant sub-windows. Using the mean and variance formulas, at decomposition level l the mean and variance of the four w × w neighbourhood sub-window image regions centred on (x, y) are obtained, and the minimum variance is
$V_{\min}^{k}(x,y) = \min\{\,V_{l,k}^{e}(x,y),\ e = 1, 2, 3, 4\,\}$;
S2-3: Finally, k-means clustering is applied to these features using a kd-tree data structure:
1) First, a binary kd-tree is built to store the multidimensional clustering data. An L-level wavelet decomposition is applied to the original image; when L = 2, a 4-dimensional feature vector is formed at each position of the decomposed images of each level, and the images of the second decomposition level are expanded by a factor of 2 horizontally and vertically, so that an 8-dimensional feature vector is formed.
2) Next, new cluster centres are computed. With a fixed number of clusters k (k ≥ 2) and initial cluster centres $Z^{0}$, initialise n = 1; for each node u, the centre of the cluster-centre set $Z^{n-1}$ formed at step n−1 that is nearest to the midpoint of the cell is
$Z^{*} = \min\{\,\|Z_i^{n-1} - (C_{\min} + C_{\max})/2\|^{2}\,\},\quad i = 1, 2, \ldots, k,$
where C is assumed to be the smallest box containing all the data, $C_{\min}$ is the minimum of C along the splitting dimension and $C_{\max}$ is the maximum of C along the splitting dimension. If $\|z - C\| > \|Z^{*} - C\|$ for $z \in Z^{n-1}\setminus\{Z^{*}\}$, then z is filtered out, and the new cluster centre is
$Z_i^{n} = Z_i^{n}.LS \,/\, Z_i^{n}.count,\quad i = 1, 2, \ldots, k,$
where $Z_i^{n}.LS$ and $Z_i^{n}.count$ denote, respectively, the linear sum of the data points nearest to the i-th cluster centre and the number of those data points. The filtering process is iterated until the cluster centres no longer change, finally yielding a well-segmented texture image.
3. The road detection method according to claim 1, characterised in that the saliency detection method is as follows:
S5-1: First, the mean pixel value of the image is computed in Lab colour space;
S5-2: A Gaussian blur is then applied to the image to remove texture detail and noise. The DoG filter is
$DoG(x,y) = \frac{1}{2\pi}\left[\frac{1}{\sigma_1^{2}} e^{-\frac{x^{2}+y^{2}}{2\sigma_1^{2}}} - \frac{1}{\sigma_2^{2}} e^{-\frac{x^{2}+y^{2}}{2\sigma_2^{2}}}\right] = G(x,y,\sigma_1) - G(x,y,\sigma_2)$
where $\sigma_1$ and $\sigma_2$ are the Gaussian standard deviations;
S5-3: Finally, the salient region of the image is obtained using the formula $S(x,y) = \|I_v - I_{whc}(x,y)\|$, where S is the saliency map, $I_v$ is the image's mean colour-and-luminance feature vector, $I_{whc}$ is the corresponding Gaussian-blurred image pixel vector, and $\|\cdot\|$ is the $L_2$ norm.
CN201611031981.1A 2016-11-19 2016-11-19 Road detection method under complex environment Withdrawn CN106570495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611031981.1A CN106570495A (en) 2016-11-19 2016-11-19 Road detection method under complex environment


Publications (1)

Publication Number Publication Date
CN106570495A true CN106570495A (en) 2017-04-19

Family

ID=58542968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611031981.1A Withdrawn CN106570495A (en) 2016-11-19 2016-11-19 Road detection method under complex environment

Country Status (1)

Country Link
CN (1) CN106570495A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050681A (en) * 2014-07-04 2014-09-17 哈尔滨工业大学 Road vanishing point detection method based on video images
CN104217438A (en) * 2014-09-19 2014-12-17 西安电子科技大学 Image significance detection method based on semi-supervision
CN104700071A (en) * 2015-01-16 2015-06-10 北京工业大学 Method for extracting panorama road profile
CN106080218A (en) * 2016-07-01 2016-11-09 蔡雄 One can independent navigation cruiser
CN106127178A (en) * 2016-07-01 2016-11-16 蔡雄 A kind of unmanned fire fighting truck

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
侯艳丽 (Hou Yanli) et al.: "Fast texture segmentation algorithm based on wavelet transform and kd-tree clustering" (基于小波变换和kd树聚类的快速纹理分割算法), 《计算机应用》 (Journal of Computer Applications) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862864A (en) * 2017-10-18 2018-03-30 南京航空航天大学 Driving cycle intelligent predicting method of estimation based on driving habit and traffic
CN107909047A (en) * 2017-11-28 2018-04-13 上海信耀电子有限公司 A kind of automobile and its lane detection method and system of application
CN110658353A (en) * 2018-06-29 2020-01-07 比亚迪股份有限公司 Method and device for measuring speed of moving object and vehicle
CN115797631A (en) * 2022-12-01 2023-03-14 复亚智能科技(太仓)有限公司 Road range 1+1 dividing method in different driving directions
CN115797631B (en) * 2022-12-01 2023-12-01 复亚智能科技(太仓)有限公司 Road range 1+1 segmentation method for different driving directions


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20170419