CN106778551B - Method for identifying highway section and urban road lane line

Method for identifying highway section and urban road lane line

Info

Publication number
CN106778551B
CN106778551B (application CN201611084618.6A)
Authority
CN
China
Prior art keywords
lane
model
image
lane line
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611084618.6A
Other languages
Chinese (zh)
Other versions
CN106778551A (en)
Inventor
成剑
沙涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201611084618.6A
Publication of CN106778551A
Application granted
Publication of CN106778551B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method for identifying lane lines on highway sections and urban roads, which comprises the following steps: acquiring a lane image with a camera and obtaining the camera's internal and external parameters with a camera parameter self-calibration method to obtain the lane-plane vanishing line; graying the lane image and dividing the region of interest; applying median filtering to the resulting image; extracting lane line texture features from the median-filtered image by Gabor transformation; describing lane line edge features with multi-angle Haar features and performing classification and identification with an Adaboost classifier; designing a hyperbolic curve combined model for the structured road; and estimating the parameters of the hyperbolic curve combined model with an improved RANSAC algorithm. The invention solves the problem of lane line detection in complex environments with shadows, worn lane lines, bad weather and the like, and has good real-time performance.

Description

Method for identifying highway section and urban road lane line
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method for identifying lane lines of highway sections and urban roads.
Background
Lane line identification is a core component of advanced driver assistance systems. Because vision sensors are inexpensive and easy to build into a system, machine-vision-based lane line detection is widely applied. Such methods can generally be divided into two categories: image-feature-based methods and model-based identification methods.
Image-feature-based methods mainly identify grayscale edges in grayscale images, while model-based methods establish a mathematical model to represent the lane boundaries. Model-based identification is commonly used on urban streets and highways, and common lane line detection models include straight-line, hyperbolic, parabolic and spline-curve models. Simple models do not represent lane lines well, while complex models require heavy computation and have high error rates.
Existing methods can detect well in many scenes, but under poor lane line conditions such as shadows, trees, varying illumination and lane line wear, non-lane-line feature points are often identified as lane line feature points during detection, which biases the parameter estimation.
Disclosure of Invention
The invention aims to provide a method for identifying lane lines on highway sections and urban roads that solves the problem of lane lines being difficult to detect under interference from various factors while retaining real-time performance.
The technical scheme for realizing the purpose of the invention is as follows: a method for identifying highway sections and urban road lane lines comprises the following steps:
step 1, acquiring a lane image through a camera, and obtaining internal and external parameters of the camera by using a camera parameter self-calibration method to obtain a lane plane vanishing line;
step 2, carrying out gray processing on the lane image and dividing an interested area;
step 3, performing median filtering on the image obtained in the step 2;
step 4, extracting the texture features of the lane lines from the image after median filtering by Gabor transformation;
step 5, describing lane line edge characteristics by adopting multi-angle Haar characteristics, and performing classification and identification by using an Adaboost classifier;
step 6, designing a hyperbolic curve combination model aiming at the structured road;
step 7, estimating parameters of the hyperbolic curve combined model by using an improved RANSAC algorithm.
Compared with the prior art, the invention has the following remarkable effects:
(1) the method extracts lane line feature points with multi-angle Haar features, which enhances the lane line edge features that have strong directional consistency;
(2) the invention combines Haar features with an improved Adaboost-classifier recognition algorithm, giving higher recognition accuracy and better real-time performance;
(3) the method uses an improved RANSAC algorithm to better adapt the estimation of the lane line model parameters to complex conditions, improving accuracy and enhancing real-time performance.
Drawings
Fig. 1 is a flowchart of the highway section and urban road lane line identification method of the present invention.
FIG. 2 is a schematic diagram of designing a multi-angle Haar feature.
Fig. 3 is a schematic diagram of the hyperbolic curve combined model.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and embodiments.
With reference to fig. 1, the method for identifying lane lines of highway sections and urban roads of the invention comprises the following steps:
step 1, acquiring a lane image through a camera, and obtaining internal and external parameters of the camera by using a camera parameter self-calibration method to obtain a lane plane vanishing line;
step 2, carrying out gray processing on the lane image and dividing an interested area;
step 3, performing median filtering on the image obtained in the step 2;
step 4, extracting the texture features of the lane lines from the image after median filtering by Gabor transformation;
step 5, describing lane line edge characteristics by adopting multi-angle Haar characteristics, and performing classification and identification by using an Adaboost classifier;
step 6, designing a hyperbolic curve combination model aiming at the structured road;
step 7, estimating parameters of the hyperbolic curve combined model by using an improved RANSAC algorithm.
Further, in step 1, an image coordinate system and a world coordinate system are first established and the point-coordinate correspondence between them is defined; the internal parameters are determined by the camera calibration procedure, the position parameters of the lane line in space coordinates are calculated from this correspondence, and the position of the lane vanishing line is then obtained from the internal and external parameters.
Further, in step 2, the acquired RGB image is converted to grayscale using channel weights in the ratio 5:4:1.
Further, the region of interest is set below the vanishing line: the 1/3 of the area immediately below the vanishing line is set as region of interest I, and the remaining 2/3 is set as region of interest II.
Further, the specific process of step 5 is as follows:
step 5-1, adding 30-degree and 60-degree tilted features to the 0-degree, 90-degree and 45-degree orientations used by standard Haar features, and placing a non-tilted rectangle around each 30-degree, 45-degree and 60-degree tilted rectangle so that its four sides pass through the four vertices of the tilted rectangle;
assuming a window of L × W pixels and a Haar feature H(x, y, l, w, α) with vertex coordinates (x, y), rectangular region length l, width w and tilt angle α, the length l and width w of the rectangle rotated by α are given by
[equation image: expressions for l and w of the rotated rectangle]
where a is the length of the rotated rectangle and b is its width; the integer obtained by rounding
[equation image: scaling-coefficient expression]
is used as the scaling coefficient to obtain the number of Haar features of the α-tilted rectangle, and the formula
[equation image: count of Haar feature rectangles containing lane line edge features]
is used to calculate the number of Haar feature rectangles containing lane line edge features, where
[equation image: rectangle-sum expression]
RecSum(x, y) is the integral-image calculation formula;
step 5-2, setting the weight of the weak classifier based on an AD-Adaboost algorithm by using a parameter calculation formula, wherein the weight is related not only to the false alarm rate but also to the ability to identify positive samples;
given a training sample set (x_1, y_1), (x_2, y_2), ..., (x_N, y_N), where y_i = 0 for negative samples and y_i = 1 for positive samples, the weights are initialized to w_1(i) = 1/(2q) for negative samples and w_1(i) = 1/(2p) for positive samples, q and p being the numbers of negative and positive samples respectively; the algorithm constructs a strong classifier through T cycles, where for t = 1, 2, ..., T the classification error of weak classifier h_j is computed as
[equation image: classification error ε_t of weak classifier h_j]
the weak classifier H_t with the minimum current classification error ε_t is selected;
for H_t, the sum of positive weights is calculated:
[equation image: sum of positive-sample weights for H_t]
the weights are then updated:
[equation image: weight-update formula]
where the weak classifier weight parameter is
[equation image: expression for the weak classifier weight parameter β_t]
the weights for the next cycle are then normalized:
[equation image: weight-normalization formula]
where Z_t is a normalization factor,
[equation image: definition of Z_t]
the T weak classifiers obtained are combined into a strong classifier:
[equation image: strong classifier h(x)]
where θ is the discrimination threshold of the classification error rate and α_t = -log β_t.
When h(x) = 1, the pixel point x is a lane line feature point; when h(x) = 0, x is not a lane line feature point. In the image f(x, y), the pixel points for which h(x) = 0 are removed, and the remaining pixel points are the lane line feature points.
Further, in step 7, the RANSAC algorithm is improved by adopting a pre-detection method to increase the running speed, specifically as follows:
finding 4 points each time, wherein 3 points are used to fit the model and obtain the model parameters, and the remaining point is used for model matching by judging whether it lies on the model; if not, the sample is discarded and reselected; if so, the model is taken as a candidate model;
the remaining points are then searched for model matching, and if the number of points supporting the candidate model is greater than or equal to a set threshold value, the candidate model is the target model; if the number of points supporting the candidate model is less than the set threshold value, the candidate model is discarded and another 4 points are searched.
The present invention will be further described with reference to the following specific examples.
Examples
With reference to fig. 1, a method for identifying highway sections and urban road lane lines includes the following steps:
step 1, acquiring a road image through a camera, and calculating to obtain related parameters
The image coordinates obtained by the camera are (u, v), and the corresponding coordinates on the road-surface coordinate system are (x, y). The camera is mounted at a height h above the ground with a deflection angle (the angle symbol appears only as an equation image in the source). A point P has coordinates [x, y, z]^T in the road coordinate system and [wu, wv, w]^T in the image coordinate system, and the correspondence is
[equation image: projection relation between [wu, wv, w]^T and [x, y, z]^T]
where K is the camera calibration matrix,
[equation image: form of the calibration matrix K]
r is a rotation matrix, and R is a rotation matrix,
Figure BDA0001167721590000047
i is an identity matrix, [ I ]3×3|-T]Is a cascade of I and T, [0,0, h ═]TThe z coordinate of the road surface in the road surface coordinate system is 0, (u)c,vc) The position of a principal point of an image coordinate plane is shown, f is the effective focal length of parameters in the camera, and the coordinate relation of the point projection of the image in a world coordinate system is as follows:
[equation images: the two coordinate-projection formulas for the world coordinates]
After the internal and external parameters are obtained, the lane-plane vanishing line is obtained.
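As a rough illustration of step 1, the sketch below (Python) projects road-plane points through a pinhole model with a pitch-only mounting and shows how distant points converge to the vanishing-line row. The numeric values and the pitch-only rotation are illustrative assumptions, not parameters taken from the patent.

```python
import numpy as np

# Hypothetical intrinsics and mounting geometry (illustrative values, not from the patent)
f, uc, vc = 800.0, 640.0, 360.0      # effective focal length and principal point (pixels)
h, pitch = 1.5, np.deg2rad(8.0)      # camera height above the road (m), downward pitch (rad)

K = np.array([[f, 0.0, uc],
              [0.0, f, vc],
              [0.0, 0.0, 1.0]])

# Rotation from road coordinates (x right, y forward, z up) to camera coordinates
# (x right, y down, z along the optical axis), assuming a pitch-only mounting.
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -np.sin(pitch), -np.cos(pitch)],
              [0.0,  np.cos(pitch), -np.sin(pitch)]])
C = np.array([0.0, 0.0, h])          # camera position in road coordinates

def project(p_road):
    """Project a road-plane point [x, y, 0] into pixel coordinates (u, v)."""
    p_cam = R @ (p_road - C)
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Points far ahead on the lane converge toward the vanishing-line row vc - f*tan(pitch).
for y in (10.0, 100.0, 1000.0):
    u, v = project(np.array([0.0, y, 0.0]))
    print(f"y = {y:6.0f} m -> (u, v) = ({u:.1f}, {v:.1f})")
print("vanishing-line row:", vc - f * np.tan(pitch))
```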
Step 2, graying is performed with channel weights in the ratio 5:4:1, and the region of interest is divided below the lane-plane vanishing line: the 1/3 of the area nearest the vanishing line serves as region of interest I, and the remaining 2/3 serves as region of interest II, as shown in fig. 3.
Step 3, median filtering is carried out on the image.
In order to remove the salt-and-pepper noise of the image acquired by the camera while effectively preserving details and edges, median filtering is adopted for denoising:
[equation image: median-filter formula], where n is an even number.
The present embodiment selects 5 × 5 median filtering as the image enhancement method.
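A minimal preprocessing sketch for steps 2 and 3 follows; it assumes the 5:4:1 weights apply to the R, G and B channels in that order (the patent states only the ratio) and that the vanishing-line row is already known.

```python
import cv2
import numpy as np

def preprocess(bgr, horizon_row):
    """Grayscale conversion, ROI split below the vanishing line, and 5x5 median filtering.

    The 5:4:1 weights are applied here as R:G:B, which is an assumption; the patent
    states only the ratio, not the channel order.
    """
    b, g, r = cv2.split(bgr.astype(np.float32))
    gray = (0.5 * r + 0.4 * g + 0.1 * b).clip(0, 255).astype(np.uint8)

    below = gray[horizon_row:, :]        # image strip below the lane-plane vanishing line
    split = below.shape[0] // 3
    roi_far = below[:split, :]           # region of interest I (far field, upper 1/3)
    roi_near = below[split:, :]          # region of interest II (near field, lower 2/3)

    # 5x5 median filtering, as selected in this embodiment
    return cv2.medianBlur(roi_far, 5), cv2.medianBlur(roi_near, 5)
```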
Step 4, extracting the texture features of the lane lines from the preprocessed image by Gabor transformation.
Because the Gabor filter is sensitive to image edges, provides good orientation and scale selectivity, and is insensitive to illumination changes, Gabor wavelets are used to extract the texture features of the lane lines:
[equation image: two-dimensional Gabor filter function]
where the coordinate vector, the center frequency of the filter, the direction of the kernel function and the two-dimensional Gabor filter function itself are denoted by symbols shown only as equation images, and σ is the standard deviation.
The road texture features are obtained by convolving the image with the Gabor filter in the spatial domain:
[equation image: spatial convolution of the grayscale image with the Gabor filter]
where F_g is the filtered image, J is the image after graying, and G is the Gabor filter.
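The sketch below illustrates step 4 with OpenCV's built-in Gabor kernels convolved in the spatial domain; the kernel size, sigma, wavelength and orientations are assumed values, since the patent gives its filter parameters only in the equation image.

```python
import cv2
import numpy as np

def gabor_texture(gray):
    """Texture feature map from a small bank of Gabor filters (spatial-domain convolution).

    Kernel size, sigma, wavelength and orientations are illustrative choices; the patent's
    exact filter parameters appear only in the equation image.
    """
    responses = []
    for theta in np.deg2rad([0, 30, 60, 90, 120, 150]):
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=float(theta),
                                    lambd=10.0, gamma=0.5, psi=0.0)
        kernel /= np.abs(kernel).sum()               # L1-normalize the kernel
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    # Keep the strongest response per pixel as the texture feature map F_g
    return np.max(np.stack(responses), axis=0)
```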
Step 5, describing the edge and structural characteristics of the lane line by adopting multi-angle Haar features, and classifying and identifying by using an Adaboost classifier.
In addition to the 0-degree, 90-degree and 45-degree orientations of traditional Haar features, the invention adds 30-degree and 60-degree tilted features and places a non-tilted rectangle around each 30-degree, 45-degree and 60-degree tilted rectangle so that its four sides pass through the four vertices of the tilted rectangle, as shown in FIG. 2. Given a window of L × W pixels and a Haar feature H(x, y, l, w, α) with vertex coordinates (x, y), rectangular region length l, width w and tilt angle α, the length l and width w of the rectangle rotated by 30 degrees are computed as follows:
[equation image: expressions for l and w of the rotated rectangle]
where a is the length of the rotated rectangle and b is its width. The integer obtained by rounding
[equation image: scaling-coefficient expression]
is taken as the scaling coefficient to obtain the number of Haar features of the 30-degree tilted rectangle, and the formula
[equation image: count of Haar feature rectangles containing lane line edge features]
is then used to calculate the number of Haar feature rectangles containing lane line edge features, where
[equation image: rectangle-sum expression]
RecSum(x, y) is the integral-image calculation formula.
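For the upright (0-degree) case, the integral-image bookkeeping behind RecSum and a two-rectangle Haar edge response can be sketched as follows; the 30-, 45- and 60-degree tilted variants would additionally require rotated integral images, which this sketch does not implement.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column pad, so each RecSum is a 4-corner lookup."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rec_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle whose top-left corner is (x, y)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_edge_feature(ii, x, y, w, h):
    """Upright two-rectangle (left minus right) Haar-like edge response at (x, y)."""
    half = w // 2
    return rec_sum(ii, x, y, half, h) - rec_sum(ii, x + half, y, w - half, h)
```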
Based on the AD-Adaboost algorithm, the weight of each weak classifier is set with a new parameter calculation formula; the weight is related not only to the false alarm rate but also to the ability to recognize positive samples. Given a training sample set (x_1, y_1), (x_2, y_2), ..., (x_N, y_N), where y_i = 0 for negative samples and y_i = 1 for positive samples, the weights are initialized to w_1(i) = 1/(2q) for negative samples and w_1(i) = 1/(2p) for positive samples, q and p being the numbers of negative and positive samples respectively. The algorithm constructs a strong classifier through T cycles. For t = 1, 2, ..., T, the classification error of weak classifier h_j is computed as
[equation image: classification error ε_t of weak classifier h_j]
The weak classifier H_t with the minimum current classification error ε_t is selected. For H_t, the sum of positive weights is calculated:
[equation image: sum of positive-sample weights for H_t]
The weights are then updated:
[equation image: weight-update formula]
where the weak classifier weight parameter β_t is:
[equation image: expression for β_t]
The weights for the next cycle are then normalized:
[equation image: weight-normalization formula]
where Z_t is a normalization factor,
[equation image: definition of Z_t]
The T weak classifiers obtained are then combined into a strong classifier:
[equation image: strong classifier h(x)]
where θ is the discrimination threshold of the classification error rate and α_t = -log β_t.
When h(x) = 1, the pixel point x is a lane line feature point; when h(x) = 0, x is not a lane line feature point. In the image f(x, y), the pixel points for which h(x) = 0 are removed, and the remaining pixel points are the lane line feature points.
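The training loop below is a sketch of classic discrete AdaBoost over threshold stumps with the same 1/(2q), 1/(2p) weight initialization; because the patent's AD-Adaboost weight parameter is given only as an equation image, the standard beta = err/(1 - err) is used here instead and is not the patent's formula.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=50):
    """Discrete AdaBoost over threshold stumps (sketch).

    X: (N, D) feature matrix (e.g. multi-angle Haar responses); y: labels in {0, 1}.
    Weights start at 1/(2q) and 1/(2p) for the q negative and p positive samples, as in
    the patent; the weak-classifier weight below is the classic beta = err/(1 - err),
    not the patent's AD-Adaboost variant.
    """
    n, d = X.shape
    p, q = int((y == 1).sum()), int((y == 0).sum())
    w = np.where(y == 1, 1.0 / (2 * p), 1.0 / (2 * q))
    stumps, alphas = [], []
    for _ in range(n_rounds):
        w /= w.sum()
        best = None
        for j in range(d):                       # pick the stump with minimum weighted error
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = (pol * (X[:, j] - thr) > 0).astype(int)
                    err = float(np.sum(w * (pred != y)))
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        beta = max(err, 1e-10) / max(1.0 - err, 1e-10)
        w *= beta ** (pred == y)                 # shrink weights of correctly classified samples
        stumps.append((j, thr, pol))
        alphas.append(-np.log(beta))
    return stumps, np.array(alphas)

def strong_classify(x, stumps, alphas, theta=0.5):
    """h(x) = 1 when the alpha-weighted vote reaches theta times the total alpha."""
    votes = np.array([float(pol * (x[j] - thr) > 0) for j, thr, pol in stumps])
    return int(votes @ alphas >= theta * alphas.sum())
```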
Step 6, designing a hyperbolic curve combined model aiming at the structured road.
In order to adapt to curved roads and straight roads, as shown in fig. 3, a hyperbolic model is established in the design:
[equation image: hyperbolic lane model relating u and v]
where (u, v) is a point on a lane line in the image, h is the coordinate of the lane vanishing point on the v axis of the image, and k, b and c are the parameters of the linear-hyperbolic model: k is the curvature, b is the relative direction of the lane line, and c is the distance between the lane line and the v axis. Region of interest I is the far field and is treated as a hyperbola, while region of interest II is the near field and is approximated by a straight line; the left and right lane lines share the same parameters k and c but differ in b, and when k = 0 the formula represents a straight line.
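The model formula itself appears above only as an equation image. A common parameterization consistent with the stated variables, and with k = 0 reducing to a straight line, is u = k/(v - h) + b(v - h) + c; the sketch below evaluates that assumed form with illustrative parameter values.

```python
def lane_u(v, k, b, c, h, far_field=True):
    """Assumed hyperbolic combined model: u = k/(v - h) + b*(v - h) + c.

    far_field=True keeps the curvature term (region of interest I);
    far_field=False drops it, giving the straight-line near-field approximation
    used for region of interest II.
    """
    if v <= h:
        raise ValueError("model applies to image rows below the vanishing point (v > h)")
    curvature = k / (v - h) if far_field else 0.0
    return curvature + b * (v - h) + c

# Left and right lane lines share k and c and differ only in b (illustrative values):
left = [lane_u(v, k=300.0, b=-0.8, c=640.0, h=360.0) for v in range(400, 720, 40)]
right = [lane_u(v, k=300.0, b=0.8, c=640.0, h=360.0) for v in range(400, 720, 40)]
```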
Step 7, estimating the lane model parameters by using an improved RANSAC algorithm.
To estimate the lane model parameters and fit a model that accurately describes the lane boundaries, the design uses an improved RANSAC algorithm for lane line fitting. To address the long running time of the traditional algorithm, the improved algorithm adopts a pre-detection step to increase speed: 4 points are selected each time, of which 3 are used to fit the model and obtain its parameters, and the remaining point is used for model matching by judging whether it lies on the model. If it does not, the sample is discarded and reselected; if it does, the model is kept as a candidate model.
The remaining feature points are then searched for model matching; if the number of points supporting the candidate model is greater than or equal to a set threshold value, the candidate model is the target model. If the number of supporting points is less than the set threshold value, the candidate model is discarded and another 4 points are searched.
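A sketch of the pre-detection RANSAC loop described above is given below; the model-fitting and residual functions are passed in as callables, and the tolerance and support threshold are illustrative values, not taken from the patent.

```python
import numpy as np

def ransac_predetect(points, fit, residual, inlier_tol=2.0, support_thresh=200, max_iter=500):
    """Improved RANSAC with a pre-detection step (sketch).

    points: (N, 2) lane feature points; fit(sample3) returns model parameters from 3 points;
    residual(model, pt) returns the distance of a point from the model. Tolerance and
    support threshold are illustrative, not values from the patent.
    """
    n = len(points)
    rng = np.random.default_rng(0)
    for _ in range(max_iter):
        idx = rng.choice(n, size=4, replace=False)
        model = fit(points[idx[:3]])                      # 3 points design the model
        if residual(model, points[idx[3]]) > inlier_tol:  # 4th point pre-detects the model
            continue                                      # not on the model: discard and reselect
        support = sum(residual(model, p) <= inlier_tol for p in points)
        if support >= support_thresh:                     # enough supporting points: target model
            return model
    return None
```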
The invention combines the advantages of each algorithm, is able to identify lane lines on complex roads, and achieves high accuracy and good real-time performance.

Claims (4)

1. A method for identifying highway sections and urban road lane lines is characterized by comprising the following steps:
step 1, acquiring a lane image through a camera, and obtaining internal and external parameters of the camera by using a camera parameter self-calibration method to obtain a lane plane vanishing line;
step 2, carrying out gray processing on the lane image and dividing an interested area;
step 3, performing median filtering on the image obtained in the step 2;
step 4, extracting the texture features of the lane lines from the image after median filtering by Gabor transformation;
step 5, describing lane line edge characteristics by adopting multi-angle Haar characteristics, and performing classification and identification by using an Adaboost classifier; the method specifically comprises the following steps:
step 5-1, adding 30-degree and 60-degree tilted features to the 0-degree, 90-degree and 45-degree orientations used by the Haar features, and placing a non-tilted rectangle around each 30-degree, 45-degree and 60-degree tilted rectangle so that its four sides pass through the four vertices of the tilted rectangle;
assuming a window of L × W pixels and a Haar feature H(x, y, l, w, α) with vertex coordinates (x, y), rectangular region length l, width w and tilt angle α, the length l and width w of the rotated rectangle are given by
[equation image: expressions for l and w of the rotated rectangle]
where a is the length of the rotated rectangle and b is its width;
the integer obtained by rounding
[equation image: scaling-coefficient expression]
is taken as the scaling coefficient to obtain the number of Haar features of the tilted rectangle, and the formula
[equation image: count of Haar feature rectangles containing lane line edge features]
is used to calculate the number of Haar feature rectangles containing lane line edge features, where
[equation image: rectangle-sum expression]
RecSum(x, y) is the integral-image calculation formula;
step 5-2, setting the weight of the weak classifier based on an AD-Adaboost algorithm by using a parameter calculation formula, wherein the weight is related not only to the false alarm rate but also to the ability to identify positive samples;
given a training sample set (x_1, y_1), (x_2, y_2), ..., (x_N, y_N), where y_i = 0 for negative samples and y_i = 1 for positive samples, the weights are initialized to w_1(i) = 1/(2q) for negative samples and w_1(i) = 1/(2p) for positive samples, q and p being the numbers of negative and positive samples respectively; the algorithm constructs a strong classifier through T cycles, where for t = 1, 2, ..., T the classification error of weak classifier h_j is computed as
[equation image: classification error ε_t of weak classifier h_j]
the weak classifier H_t with the minimum current classification error ε_t is selected;
for H_t, the sum of positive weights is calculated:
[equation image: sum of positive-sample weights for H_t]
the weights are updated:
[equation image: weight-update formula]
where the weak classifier weight parameter is
[equation image: expression for the weak classifier weight parameter β_t]
the weights for the next cycle are normalized:
[equation image: weight-normalization formula]
where Z_t is a normalization factor,
[equation image: definition of Z_t]
the T weak classifiers obtained are combined into a strong classifier:
[equation image: strong classifier h(x)]
where θ is the discrimination threshold of the classification error rate and α_t = -log β_t;
when h(x) = 1, the pixel point x is a lane line feature point, and when h(x) = 0, x is not a lane line feature point; in the image f(x, y), the pixel points for which h(x) = 0 are removed, and the retained pixel points are the lane line feature points;
step 6, designing a hyperbolic curve combination model aiming at the structured road;
step 7, estimating parameters of the hyperbolic curve combined model by using an improved RANSAC algorithm; the method specifically comprises the following steps:
finding 4 points each time, wherein 3 points are used to fit the model and obtain the model parameters, and the remaining point is used for model matching by judging whether it lies on the model; if not, the sample is discarded and reselected; if so, the model is taken as a candidate model; the remaining points are then searched for model matching, and if the number of points supporting the candidate model is greater than or equal to a set threshold value, the candidate model is the target model; if the number of points supporting the candidate model is less than the set threshold value, the candidate model is discarded and another 4 points are searched.
2. The highway section and urban road lane line identification method according to claim 1, wherein in step 1, an image coordinate system and a world coordinate system are established, a point coordinate corresponding relation between the two coordinate systems is designed, internal parameters are determined according to a calibration process of a camera, position parameters of the lane line in space coordinates are calculated according to the coordinate corresponding relation, and then the position of a lane vanishing line is obtained according to internal and external parameters.
3. The method for identifying highway section and urban road lane lines according to claim 1, wherein in step 2, the collected RGB image is converted to grayscale using channel weights in the ratio 5:4:1.
4. The method of claim 1, wherein the region of interest is defined below the vanishing line: the 1/3 of the area immediately below the vanishing line is defined as region of interest I, and the remaining 2/3 is defined as region of interest II.
CN201611084618.6A 2016-11-30 2016-11-30 Method for identifying highway section and urban road lane line Active CN106778551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611084618.6A CN106778551B (en) 2016-11-30 2016-11-30 Method for identifying highway section and urban road lane line

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611084618.6A CN106778551B (en) 2016-11-30 2016-11-30 Method for identifying highway section and urban road lane line

Publications (2)

Publication Number Publication Date
CN106778551A CN106778551A (en) 2017-05-31
CN106778551B true CN106778551B (en) 2020-07-31

Family

ID=58913550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611084618.6A Active CN106778551B (en) 2016-11-30 2016-11-30 Method for identifying highway section and urban road lane line

Country Status (1)

Country Link
CN (1) CN106778551B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451526A (en) * 2017-06-09 2017-12-08 蔚来汽车有限公司 The structure of map and its application
CN108229386B (en) * 2017-12-29 2021-12-14 百度在线网络技术(北京)有限公司 Method, apparatus, and medium for detecting lane line
CN108573242A (en) * 2018-04-26 2018-09-25 南京行车宝智能科技有限公司 A kind of method for detecting lane lines and device
CN109670455A (en) * 2018-12-21 2019-04-23 联创汽车电子有限公司 Computer vision lane detection system and its detection method
CN109784234B (en) * 2018-12-29 2022-01-07 阿波罗智能技术(北京)有限公司 Right-angled bend identification method based on forward fisheye lens and vehicle-mounted equipment
CN110390483B (en) * 2019-07-24 2022-07-19 东南大学 Method for evaluating influence of bicycle express way on bus running speed
CN113015887A (en) * 2019-10-15 2021-06-22 谷歌有限责任公司 Navigation directions based on weather and road surface type
CA3196453A1 (en) * 2020-10-22 2022-04-28 Daxin LUO Lane line detection method and apparatus
CN113822226A (en) * 2021-10-15 2021-12-21 江西锦路科技开发有限公司 Deep learning-based lane line detection method in special environment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015105239A1 (en) * 2014-01-13 2015-07-16 삼성테크윈 주식회사 Vehicle and lane position detection system and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592114B (en) * 2011-12-26 2013-07-31 河南工业大学 Method for extracting and recognizing lane line features of complex road conditions
KR20140006463A (en) * 2012-07-05 2014-01-16 현대모비스 주식회사 Method and apparatus for recognizing lane
CN105224909A (en) * 2015-08-19 2016-01-06 奇瑞汽车股份有限公司 Lane line confirmation method in lane detection system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015105239A1 (en) * 2014-01-13 2015-07-16 삼성테크윈 주식회사 Vehicle and lane position detection system and method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Lane detection and tracking using B-Snake; Yue Wang et al.; Image and Vision Computing, vol. 22, no. 4, pp. 269-280; 2004-04-30 *
Robust lane detection and tracking with RANSAC and Kalman filter; Amol Borkar et al.; 2009 16th IEEE International Conference on Image Processing, pp. 3261-3264; 2010-02-17 *
Driver fatigue detection based on the Adaboost algorithm (基于Adaboost算法的驾驶员疲劳驾驶检测); 熊池亮; China Masters' Theses Full-text Database, Engineering Science and Technology II, vol. 2014, no. 1, C034-159; 2014-01-15 *

Also Published As

Publication number Publication date
CN106778551A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106778551B (en) Method for identifying highway section and urban road lane line
CN107330376B (en) Lane line identification method and system
US11580647B1 (en) Global and local binary pattern image crack segmentation method based on robot vision
CN107862667B (en) Urban shadow detection and removal method based on high-resolution remote sensing image
CN107767383B (en) Road image segmentation method based on superpixels
CN110232389B (en) Stereoscopic vision navigation method based on invariance of green crop feature extraction
Yuan et al. Robust lane detection for complicated road environment based on normal map
WO2015010451A1 (en) Method for road detection from one image
CN108280450A (en) A kind of express highway pavement detection method based on lane line
CN109919960B (en) Image continuous edge detection method based on multi-scale Gabor filter
Zhang et al. Vehicle recognition algorithm based on Haar-like features and improved Adaboost classifier
Li et al. Road lane detection with gabor filters
CN104036523A (en) Improved mean shift target tracking method based on surf features
CN108171695A (en) A kind of express highway pavement detection method based on image procossing
CN107492076B (en) Method for suppressing vehicle shadow interference in expressway tunnel scene
CN110021029B (en) Real-time dynamic registration method and storage medium suitable for RGBD-SLAM
CN110580705B (en) Method for detecting building edge points based on double-domain image signal filtering
CN109886168B (en) Ground traffic sign identification method based on hierarchy
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN107563301A (en) Red signal detection method based on image processing techniques
CN109767442B (en) Remote sensing image airplane target detection method based on rotation invariant features
CN111652033A (en) Lane line detection method based on OpenCV
CN108520252B (en) Road sign identification method based on generalized Hough transform and wavelet transform
CN107977608B (en) Method for extracting road area of highway video image
CN113053164A (en) Parking space identification method using look-around image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant