CN111126306A - Lane line detection method based on edge features and sliding window - Google Patents


Info

Publication number
CN111126306A
Authority
CN
China
Prior art keywords
image
lane line
pixel
search
pixel value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911364127.0A
Other languages
Chinese (zh)
Inventor
袁国慧 (Yuan Guohui)
周祥东 (Zhou Xiangdong)
张文超 (Zhang Wenchao)
彭真明 (Peng Zhenming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Rothwell Electric Co ltd
Original Assignee
Jiangsu Rothwell Electric Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Rothwell Electric Co ltd filed Critical Jiangsu Rothwell Electric Co ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/247 Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids

Abstract

The invention discloses a lane line detection method based on edge features and a sliding window, comprising the following steps: S1, shooting several checkerboard pictures from different angles with a vehicle-mounted camera, calculating the distortion coefficients with Zhang's calibration method, and correcting the image to be detected to obtain a corrected image; S2, performing perspective transformation on the corrected image to obtain a road surface top view; S3, extracting HLS color features from the road surface top view, extracting edge features with the Sobel operator, and combining the two kinds of features to segment the lane lines; S4, adopting a quadratic curve as the lane line model and solving its parameters by sliding-window polynomial fitting; S5, drawing the lane lines fitted by the second-order polynomial and back-projecting them onto the corrected image by inverse perspective transformation, so that the lane line detection result is visualized on the corrected image. The invention balances real-time performance and robustness and realizes lane line detection simply and quickly.

Description

Lane line detection method based on edge features and sliding window
Technical Field
The invention relates to the technical field of automatic driving image processing, in particular to a lane line detection method based on edge features and a sliding window.
Background
With the rapid development of the automobile industry and highway construction, traffic safety has become an increasingly serious problem. Assisting vehicle driving systems with computer technology is a trend of future development, and making vehicles intelligent is the gradual path toward automatic driving. Lane line identification and detection based on visual information is an important problem in the field of automatic driving: lane lines must be separated from the road background quickly and accurately in the video or digital images captured by a vehicle-mounted camera, so that lane line position and trend information can then be obtained.
At present, existing lane line detection methods at home and abroad can be divided mainly into model-based methods and feature-based methods. Model-based methods obtain the pixels lying on lane lines in a lane line feature map, assume a lane line curve model such as a straight line, a quadratic curve, or a hyperbola, fit the model parameters using Hough transformation or similar techniques, and derive lane line position and trend information from the fitted parameters. Their drawback is that the lane line model requires many, often complex parameters; they generalize poorly to more complex lane lines, the algorithm design becomes considerably more involved, and both robustness and real-time performance suffer to some extent. Feature-based methods instead obtain a lane line edge feature image from the original image using lane line color features, edge features, and the like, divide the image into search areas to locate candidate lane line points in each area, and judge lane line position and trend information from the coordinates of those points. Their advantage is a simple algorithm design: accuracy and real-time performance can still be ensured at low algorithmic complexity, and the approach suits more complex lane lines for which curve models are difficult to establish.
Disclosure of Invention
To address the defects of existing algorithms, namely numerous and complex model parameters and poor real-time performance, the invention provides a lane line detection method based on edge features and a sliding window.
To this end, the invention adopts the following technical scheme:
A lane line detection method based on edge features and a sliding window comprises the following steps:
S1: shooting several checkerboard pictures from different angles with a vehicle-mounted camera, calculating the distortion coefficients with Zhang's calibration method, and performing distortion correction on the image to be detected shot by the vehicle-mounted camera according to the distortion coefficients to obtain a corrected image;
s2: carrying out perspective transformation on the corrected image to obtain a top view of the road surface;
s3: HLS color features are extracted from a top view of the pavement, edge features are extracted by using a Sobel operator, and the two features are combined to realize lane line segmentation;
S4: adopting a quadratic curve as the lane line model and obtaining the quadratic curve parameters of the lane lines by sliding-window polynomial fitting;
S5: drawing the lane lines fitted by the second-order polynomial according to the quadratic curve parameters, and back-projecting them onto the corrected image by inverse perspective transformation, so that the lane line detection result is visualized on the corrected image.
Further, the S1 specifically includes the following steps:
s1.1: setting a checkerboard calibration plate, and shooting 10 distorted checkerboard images from different angles and distances by moving a vehicle-mounted camera;
s1.2: finding the checkerboard angular points in each distorted checkerboard image, and enabling the position information of the distorted checkerboard angular points in the pixel coordinate system to be in one-to-one correspondence with the position information of the checkerboard angular points in the world coordinate system;
S1.3: calibrating the camera with Zhang's calibration method using the corner point information obtained in S1.2 to obtain the intrinsic matrix mtx and the distortion coefficient matrix dist of the vehicle-mounted camera;
S1.4: performing distortion correction on the image A1 to be detected shot by the vehicle-mounted camera according to the intrinsic matrix mtx and the distortion coefficient matrix dist obtained in S1.3 to obtain the corrected image A2.
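Steps S1.1 to S1.3 are usually delegated to a calibration library, so only the correction step S1.4 is sketched below: a minimal NumPy sketch that inverts a two-coefficient radial distortion model x_d = x·(1 + k1·r² + k2·r⁴) by fixed-point iteration. The function name `undistort_points`, the restriction to two radial coefficients, and the iteration count are illustrative assumptions, not details given in the patent.

```python
import numpy as np

def undistort_points(pts, k1, k2, fx, fy, cx, cy, iters=5):
    """Invert the radial distortion model x_d = x * (1 + k1*r^2 + k2*r^4).

    pts: (N, 2) array of distorted pixel coordinates. fx, fy, cx, cy would
    come from the intrinsic matrix mtx, and k1, k2 from the distortion
    matrix dist obtained in S1.3 (hypothetical parameter names).
    """
    # Normalize pixel coordinates using the intrinsics.
    x_d = (pts[:, 0] - cx) / fx
    y_d = (pts[:, 1] - cy) / fy
    x, y = x_d.copy(), y_d.copy()
    # Fixed-point iteration: repeatedly divide out the distortion factor
    # evaluated at the current undistorted estimate.
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x = x_d / factor
        y = y_d / factor
    # Map the undistorted normalized coordinates back to pixels.
    return np.stack([x * fx + cx, y * fy + cy], axis=1)
```

In practice a full implementation would also handle tangential coefficients and operate on the whole image grid rather than on individual points.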
Further, the S2 specifically includes the following steps:
S2.1: acquiring the position coordinate array src of four key pixel points in the corrected image A2 and the position coordinate array dst of the positions of those four points after the image is warped;
S2.2: performing the perspective transformation calculation with the arrays src and dst from S2.1 to obtain the warp matrix M for the perspective transformation and the inverse warp matrix Minv for restoring the image;
S2.3: performing perspective transformation on the corrected image A2 according to the warp matrix M from S2.2 to obtain the road surface top view A3.
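The warp matrix M of S2.2 can be computed from the four src/dst point pairs by solving a small linear system. The NumPy sketch below fixes M[2,2] = 1 and solves for the remaining eight unknowns; the names `perspective_matrix` and `warp_point` and the example corner coordinates are our assumptions, one plausible implementation rather than the patent's.

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 warp matrix M with dst ~ M @ src (homogeneous).

    src, dst: (4, 2) arrays of corresponding points, as in the arrays
    src and dst of S2.1. Each correspondence contributes two linear
    equations in the eight unknown entries of M (M[2,2] is fixed to 1).
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(M, pt):
    """Apply the warp matrix to one point, with perspective divide."""
    x, y, w = M @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

The inverse warp matrix Minv of S2.2 is then simply `np.linalg.inv(M)`; warping a whole image additionally requires resampling, which is what a library routine would do.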
Further, the S3 specifically includes the following steps:
S3.1: converting the road surface top view A3 from the RGB color space to the HLS color space and binarizing the S and L channels by threshold filtering: the pixel value at a pixel satisfying the threshold condition is set to 1 and to 0 otherwise, which yields channel information P1 under the threshold conditions, where the threshold condition of the S channel is [140, 255] and that of the L channel is [120, 255];
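As a rough illustration of the S3.1 thresholding, the following NumPy sketch computes the L and S channels of the HLS space directly, scaled to 0-255, and applies the two threshold windows. The conversion formulas are the standard HLS definitions and the helper name `hls_threshold` is our assumption.

```python
import numpy as np

def hls_threshold(rgb, s_range=(140, 255), l_range=(120, 255)):
    """Binary mask P1 from S- and L-channel thresholds, as in S3.1.

    rgb: (H, W, 3) uint8 image. A pixel is set to 1 only when both
    the S and the L value fall inside their threshold windows.
    """
    c = rgb.astype(np.float64) / 255.0
    cmax, cmin = c.max(axis=2), c.min(axis=2)
    # Standard HLS lightness and saturation.
    l = (cmax + cmin) / 2.0
    denom = np.where(l < 0.5, cmax + cmin, 2.0 - cmax - cmin)
    s = np.where(cmax == cmin, 0.0, (cmax - cmin) / np.maximum(denom, 1e-12))
    # Rescale both channels to the 0-255 range used by the thresholds.
    l8, s8 = l * 255.0, s * 255.0
    mask = ((s8 >= s_range[0]) & (s8 <= s_range[1]) &
            (l8 >= l_range[0]) & (l8 <= l_range[1]))
    return mask.astype(np.uint8)
```

Saturated lane paint (white or yellow) passes both windows, while low-saturation or dark asphalt is suppressed.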
S3.2: converting the road surface top view A3 into a gray-scale image A4 and performing edge extraction on A4 with the Sobel operator;
wherein the horizontal Sobel operator is

$$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$$

and the vertical Sobel operator is

$$S_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$$
S3.3: performing convolution operation on the image A4 by using a transverse Sobel operator to obtain a transverse gradient gx of the image A4, normalizing the gx to [0, 255], performing binarization processing by applying threshold filtering, setting the pixel value at the pixel point meeting a threshold condition to be 1, and setting the pixel value to be 0 if the pixel value is not met, so as to obtain channel information P2 under the threshold condition, wherein the threshold condition is [25, 200 ];
S3.4: convolving the image A4 with the vertical Sobel operator to obtain the vertical gradient gy of A4, normalizing gy to [0, 255], and binarizing by threshold filtering: the pixel value at a pixel satisfying the threshold condition is set to 1 and to 0 otherwise, which yields channel information P3 under the threshold condition [25, 200];
S3.5: calculating the gradient magnitude mag and the gradient direction dir from the horizontal gradient gx and the vertical gradient gy of image A4 obtained in S3.3 and S3.4:

$$mag = \sqrt{g_x^2 + g_y^2}, \qquad dir = \arctan\!\left(\frac{|g_y|}{|g_x|}\right)$$
mag is further normalized to [0, 255] and binarized by threshold filtering: the pixel value at a pixel satisfying the threshold condition is set to 1 and to 0 otherwise, which yields channel information P4 under the threshold condition [30, 100]; dir is binarized by threshold filtering in the same way, which yields channel information P5 under the threshold condition [0.8, 1.2];
S3.6: stacking the channel information P1, P2, P3, P4, and P5 to obtain channel information P6: for any pixel, P6 is 1 if P2 = 1 and P3 = 1, or if P4 = 1 and P5 = 1; the pixel values of P1 and P6 are then combined by a binary OR, the pixel value being set to 1 when at least one of P1 and P6 is 1, which yields the binary image, that is, the lane line edge feature image A5.
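Steps S3.2 to S3.6 can be sketched end-to-end in NumPy. The sketch below uses a "valid" convolution (no padding, so no border artifacts), normalizes each gradient map to [0, 255] by its own maximum, and combines the binary channels into P6 of S3.6; the final OR with the color mask P1 would be one further `np.logical_or`. The function names and the valid-only convolution are our assumptions, not the patent's implementation.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)

def conv2_valid(img, k):
    """3x3 cross-correlation, 'valid' region only."""
    h, w = img.shape[0] - 2, img.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h, j:j + w].astype(float)
    return out

def norm255(a):
    """Normalize absolute values to the [0, 255] range used by the thresholds."""
    a = np.abs(a)
    return 255.0 * a / max(a.max(), 1e-12)

def edge_mask(gray, gxy_t=(25, 200), mag_t=(30, 100), dir_t=(0.8, 1.2)):
    """Channel P6 of S3.6: (P2 & P3) | (P4 & P5) from gradient thresholds."""
    gx, gy = conv2_valid(gray, SOBEL_X), conv2_valid(gray, SOBEL_Y)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    direction = np.arctan2(np.abs(gy), np.abs(gx))   # dir in [0, pi/2]
    p2 = (norm255(gx) >= gxy_t[0]) & (norm255(gx) <= gxy_t[1])
    p3 = (norm255(gy) >= gxy_t[0]) & (norm255(gy) <= gxy_t[1])
    p4 = (norm255(mag) >= mag_t[0]) & (norm255(mag) <= mag_t[1])
    p5 = (direction >= dir_t[0]) & (direction <= dir_t[1])
    return ((p2 & p3) | (p4 & p5)).astype(np.uint8)
```

On a linear intensity ramp f(i, j) = 30i + 20j the Sobel responses are constant (gx = 160, gy = 240), and the direction arctan(240/160) ≈ 0.98 rad falls inside the [0.8, 1.2] window.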
Further, the S4 specifically includes the following steps:
S4.1: dividing the image A5 down the middle into a left and a right search area along the x-axis direction, performing histogram statistics over the pixels of each search area along the x direction, and taking the histogram peak positions x1 and x2 as the search starting points of the two lane lines;
S4.2: obtaining the lane boundaries by sliding-window polynomial fitting: the width and height of a rectangular search window are set, the width manually and the height as the image height divided by the preset number of search windows, which is 9; for the left and right lane lines, the search starting points x1 and x2 are taken as the base points of the current search windows, and each window's bottom edge is centered on its base point to obtain the first search window;
S4.3: counting the non-zero pixels inside the first search window and taking the mean of their coordinates as the lane line center point of the current window; if the number m of non-zero pixels in the window exceeds the preset minimum number n, the mean abscissa of those non-zero pixels becomes the base-point abscissa of the generated second search window; if m does not reach n, the second window inherits the base-point abscissa of the first; in both cases the base-point ordinate of the second window is the ordinate of the first window's upper boundary;
S4.4: taking the generated second search window as the first search window of S4.3 and repeating S4.3 iteratively until the upper boundary of the generated search window reaches the upper boundary of the image A5; after the loop ends, a second-order polynomial is fitted through the lane line center points of all search windows to obtain the lane line curve parameters of the current search.
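A compact NumPy sketch of S4.1 to S4.4 follows. It takes the column histogram over the lower half of A5 to find the two starting points (the patent says only "histogram statistics"; the lower-half choice, the margin parameter, and the function name are our assumptions) and fits x as a second-order polynomial in y with `np.polyfit`, matching the quadratic lane model.

```python
import numpy as np

def sliding_window_fit(binary, n_windows=9, margin=40, minpix=30):
    """Sketch of S4: histogram start points, 9 sliding windows, 2nd-order fit.

    binary: (H, W) 0/1 lane-feature image (A5). Returns [left_fit, right_fit],
    each the coefficients of x = a*y^2 + b*y + c from np.polyfit.
    """
    h, w = binary.shape
    hist = binary[h // 2:, :].sum(axis=0)           # column histogram, lower half
    mid = w // 2
    bases = [int(np.argmax(hist[:mid])), mid + int(np.argmax(hist[mid:]))]
    win_h = h // n_windows
    nzy, nzx = binary.nonzero()                     # coordinates of non-zero pixels
    fits = []
    for base in bases:
        xs, ys = [], []
        x0 = base
        for k in range(n_windows):                  # walk windows bottom to top
            lo, hi = h - (k + 1) * win_h, h - k * win_h
            sel = ((nzy >= lo) & (nzy < hi) &
                   (nzx >= x0 - margin) & (nzx < x0 + margin))
            xs.append(nzx[sel]); ys.append(nzy[sel])
            if sel.sum() > minpix:                  # recentre on the pixel mean
                x0 = int(nzx[sel].mean())
        xs, ys = np.concatenate(xs), np.concatenate(ys)
        fits.append(np.polyfit(ys, xs, 2))
    return fits
```

Fitting x as a function of y (rather than y of x) keeps near-vertical lane lines single-valued.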
Further, the S5 specifically includes the following steps:
S5.1: creating a blank picture of the same size as the image A5 and drawing on it the second-order polynomial curves of the left and right lane lines according to the curve parameters fitted in S4.4;
s5.2: filling a polygon between the left and right curves with colors to obtain a picture A6;
S5.3: transforming the picture A6 back to the vehicle-mounted camera's view angle by inverse perspective transformation with the inverse warp matrix Minv to obtain the lane marking picture A7;
S5.4: overlaying the lane marking picture A7 on the corrected image A2 with a weight of 0.3, that is, the pixel value of each point in A7 multiplied by 0.3 is added to the pixel value of the corresponding point in A2, which visualizes the lane line detection result on the original image.
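The S5.4 blend is a one-liner: the sketch below adds 0.3 times the marking picture to the corrected image, with a clip to the 8-bit range that is our addition to keep the result a valid image (the function name is also our own).

```python
import numpy as np

def overlay_lanes(a2, a7, weight=0.3):
    """S5.4 blend: out = A2 + 0.3 * A7, clipped to the valid 8-bit range.

    a2: corrected image, a7: lane marking picture, both (H, W, 3) uint8.
    """
    out = a2.astype(np.float64) + weight * a7.astype(np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because A7 is black everywhere except the drawn lane region, the road scene stays unchanged and the markings appear as a semi-transparent overlay.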
The invention has the following beneficial effects:
1. The method calibrates the vehicle-mounted camera with Zhang's calibration method and corrects the image to be detected, which removes the influence of distortion. The corrected image is then perspective-transformed into a road surface top view, from which HLS color features and Sobel-based edge features of the lane lines are extracted and fused, reducing the interference of environmental noise and enabling accurate segmentation of the lane lines. The method remains highly robust when detecting lane lines in video images shot by different vehicle-mounted cameras.
2. The method adopts sliding-window polynomial fitting: the position of each search window is adjusted through a threshold condition on the number of valid pixels inside it, the lane line center points are located across the successive search windows, and a second-order polynomial is fitted through those center points to obtain lane line position and trend information. The lane lines are finally back-projected onto the corrected image to visualize the detection result, and robustness and accuracy of lane line detection are maintained even for more complex lane lines.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a schematic view of a checkerboard used for calibration of the vehicle camera of the present invention;
FIG. 3 is a schematic view of a road image taken directly by the vehicle-mounted camera of the present invention;
FIG. 4 is a schematic view of a corrected image of a road image captured by the on-board camera according to the present invention;
FIG. 5 is a schematic diagram of a top view of a road surface after a perspective transformation of a corrected image according to the present invention;
FIG. 6 is a schematic diagram of lane line edge features after color and edge feature combinations are extracted from a top view of a road surface according to the present invention;
FIG. 7 is a schematic view of a search window and a fitted lane line according to the present invention;
fig. 8 is a schematic view showing the lane line detection result visualization according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
As shown in fig. 1, the present embodiment provides a lane line detection method based on edge features and a sliding window, including the following steps:
S1: shooting several checkerboard pictures from different angles with a vehicle-mounted camera, calculating the distortion coefficients with Zhang's calibration method, and performing distortion correction on the image to be detected shot by the vehicle-mounted camera according to the distortion coefficients to obtain a corrected image;
s2: carrying out perspective transformation on the corrected image to obtain a top view of the road surface;
s3: HLS color features are extracted from a top view of the pavement, edge features are extracted by using a Sobel operator, and the two features are combined to realize lane line segmentation;
S4: adopting a quadratic curve as the lane line model and obtaining the quadratic curve parameters of the lane lines by sliding-window polynomial fitting;
S5: drawing the lane lines fitted by the second-order polynomial according to the quadratic curve parameters, and back-projecting them onto the corrected image by inverse perspective transformation, so that the lane line detection result is visualized on the corrected image.
Specifically, S1 includes the following steps:
s1.1: setting a checkerboard calibration plate, and shooting 10 distorted checkerboard images from different angles and distances by moving a vehicle-mounted camera, as shown in FIG. 2;
s1.2: finding the checkerboard angular points in each distorted checkerboard image, and enabling the position information of the distorted checkerboard angular points in the pixel coordinate system to be in one-to-one correspondence with the position information of the checkerboard angular points in the world coordinate system;
S1.3: calibrating the camera with Zhang's calibration method using the corner point information obtained in S1.2 to obtain the intrinsic matrix mtx and the distortion coefficient matrix dist of the vehicle-mounted camera;
S1.4: performing distortion correction on the image A1 to be detected (shown in figure 3) shot by the vehicle-mounted camera according to the intrinsic matrix mtx and the distortion coefficient matrix dist obtained in S1.3 to obtain the corrected image A2 (shown in figure 4) corresponding to figure 3.
Specifically, S2 includes the following steps:
S2.1: acquiring the position coordinate array src of four key pixel points in the corrected image A2 and the position coordinate array dst of the positions of those four points after the image is warped;
S2.2: performing the perspective transformation calculation with the arrays src and dst from S2.1 to obtain the warp matrix M for the perspective transformation and the inverse warp matrix Minv for restoring the image;
S2.3: performing perspective transformation on the corrected image A2 according to the warp matrix M from S2.2 to obtain the road surface top view A3 (as shown in fig. 5) corresponding to fig. 4.
Specifically, S3 includes the following steps:
S3.1: converting the road surface top view A3 from the RGB color space to the HLS color space and binarizing the S and L channels by threshold filtering: the pixel value at a pixel satisfying the threshold condition is set to 1 and to 0 otherwise, which yields channel information P1 under the threshold conditions, where the threshold condition of the S channel is [140, 255] and that of the L channel is [120, 255];
S3.2: converting the road surface top view A3 into a gray-scale image A4 and performing edge extraction on A4 with the Sobel operator;
wherein the horizontal Sobel operator is

$$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$$

and the vertical Sobel operator is

$$S_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$$
S3.3: performing convolution operation on the image A4 by using a transverse Sobel operator to obtain a transverse gradient gx of the image A4, normalizing the gx to [0, 255], performing binarization processing by applying threshold filtering, setting the pixel value at the pixel point meeting a threshold condition to be 1, and setting the pixel value to be 0 if the pixel value is not met, so as to obtain channel information P2 under the threshold condition, wherein the threshold condition is [25, 200 ];
S3.4: convolving the image A4 with the vertical Sobel operator to obtain the vertical gradient gy of A4, normalizing gy to [0, 255], and binarizing by threshold filtering: the pixel value at a pixel satisfying the threshold condition is set to 1 and to 0 otherwise, which yields channel information P3 under the threshold condition [25, 200];
S3.5: calculating the gradient magnitude mag and the gradient direction dir from the horizontal gradient gx and the vertical gradient gy of image A4 obtained in S3.3 and S3.4:

$$mag = \sqrt{g_x^2 + g_y^2}, \qquad dir = \arctan\!\left(\frac{|g_y|}{|g_x|}\right)$$
mag is further normalized to [0, 255] and binarized by threshold filtering: the pixel value at a pixel satisfying the threshold condition is set to 1 and to 0 otherwise, which yields channel information P4 under the threshold condition [30, 100]; dir is binarized by threshold filtering in the same way, which yields channel information P5 under the threshold condition [0.8, 1.2];
S3.6: stacking the channel information P1, P2, P3, P4, and P5 to obtain channel information P6: for any pixel, P6 is 1 if P2 = 1 and P3 = 1, or if P4 = 1 and P5 = 1; the pixel values of P1 and P6 are then combined by a binary OR, the pixel value being set to 1 when at least one of P1 and P6 is 1, which yields the binary image, that is, the lane line edge feature image A5 (as shown in fig. 6);
Specifically, S4 includes the following steps:
S4.1: dividing the image A5 down the middle into a left and a right search area along the x-axis direction, performing histogram statistics over the pixels of each search area along the x direction, and taking the histogram peak positions x1 and x2 as the search starting points of the two lane lines;
S4.2: obtaining the lane boundaries by sliding-window polynomial fitting: the width and height of a rectangular search window are set, the width manually and the height as the image height divided by the preset number of search windows, which is 9; for the left and right lane lines, the search starting points x1 and x2 are taken as the base points of the current search windows, and each window's bottom edge is centered on its base point to obtain the first search window;
S4.3: counting the non-zero pixels inside the first search window and taking the mean of their coordinates as the lane line center point of the current window; if the number m of non-zero pixels in the window exceeds the preset minimum number n, the mean abscissa of those non-zero pixels becomes the base-point abscissa of the generated second search window; if m does not reach n, the second window inherits the base-point abscissa of the first; in both cases the base-point ordinate of the second window is the ordinate of the first window's upper boundary;
S4.4: taking the generated second search window as the first search window of S4.3 and repeating S4.3 iteratively until the upper boundary of the generated search window reaches the upper boundary of the image A5, obtaining all the search windows as shown in FIG. 7; after the loop ends, a second-order polynomial is fitted through the lane line center points of all search windows to obtain the lane line curve parameters of the current search.
Specifically, S5 includes the following steps:
S5.1: creating a blank picture of the same size as the image A5 and drawing on it the second-order polynomial curves of the left and right lane lines according to the curve parameters fitted in S4.4, obtaining the left and right lane line curves shown in FIG. 7;
s5.2: filling a polygon between the left and right curves with colors to obtain a picture A6;
S5.3: transforming the picture A6 back to the vehicle-mounted camera's view angle by inverse perspective transformation with the inverse warp matrix Minv to obtain the lane marking picture A7;
S5.4: overlaying the lane marking picture A7 on the corrected image A2 with a weight of 0.3, that is, the pixel value of each point in A7 multiplied by 0.3 is added to the pixel value of the corresponding point in A2, as shown in fig. 8, which visualizes the lane line detection result on the original image.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (6)

1. A lane line detection method based on edge features and a sliding window is characterized by comprising the following steps:
s1: shooting a plurality of checkerboard pictures from different angles by using a vehicle-mounted camera, calculating distortion coefficients by using the Zhang calibration method, and carrying out distortion correction on an image to be detected shot by the vehicle-mounted camera according to the distortion coefficients to obtain a corrected image;
s2: carrying out perspective transformation on the corrected image to obtain a top view of the road surface;
s3: extracting HLS color features from the top view of the road surface, extracting edge features by using the Sobel operator, and combining the two kinds of features to realize lane line segmentation;
s4: adopting a quadratic curve as the lane line model, and obtaining the quadratic curve parameters of the lane lines by sliding-window polynomial fitting;
s5: drawing the quadratic-polynomial-fitted lane lines according to the quadratic curve parameters, and back-projecting them onto the corrected image by inverse perspective transformation, so as to realize the visualization of the lane line detection result on the corrected image.
2. The method according to claim 1, wherein the step S1 specifically includes the following steps:
s1.1: setting a checkerboard calibration plate, and shooting 10 distorted checkerboard images from different angles and distances by moving a vehicle-mounted camera;
s1.2: finding the checkerboard corner points in each distorted checkerboard image, and placing the positions of the distorted checkerboard corner points in the pixel coordinate system in one-to-one correspondence with the positions of the checkerboard corner points in the world coordinate system;
s1.3: calibrating the camera according to the corner point information obtained in S1.2 by using the Zhang calibration method to obtain the internal reference matrix mtx and the distortion coefficient matrix dist of the vehicle-mounted camera;
s1.4: and carrying out distortion correction on the image A1 to be detected shot by the vehicle-mounted camera according to the internal reference matrix mtx and the distortion coefficient matrix dist obtained in the S1.3 to obtain a corrected image A2.
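The distortion coefficient matrix dist estimated in S1.3 follows the polynomial radial model underlying Zhang-style calibration. The numpy sketch below is illustrative only: the k1/k2 values are made up, and the tangential terms p1/p2 are omitted; in practice S1.4 is what cv2.undistort performs with mtx and dist.

```python
import numpy as np

def radial_distort(points, k1, k2):
    """Apply the radial part of the lens distortion model to normalized
    image coordinates: p_d = p * (1 + k1*r^2 + k2*r^4)."""
    pts = np.asarray(points, dtype=float)
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)  # squared radius per point
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)

# A point on the optical axis is unmoved; off-axis points shift radially,
# which is why straight chessboard lines appear curved before correction.
center = radial_distort([[0.0, 0.0]], k1=-0.2, k2=0.05)
edge = radial_distort([[0.5, 0.5]], k1=-0.2, k2=0.05)
```

Distortion correction inverts this mapping for every pixel, which is why the coefficients must be estimated once per camera from the checkerboard views.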
3. The method according to claim 1, wherein the step S2 specifically includes the following steps:
s2.1: acquiring a position coordinate array src of four key pixel points in the corrected image A2, and a position coordinate array dst of the corresponding positions of the four pixel points after image torsion;
s2.2: performing perspective transformation calculation by using the arrays src and dst in the S2.1 to obtain a torsion matrix M for perspective transformation and a reverse torsion matrix Minv for recovering an image;
s2.3: and performing perspective transformation on the corrected image A2 according to the torsion matrix M in S2.2 to obtain a road surface top view A3.
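The torsion matrix M of S2.2 is a 3x3 homography fully determined by the four src-to-dst point pairs. The numpy sketch below shows that computation; the coordinates are made-up example values, not the patent's key points, and in practice this step is what cv2.getPerspectiveTransform computes.

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 torsion (homography) matrix M with M[2,2] = 1
    mapping the four src points onto the four dst points (a DLT sketch)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(M, x, y):
    """Map one point through the homography (homogeneous divide)."""
    u, v, w = M @ np.array([x, y, 1.0])
    return u / w, v / w

# Trapezoid (road region in the corrected image) -> rectangle (top view);
# the inverse matrix Minv recovers the camera view, as used in S5.3.
src = [(580, 460), (700, 460), (1040, 680), (260, 680)]
dst = [(260, 0), (1040, 0), (1040, 720), (260, 720)]
M = perspective_matrix(src, dst)
Minv = np.linalg.inv(M)
```

Because a homography is defined up to scale, inverting M directly yields a valid reverse torsion matrix: warp_point normalizes by the third coordinate either way.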
4. The method according to claim 1, wherein the step S3 specifically includes the following steps:
s3.1: converting the road surface top view A3 from the RGB color space to the HLS color space, and applying threshold filtering to the S channel and the L channel for binarization: the pixel value at a pixel point meeting the threshold condition is set to 1, otherwise it is set to 0, so as to obtain channel information P1 under the threshold condition, wherein the threshold condition of the S channel is [140, 255] and the threshold condition of the L channel is [120, 255];
s3.2: for the road surface top view A3, converting an image A3 into a gray scale image to obtain an image A4, and performing edge extraction processing on the image A4, wherein the edge extraction processing uses a Sobel operator;
wherein the transverse Sobel operator is

    [-1  0  +1]
    [-2  0  +2]
    [-1  0  +1]

and the longitudinal Sobel operator is

    [-1  -2  -1]
    [ 0   0   0]
    [+1  +2  +1]
s3.3: performing convolution on the image A4 with the transverse Sobel operator to obtain the transverse gradient gx of the image A4, normalizing gx to [0, 255], and applying threshold filtering for binarization: the pixel value at a pixel point meeting the threshold condition is set to 1, otherwise it is set to 0, so as to obtain channel information P2 under the threshold condition, wherein the threshold condition is [25, 200];
s3.4: performing convolution on the image A4 with the longitudinal Sobel operator to obtain the longitudinal gradient gy of the image A4, normalizing gy to [0, 255], and applying threshold filtering for binarization: the pixel value at a pixel point meeting the threshold condition is set to 1, otherwise it is set to 0, so as to obtain channel information P3 under the threshold condition, wherein the threshold condition is [25, 200];
s3.5: from the transverse gradient gx and the longitudinal gradient gy of the image a4 in S3.3 and S3.4, the gradient magnitude mag and the gradient direction dir are calculated:
    mag = sqrt(gx^2 + gy^2)
    dir = arctan(|gy| / |gx|)
further normalizing mag to [0, 255] and applying threshold filtering for binarization, the pixel value at a pixel point meeting the threshold condition being set to 1 and otherwise to 0, so as to obtain channel information P4 under the threshold condition, wherein the threshold condition is [30, 100]; applying threshold filtering to dir for binarization, the pixel value at a pixel point meeting the threshold condition being set to 1 and otherwise to 0, so as to obtain channel information P5 under the threshold condition, wherein the threshold condition is [0.8, 1.2];
s3.6: stacking the channel information P1, P2, P3, P4 and P5 to obtain channel information P6: for any pixel point, P6 is 1 if (P2 is 1 and P3 is 1) or (P4 is 1 and P5 is 1); the pixel values of P1 and P6 are then combined by a binary OR operation, the pixel value of a pixel point being set to 1 when at least one of P1 and P6 is 1, so that a binary image, namely the lane line edge feature image A5, is obtained.
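Per pixel, steps S3.2-S3.6 reduce to a pair of 3x3 convolutions followed by interval thresholding. The numpy sketch below illustrates this on a synthetic stripe image; the valid-mode (unpadded) correlation and the toy pixel values are assumptions made for illustration, while the [25, 200] threshold condition is the one from S3.3.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # the longitudinal operator is the transpose

def correlate3(img, k):
    """Valid-mode 3x3 correlation over a grayscale image (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def scale_255(g):
    """Normalize absolute gradient values to the [0, 255] range."""
    g = np.abs(g)
    return 255.0 * g / max(g.max(), 1e-9)

def binary_threshold(channel, lo, hi):
    """1 where lo <= value <= hi, else 0 (the filtering behind P1-P5)."""
    return ((channel >= lo) & (channel <= hi)).astype(np.uint8)

# Toy top view: a strong vertical stripe plus a weaker stripe at column 6.
img = np.zeros((8, 8))
img[:, 3:5] = 200.0
img[:, 6] = 50.0
gx = scale_255(correlate3(img, SOBEL_X))
mag = scale_255(np.hypot(correlate3(img, SOBEL_X), correlate3(img, SOBEL_Y)))
p2 = binary_threshold(gx, 25, 200)  # only the weaker edge falls in [25, 200]
```

The channel combination of S3.6 is then elementwise boolean logic over such binary maps, e.g. P6 = (P2 & P3) | (P4 & P5) and A5 = P1 | P6.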
5. The method according to claim 1, wherein the step S4 specifically includes the following steps:
s4.1: dividing the image A5 into left and right search areas from the middle along the x-axis direction, then performing histogram statistics in the x direction on the pixels of the left and right search areas of the image A5, and locating the histogram peak positions x1 and x2 as the search starting points of the two lane lines;
s4.2: obtaining the lane boundaries by sliding-window polynomial fitting; setting the width and height of the rectangular search window, wherein the width is set manually and the height is the image height divided by the preset number of search windows, the preset number of search windows being 9; for the left and right lane lines, respectively taking the search starting points x1 and x2 as the base points of the current search windows, and taking each current base point as the center of the bottom edge of its search window to obtain the first search window;
s4.3: counting the number of non-zero pixels in the first search window, and taking the mean of the non-zero pixel coordinates as the lane line center point position of the current search window; if the number m of non-zero pixels in the search window is greater than a preset minimum number n of non-zero pixels, taking the mean abscissa of the non-zero pixels in the window as the base-point abscissa of the generated second search window; if m does not meet the threshold n, taking the base-point abscissa of the first search window as the base-point abscissa of the generated second search window; in either case, the base-point ordinate of the generated second search window is the ordinate of the upper boundary of the first search window;
s4.4: taking the generated second search window as the first search window of S4.3 and repeatedly executing S4.3, iterating in turn until the upper boundary of the newly generated search window reaches the upper boundary of the image A5; after the loop ends, performing quadratic polynomial fitting on the lane line center point positions of all the search windows to obtain the lane line curve parameters corresponding to the current search.
6. The method according to claim 1, wherein the step S5 specifically includes the following steps:
s5.1: creating a blank picture of the same size as the image A5, and drawing, according to the curve parameters of the left and right lane lines obtained by fitting in S4.4, the quadratic polynomial curves corresponding to the left and right lane lines on the picture;
s5.2: filling the polygon between the left and right curves with color to obtain a picture A6;
s5.3: performing inverse perspective transformation on the picture A6 back to the view angle of the vehicle-mounted camera by using the inverse torsion matrix Minv to obtain a lane marking picture A7;
s5.4: overlaying the lane line marking picture A7 on the corrected image A2 with a weight of 0.3, that is, the pixel value of each point in the lane line marking picture A7 is multiplied by 0.3 and added to the pixel value of the corresponding point in the corrected image A2, so as to realize the visualization of the lane line detection result on the original image.
CN201911364127.0A 2019-12-26 2019-12-26 Lane line detection method based on edge features and sliding window Pending CN111126306A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911364127.0A CN111126306A (en) 2019-12-26 2019-12-26 Lane line detection method based on edge features and sliding window


Publications (1)

Publication Number Publication Date
CN111126306A true CN111126306A (en) 2020-05-08

Family

ID=70502853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911364127.0A Pending CN111126306A (en) 2019-12-26 2019-12-26 Lane line detection method based on edge features and sliding window

Country Status (1)

Country Link
CN (1) CN111126306A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895375A (en) * 2017-11-23 2018-04-10 中国电子科技集团公司第二十八研究所 The complicated Road extracting method of view-based access control model multiple features
CN109785291A (en) * 2018-12-20 2019-05-21 南京莱斯电子设备有限公司 A kind of lane line self-adapting detecting method
CN110222658A (en) * 2019-06-11 2019-09-10 腾讯科技(深圳)有限公司 The acquisition methods and device of road vanishing point position
CN110414385A (en) * 2019-07-12 2019-11-05 淮阴工学院 A kind of method for detecting lane lines and system based on homography conversion and characteristic window


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG1688899: "Lane detection (Advanced Lane Finding Project)", GITHUB.COM *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11682190B2 (en) * 2019-04-10 2023-06-20 Axis Ab Method, system, and device for detecting an object in a distorted image
CN111693566B (en) * 2020-05-12 2023-04-28 江苏理工学院 Automobile exhaust detection device and detection method based on infrared thermal imaging technology
CN111693566A (en) * 2020-05-12 2020-09-22 江苏理工学院 Automobile exhaust detection device and detection method based on infrared thermal imaging technology
CN112017249A (en) * 2020-08-18 2020-12-01 东莞正扬电子机械有限公司 Vehicle-mounted camera roll angle obtaining and mounting angle correcting method and device
CN112270690A (en) * 2020-10-12 2021-01-26 淮阴工学院 Self-adaptive night lane line detection method based on improved CLAHE and sliding window search
CN112270690B (en) * 2020-10-12 2022-04-26 淮阴工学院 Self-adaptive night lane line detection method based on improved CLAHE and sliding window search
CN112241714A (en) * 2020-10-22 2021-01-19 北京字跳网络技术有限公司 Method and device for identifying designated area in image, readable medium and electronic equipment
CN112241714B (en) * 2020-10-22 2024-04-26 北京字跳网络技术有限公司 Method and device for identifying designated area in image, readable medium and electronic equipment
CN112488046A (en) * 2020-12-15 2021-03-12 中国科学院地理科学与资源研究所 Lane line extraction method based on high-resolution images of unmanned aerial vehicle
CN112488046B (en) * 2020-12-15 2021-07-16 中国科学院地理科学与资源研究所 Lane line extraction method based on high-resolution images of unmanned aerial vehicle
CN113343742A (en) * 2020-12-31 2021-09-03 浙江合众新能源汽车有限公司 Lane line detection method and lane line detection system
CN114089786A (en) * 2021-09-29 2022-02-25 北京航空航天大学杭州创新研究院 Autonomous inspection system based on unmanned aerial vehicle vision and along mountain highway
CN115908201A (en) * 2023-01-09 2023-04-04 武汉凡德智能科技有限公司 Hot area quick correction method and device for image distortion
CN115908201B (en) * 2023-01-09 2023-11-28 武汉凡德智能科技有限公司 Method and device for quickly correcting hot zone of image distortion

Similar Documents

Publication Publication Date Title
CN111126306A (en) Lane line detection method based on edge features and sliding window
CN109785291B (en) Lane line self-adaptive detection method
CN109886896B (en) Blue license plate segmentation and correction method
CN109784344B (en) Image non-target filtering method for ground plane identification recognition
CN109145915B (en) Rapid distortion correction method for license plate under complex scene
US7664315B2 (en) Integrated image processor
CN110414385B (en) Lane line detection method and system based on homography transformation and characteristic window
WO2022078074A1 (en) Method and system for detecting position relation between vehicle and lane line, and storage medium
CN109583365B (en) Method for detecting lane line fitting based on imaging model constrained non-uniform B-spline curve
CN105488501A (en) Method for correcting license plate slant based on rotating projection
CN107563330B (en) Horizontal inclined license plate correction method in surveillance video
CN110110608B (en) Forklift speed monitoring method and system based on vision under panoramic monitoring
CN113298810B (en) Road line detection method combining image enhancement and depth convolution neural network
Youjin et al. A robust lane detection method based on vanishing point estimation
CN106875430B (en) Single moving target tracking method and device based on fixed form under dynamic background
CN114241438B (en) Traffic signal lamp rapid and accurate identification method based on priori information
CN113762134B (en) Method for detecting surrounding obstacles in automobile parking based on vision
CN111881878B (en) Lane line identification method for look-around multiplexing
CN114241436A (en) Lane line detection method and system for improving color space and search window
CN112116644A (en) Vision-based obstacle detection method and device and obstacle distance calculation method and device
CN111428538B (en) Lane line extraction method, device and equipment
CN108389177B (en) Vehicle bumper damage detection method and traffic safety early warning method
CN111626180B (en) Lane line detection method and device based on polarization imaging
CN112070081B (en) Intelligent license plate recognition method based on high-definition video
JP4639044B2 (en) Contour shape extraction device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200508