CN111783666A - Rapid lane line detection method based on continuous video frame corner feature matching - Google Patents

Rapid lane line detection method based on continuous video frame corner feature matching

Info

Publication number
CN111783666A
CN111783666A
Authority
CN
China
Prior art keywords
image
lane line
lane
points
video frames
Prior art date
Legal status
Pending
Application number
CN202010625087.7A
Other languages
Chinese (zh)
Inventor
Zhuang Boyang
Zuo Rui
Shen Yujing
Current Assignee
Beijing Institute of Computer Technology and Applications
Original Assignee
Beijing Institute of Computer Technology and Applications
Priority date
Application filed by Beijing Institute of Computer Technology and Applications filed Critical Beijing Institute of Computer Technology and Applications
Priority to CN202010625087.7A priority Critical patent/CN111783666A/en
Publication of CN111783666A publication Critical patent/CN111783666A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/247 Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a rapid lane line detection method based on continuous video frame corner feature matching, which comprises the following steps: calculating an image distortion matrix; performing image acquisition and image correction; using a corner detection and matching method to delimit a dynamic region of interest (if the lane line identification result of the previous frame is a failure, the region of interest is selected as the global region and this step is skipped); calculating the coordinate values of the feature point pairs of the preceding and following video frames: when the lane line recognition result of the previous frame is true, performing optical flow estimation on points with salient corner features in the previous frame and solving the coordinate values of the feature point pairs with a least-squares solution; estimating the position of the lane line in the current frame and selecting the identification region of interest; performing image binarization within the region of interest to obtain a binarized image; applying a perspective transformation to the image; searching for lane line pixel points on the top view and fitting the lane lines with a polynomial equation; and computing the lane line curvature and the lane departure distance and annotating them onto the original image.

Description

Rapid lane line detection method based on continuous video frame corner feature matching
Technical Field
The invention belongs to the field of environment perception and image processing for intelligent vehicles, and particularly relates to a rapid lane line detection method based on continuous video frame corner feature matching.
Background
According to statistics of the Ministry of Public Security, 32.14 million motor vehicles were newly registered nationwide in 2019, bringing the total number of motor vehicles to 348 million. The number of private cars exceeded 200 million for the first time, reaching 207 million. The total mileage of Chinese roads reached 4.8465 million kilometers, and the total expressway mileage reached 142,600 kilometers, ranking first in the world. The continuous growth in car ownership and trunk-road mileage has increased the frequency of traffic accidents; at the same time, because vehicle travel is concentrated in relatively few areas, traffic congestion has worsened. According to statistics from the Traffic Management Bureau of the Ministry of Public Security, 238,351 road traffic accidents occurred nationwide in 2019, killing 67,759 people, injuring 275,125, and causing direct property losses of 910 million yuan. Frequent traffic accidents endanger people's lives and property, waste social resources, and cause serious direct and indirect economic losses.
Countries have therefore given strong support to Intelligent Transportation Systems (ITS), and enterprises and research institutions around the world have invested substantial manpower and material resources in related research. Among these efforts, Intelligent Vehicles play a key role as a core component of an intelligent transportation system. An intelligent vehicle senses its surroundings in real time through sensors such as on-board cameras and radar, reconstructs a local map from the sensor information by algorithm, and controls the vehicle through an intelligent software system, so that it can travel on the road more safely and reliably.
Given the current state of the art and the limits of network transmission speed, vehicles cannot yet be driven fully automatically, so many research institutions have first implemented Advanced Driver Assistance Systems (ADAS) for automobiles. ADAS integrates perception of the vehicle's surroundings with vehicle control, realizing basic intelligent-driving functions such as automatic cruising, automatic collision avoidance, lane departure warning, and automatic parking. Among the many sensors on a vehicle, the camera is widely used because it is inexpensive and captures rich color information; it is mainly used to detect lane lines, vehicles, road signs, pedestrians, and other information around the vehicle.
Most current lane line identification algorithms use the Hough transform, linear-equation fitting, and similar methods. These methods are practical and can fit the lane line position on straight roads, but the Hough transform easily loses information on curved roads, and linear-equation fitting requires good lane line edge features as a premise and, lacking a global constraint, is easily affected by noise.
Disclosure of Invention
The invention aims to provide a rapid lane line detection method based on continuous video frame corner feature matching, so as to solve the lack of curvature information and the high complexity of existing lane line identification methods.
the invention discloses a rapid lane line detection method based on continuous video frame corner feature matching, which comprises the following steps: step 1, calculating an image distortion matrix; step 2, image acquisition and image correction are carried out; step 3, using a corner detection and matching method to define a dynamic region of interest, comprising: if the lane line identification result of the previous frame is failure, selecting the region of interest as the global region, and skipping the step 3; calculating the coordinate values of the feature point pairs of the front and rear video frames; under the condition that the lane line recognition result of the previous frame of image is true, performing optical flow estimation on points with obvious angular point characteristics in the previous frame of image, and solving coordinate values of characteristic points of the previous and next video frames by adopting a least square solution; estimating the position of a lane line in the image of the frame, and selecting and identifying an interested area; step 4, carrying out image binarization processing in the region of interest to obtain a binarized image; step 5, carrying out perspective transformation on the image to obtain a top view; step 6, searching lane line pixel points on the top view, and performing lane line fitting by using a polynomial equation; step 7, counting lane line curvature and lane departure distance, and marking the lane line curvature and the lane departure distance into an original image; and 8, repeating the steps 2 to 6 until the image acquisition fails or a termination identification signal is received.
In the rapid lane line detection method based on continuous video frame corner feature matching provided by the invention, corner points, i.e., the intersection points between contours, remain stable for the same scene even when the viewing angle changes. Corner points preserve the important features of the image while effectively reducing the volume of data, so their information content is high; this effectively improves computation speed, facilitates reliable image matching, and makes real-time processing possible.
Drawings
FIG. 1 is a flow chart of a method according to the present invention;
FIG. 2 is a distortion corrected image;
FIG. 3 is a grayed and binarized image;
FIG. 4 is a perspective transformed image;
FIG. 5 is an image after lane line fitting;
FIG. 6 is an annotated image.
Detailed Description
In order to make the objects, contents, and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
FIG. 1 is a flow chart of a method according to the present invention; FIG. 2 is a distortion corrected image; FIG. 3 is a grayed and binarized image; FIG. 4 is a perspective transformed image; FIG. 5 is an image after lane line fitting; fig. 6 is an annotated image, and as shown in fig. 1 to 6, the method for detecting a fast lane line based on feature matching of corner points of continuous video frames of the present invention includes the following steps:
step 1, calculating an image distortion matrix
Distortion is introduced by deviations in the camera lens during manufacturing and assembly, so the captured raw image is distorted. The acquired images therefore need to be corrected through calibration during use.
Step 1.1, shooting chessboard calibration images from 20 different angles using the interface provided by the camera driver
Step 1.2, graying the image by using a weighted average method
Step 1.3, searching the inner corner points of the calibration plate in the grayscale image and calculating the distortion matrix using Zhang Zhengyou's calibration method.
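For illustration, steps 1.1 to 1.3 could look like the following Python/OpenCV sketch; the board geometry, file layout, and parameter values are assumptions, not taken from the patent. cv2.calibrateCamera implements Zhang Zhengyou's method, and cv2.cvtColor grays with weights 0.299/0.587/0.114, close to the weighted average of step 1.2.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners per row/column (assumed board geometry)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.jpg"):          # the ~20 chessboard shots
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# cv2.calibrateCamera implements Zhang Zhengyou's method internally.
ret, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
```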
Step 2, image acquisition and image correction
At the beginning of each image processing cycle, it is first determined whether the current cycle is the first processing cycle; if so, the lane line identification result of the previous frame is marked as False.
The interface provided by the camera driver is called to capture an image in front of the vehicle, and the distortion matrix is used to correct the captured image, yielding a corrected image, as shown in fig. 2.
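A minimal sketch of this acquisition-and-correction step, reusing camera_matrix and dist_coeffs from the calibration sketch above (the device index and error handling are assumptions):

```python
import cv2

cap = cv2.VideoCapture(0)   # camera driver interface (device index assumed)
ok, frame = cap.read()
if ok:
    # distortion correction with the matrix from step 1
    corrected = cv2.undistort(frame, camera_matrix, dist_coeffs)
```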
Step 3, using the corner point detection and matching method to define the dynamic Region of interest (ROI)
Step 3.1, if the lane line identification result of the previous frame is False, the region of interest is selected as the global region, and step 3 is skipped.
Step 3.2, obtaining the coordinate values of the feature point pairs of the preceding and following video frames
When the lane line recognition result of the previous frame image is True, optical flow estimation is performed on the points with salient corner features in the previous frame using the L-K (Lucas-Kanade) method. The optical flow equation is an over-determined linear system, and multiple groups of coordinate values of the feature point pairs of the preceding and following video frames are obtained from its least-squares solution.
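The corner selection and L-K matching described here map naturally onto OpenCV, whose calcOpticalFlowPyrLK solves the over-determined L-K system by least squares internally. A sketch under assumed parameter values (prev_frame and curr_frame are the two corrected frames):

```python
import cv2
import numpy as np

prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

# Points with salient corner features in the previous frame (Shi-Tomasi).
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                             qualityLevel=0.01, minDistance=10)

# Track them into the current frame with pyramidal Lucas-Kanade.
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)

good_prev = p0[status.ravel() == 1].reshape(-1, 2)
good_curr = p1[status.ravel() == 1].reshape(-1, 2)
pairs = np.hstack([good_prev, good_curr])   # feature-point pairs (x0,y0,x1,y1)
```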
Step 3.3, the position of the lane line in the image of the frame is estimated, and the region of interest is selected and identified
Step 3.3.1, dividing the coordinate values of the feature point pairs into a left group and a right group according to the position of each coordinate relative to the vertical center line of the image
Step 3.3.2, removing foreground pixels
For the left and right groups of coordinate points, a DBSCAN clustering algorithm is applied to the displacement vectors of the coordinate points, and foreground pixels are eliminated to obtain the dominant background coordinate points.
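A possible realization of this foreground rejection using scikit-learn's DBSCAN (the eps and min_samples values are assumptions; the dominant cluster of displacement vectors is taken as the static background):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def background_points(points, displacements, eps=2.0, min_samples=5):
    """Keep the points whose displacement falls in the dominant cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(displacements)
    valid = labels[labels >= 0]
    if valid.size == 0:                   # no cluster found: keep everything
        return points
    main = np.bincount(valid).argmax()    # largest cluster = static background
    return points[labels == main]
```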
Step 3.3.3, solving the relative displacement vector of the background image by using a weighted mean method
Step 3.3.4, setting the region of interest
Using the derived displacement vector $(\bar{u}, \bar{v})$, the estimated curves of the left and right lanes are calculated respectively, and a sliding window of width b (b = 50) on each side of the estimated curves is set as the region of interest.
Step 4, carrying out image binarization processing in the region of interest to obtain a binarized image
Step 4.1, carrying out gray processing on the image by using a composite operator
The Sobel operator is calculated as:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A$$

where $G_x$ and $G_y$ are the horizontal and vertical edge-detection images respectively, $*$ denotes the planar convolution operation, and $A$ is the original image.

The amplitude and gradient direction of the Sobel operator are calculated as:

$$G = \sqrt{G_x^2 + G_y^2}, \qquad \theta = \arctan\!\left(\frac{G_y}{G_x}\right)$$

where $G$ and $\theta$ are the amplitude and gradient direction corresponding to each pixel point.
Step 4.1.1, graying the image using the Sobel operator in the horizontal direction, i.e., obtaining the image corresponding to Gx
Step 4.1.2, graying the image using the Sobel amplitude, i.e., obtaining the image corresponding to G
Step 4.1.3, graying the image using the Sobel gradient direction, i.e., obtaining the image corresponding to θ
Step 4.1.4, taking image saturation channel component
The hue (H), saturation (S), and lightness (L) channel components of the image in the HSL color space are separated, and the saturation channel component is taken.
Step 4.2, averaging the four grayscale images to obtain the grayscale image of the composite operator, as shown in FIG. 3.
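Steps 4.1 to 4.2 could be sketched as follows; the binarization threshold and per-channel rescaling are assumptions. The four averaged channels follow steps 4.1.1 to 4.1.4, with OpenCV's HLS ordering placing saturation at index 2:

```python
import cv2
import numpy as np

def composite_binary(img, thresh=60):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)          # horizontal Sobel (Gx)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)

    def to8(x):                                      # rescale to 0..255
        x = np.absolute(x)
        return np.uint8(255 * x / (x.max() + 1e-9))

    mag = np.sqrt(gx ** 2 + gy ** 2)                 # amplitude G
    theta = np.arctan2(np.absolute(gy), np.absolute(gx))  # gradient direction
    sat = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)[:, :, 2]   # S channel in HLS

    # average of the four grayscale images = composite-operator gray image
    combined = (to8(gx).astype(np.float32) + to8(mag) + to8(theta) + sat) / 4
    _, binary = cv2.threshold(combined.astype(np.uint8), thresh, 255,
                              cv2.THRESH_BINARY)
    return binary
```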
Step 5, performing a perspective transformation on the image to obtain a top view, as shown in fig. 4.
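A sketch of the perspective transformation; the source trapezoid and destination rectangle are illustrative values, not from the patent, and Minv is kept for the annotation in step 7:

```python
import cv2
import numpy as np

h, w = binary.shape[:2]
src = np.float32([[w * 0.45, h * 0.63], [w * 0.55, h * 0.63],
                  [w * 0.90, h * 0.95], [w * 0.10, h * 0.95]])
dst = np.float32([[w * 0.25, 0], [w * 0.75, 0],
                  [w * 0.75, h], [w * 0.25, h]])

M = cv2.getPerspectiveTransform(src, dst)
Minv = cv2.getPerspectiveTransform(dst, src)     # used later for annotation
top_view = cv2.warpPerspective(binary, M, (w, h))
```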
Step 6, finding the lane line pixel points on the top view, and carrying out lane line fitting by using a polynomial equation
Step 6.1, searching the pixel points of the lane lines
Step 6.1.1, calculating the histogram of the image at the lower half part, and counting the peak positions of the histogram at the left side and the right side
Step 6.1.2, segmenting the image
The image is divided horizontally into 9 equal slices; in the bottom slice, two rectangular sliding windows, each with the height of a slice and a width of 200 pixels, are placed to cover the left and right peak positions of the histogram.
Step 6.1.3, searching for lane line pixel points
The sliding windows are moved from bottom to top, the lane line pixel points in each slice are searched in turn, and the center of each sliding rectangle is repositioned in the next slice up.
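A sketch of the histogram peak search and the 9-slice sliding-window scan of steps 6.1.1 to 6.1.3 (the 200-pixel window width and 9 slices are from the text; the minimum pixel count for re-centering is an assumption):

```python
import numpy as np

def find_lane_pixels(top_view, n_slices=9, width=200, recenter_min=50):
    h, w = top_view.shape
    hist = np.sum(top_view[h // 2:, :], axis=0)        # lower-half histogram
    left_x = int(np.argmax(hist[:w // 2]))             # left peak position
    right_x = int(np.argmax(hist[w // 2:])) + w // 2   # right peak position

    ys, xs = np.nonzero(top_view)
    slice_h = h // n_slices
    left_sel = np.zeros(xs.size, bool)
    right_sel = np.zeros(xs.size, bool)
    for i in range(n_slices):                          # bottom-up over slices
        y_top, y_bot = h - (i + 1) * slice_h, h - i * slice_h
        in_slice = (ys >= y_top) & (ys < y_bot)
        l = in_slice & (np.abs(xs - left_x) < width // 2)
        r = in_slice & (np.abs(xs - right_x) < width // 2)
        left_sel |= l
        right_sel |= r
        if l.sum() > recenter_min:                     # re-center the windows
            left_x = int(xs[l].mean())
        if r.sum() > recenter_min:
            right_x = int(xs[r].mean())
    return xs[left_sel], ys[left_sel], xs[right_sel], ys[right_sel]
```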
Step 6.2, second order polynomial fitting Using least squares
Second-order polynomial fitting is applied separately to the left and right groups of lane line pixel points using the least-squares method to obtain the lane line equation under the perspective transformation. The lane line equation $f(y)$ is:

$$f(y) = a_2 y^2 + a_1 y + a_0$$

where the polynomial coefficients $a_0, a_1, a_2$ are the least-squares solution

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = (Y^{\mathsf{T}} Y)^{-1} Y^{\mathsf{T}} X, \qquad Y = \begin{bmatrix} 1 & y_1 & y_1^2 \\ \vdots & \vdots & \vdots \\ 1 & y_n & y_n^2 \end{bmatrix}, \quad X = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$$

and $x_i$ and $y_i$ are the horizontal and vertical coordinates of the i-th lane line pixel point found in step 6.1.3.
The position of the fitted lane line in the image is shown in fig. 5.
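For reference, the same fit can be obtained with np.polyfit, which solves these normal equations (left_xs/left_ys and the right-side arrays come from the window-search sketch above):

```python
import numpy as np

# np.polyfit returns coefficients highest degree first, i.e. [a2, a1, a0]
# for x = f(y), matching the equation above.
left_fit = np.polyfit(left_ys, left_xs, 2)
right_fit = np.polyfit(right_ys, right_xs, 2)
```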
Step 6.3, counting the pixel point parameters of the lane lines and marking the recognition status bits of the lane lines
When the number of pixel points is below the set threshold of 200, or the curvature and extent of the fitted curve fall outside the threshold range, the detection is considered failed: the lane line identification status bit is marked False (so that the next frame uses global detection), and the detection result of the previous video frame is used for the current frame; otherwise, the lane line identification status bit is marked True.
Step 7, counting the curvature of the lane line and the lane departure distance, and marking the information into the original image
Step 7.1, calculating the lane curvature and converting the unit into meters according to the corresponding relation.
Step 7.2, calculating the lane departure distance and converting the unit into meters according to the corresponding relation.
Step 7.3, information annotation.
The left and right lane lines are drawn on a blank image with the same height and width as the perspective-transformed image, and the area between them is filled with green. This image is then inverse-perspective-transformed and fused with the original image; finally, the lane line curvature and lane departure distance are written as text in the upper-left part of the image. The annotated image is shown in fig. 6.
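A sketch of this annotation step (frame, the polynomial fits, and the inverse matrix Minv are as in the earlier sketches; the text placement and blending weight are assumptions):

```python
import cv2
import numpy as np

def annotate(frame, left_fit, right_fit, Minv, curvature_m, offset_m):
    h, w = frame.shape[:2]
    overlay = np.zeros_like(frame)
    ys = np.arange(h)
    xl = np.polyval(left_fit, ys)
    xr = np.polyval(right_fit, ys)
    # lane polygon: left edge top-to-bottom, right edge bottom-to-top
    pts = np.int32(np.vstack([np.stack([xl, ys], axis=1),
                              np.stack([xr, ys], axis=1)[::-1]]))
    cv2.fillPoly(overlay, [pts], (0, 255, 0))          # green lane area
    unwarped = cv2.warpPerspective(overlay, Minv, (w, h))
    out = cv2.addWeighted(frame, 1.0, unwarped, 0.3, 0)
    cv2.putText(out, "curvature %.0f m, offset %.2f m"
                % (curvature_m, offset_m),
                (30, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    return out
```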
Step 8, repeating the steps 2 to 6 until the image acquisition fails or the termination identification signal is received
The invention provides another embodiment of a rapid lane line detection method based on continuous video frame corner feature matching, which comprises the following steps:
step 1, calculating an image distortion matrix
Distortion is introduced by deviations in the camera lens during manufacturing and assembly, so the raw image is distorted. The acquired images therefore need to be corrected during use.
Step 1.1, shooting chessboard calibration images from 20 different angles using the interface provided by the camera driver
Step 1.2, graying the image by using a weighted average method
f(i,j)=0.3*R(i,j)+0.59*G(i,j)+0.11*B(i,j)
where f(i, j) is the grayed value of the pixel at coordinates (i, j), and R(i, j), G(i, j), and B(i, j) are the red, green, and blue channel components of the pixel at coordinates (i, j) in the color image.
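Vectorized over a whole image, the formula could read as follows (OpenCV's BGR channel order is assumed):

```python
import numpy as np

def gray_weighted(img_bgr):
    # OpenCV stores channels as B, G, R; weights follow the formula above
    b, g, r = img_bgr[..., 0], img_bgr[..., 1], img_bgr[..., 2]
    return (0.3 * r + 0.59 * g + 0.11 * b).astype(np.uint8)
```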
Step 1.3, searching the inner corner points of the calibration plate in the grayscale image and calculating the distortion matrix using Zhang Zhengyou's calibration method
Step 2, image acquisition and image correction
At the beginning of each image processing cycle, it is first determined whether the current cycle is the first processing cycle; if so, the lane line identification result of the previous frame is marked as False.
The interface provided by the camera driver is called to capture an image in front of the vehicle, and the distortion matrix is used to correct the captured image, yielding a corrected image, as shown in fig. 2.
Step 3, using the corner point detection and matching method to define the dynamic Region of interest (ROI)
Step 3.1, if the lane line identification result of the previous frame is False, the region of interest is selected as the global region, and this step is skipped.
Step 3.2, performing optical flow estimation and solving the least-squares solution
When the lane line recognition result of the previous frame image is True, optical flow estimation is performed on the points with salient corner features in the previous frame image using the L-K method. The optical flow equation is an over-determined linear system whose least-squares solution is:

$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum I_x I_t \\ -\sum I_y I_t \end{bmatrix}$$

where $u$ and $v$ are the relative displacement coordinates of a feature point between video frames, and the symbols are computed as

$$I_x = \frac{\partial I}{\partial x}, \qquad I_y = \frac{\partial I}{\partial y}, \qquad I_t = \frac{\partial I}{\partial t}$$

i.e., the partial derivatives of the image intensity $I$ with respect to the horizontal coordinate, the vertical coordinate, and time, with the sums taken over the neighbourhood window of the feature point.

After calculation, multiple groups of coordinate values of the feature point pairs of the preceding and following video frames are obtained.
Step 3.3, the position of the lane line in the image of the frame is estimated, and the region of interest is selected and identified
Step 3.3.1, dividing the coordinate values of the feature point pairs into a left group and a right group according to the position of each coordinate relative to the vertical center line of the image
Step 3.3.2, removing foreground pixels
For the left and right groups of coordinate points, a DBSCAN clustering algorithm is applied to the displacement vectors of the coordinate points, and foreground pixels are eliminated to obtain the dominant background coordinate points.
Step 3.3.3, solving the relative displacement vector of the background image
The relative displacement vector of the background image is solved using a weighted mean method, with the calculation formula:

$$(\bar{u}, \bar{v}) = \frac{\sum_i w_i (u_i, v_i)}{\sum_i w_i}$$

where $(u_i, v_i)$ is the relative displacement vector of the i-th group of feature pixel points, and the weight $w_i$ is the reciprocal of the distance between the i-th feature pixel point and the lane fitting curve.
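A direct transcription of this weighted mean (the epsilon guarding against zero distance is an assumption):

```python
import numpy as np

def mean_displacement(disp, dist_to_curve):
    """disp: (N, 2) per-point (u_i, v_i); dist_to_curve: (N,) distances."""
    w = 1.0 / (dist_to_curve + 1e-6)      # reciprocal-distance weights
    return (disp * w[:, None]).sum(axis=0) / w.sum()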
Step 3.3.4, setting the region of interest
Using the derived displacement vector $(\bar{u}, \bar{v})$, the estimated curves of the left and right lanes are calculated respectively, and a sliding window of width b (b = 50) on each side of the estimated curves is set as the region of interest.
Step 4, carrying out image binarization processing in the region of interest to obtain a binarized image
Step 4.1, carrying out gray processing on the image by using a composite operator
The Sobel operator is calculated as:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A$$

where $G_x$ and $G_y$ are the horizontal and vertical edge-detection images respectively, $*$ denotes the planar convolution operation, and $A$ is the original image.

The amplitude and gradient direction of the Sobel operator are calculated as:

$$G = \sqrt{G_x^2 + G_y^2}, \qquad \theta = \arctan\!\left(\frac{G_y}{G_x}\right)$$

where $G$ and $\theta$ are the amplitude and gradient direction corresponding to each pixel point.
Step 4.1.1, graying the image using the Sobel operator in the horizontal direction, i.e., obtaining the image corresponding to Gx
Step 4.1.2, graying the image using the Sobel amplitude, i.e., obtaining the image corresponding to G
Step 4.1.3, graying the image using the Sobel gradient direction, i.e., obtaining the image corresponding to θ
Step 4.1.4, separating image saturation channel components
The hue (H), saturation (S), and lightness (L) channel components of the image in the HSL color space are separated, and the saturation channel component is taken.
Step 4.2, averaging the four grayscale images to obtain the grayscale image of the composite operator, as shown in fig. 3
Step 5, performing a perspective transformation on the image to obtain a top view, as shown in fig. 4
Step 6, finding the lane line pixel points on the top view, and carrying out lane line fitting by using a polynomial equation
Step 6.1, searching the pixel points of the lane lines
Step 6.1.1, calculating the histogram of the image at the lower half part, and counting the peak positions of the histogram at the left side and the right side
Step 6.1.2, dividing the image horizontally into 9 equal slices, and placing two rectangular sliding windows, each with the height of a slice and a width of 200 pixels, in the bottom slice to cover the left and right peak positions of the histogram
Step 6.1.3, moving the sliding windows from bottom to top, searching the lane line pixel points in each slice in turn, and repositioning the center of each sliding rectangle in the next slice up
Step 6.2, performing second-order polynomial fitting
Second-order polynomial fitting is applied separately to the left and right groups of lane line pixel points using the least-squares method to obtain the lane line equation under the perspective transformation, $f(y)$:

$$f(y) = a_2 y^2 + a_1 y + a_0$$

where the polynomial coefficients $a_0, a_1, a_2$ are the least-squares solution

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = (Y^{\mathsf{T}} Y)^{-1} Y^{\mathsf{T}} X, \qquad Y = \begin{bmatrix} 1 & y_1 & y_1^2 \\ \vdots & \vdots & \vdots \\ 1 & y_n & y_n^2 \end{bmatrix}, \quad X = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$$

and $x_i$ and $y_i$ are the horizontal and vertical coordinates of the i-th lane line pixel point found in step 6.1.3.
The position of the fitted lane line in the image is shown in fig. 5.
Step 6.3, counting the pixel point parameters of the lane lines and marking the recognition status bits of the lane lines
When the number of pixel points is below the set threshold of 200, or the curvature and extent of the fitted curve fall outside the threshold range, the detection is considered failed: the lane line identification status bit is marked False (so that the next frame uses global detection), and the detection result of the previous video frame is used for the current frame; otherwise, the lane line identification status bit is marked True.
Step 7, counting the curvature of the lane line and the lane departure distance, and marking the information into the original image
Step 7.1, calculating lane curvature
According to the fitting equations of the left and right lane lines, the mean curvature of each lane line over the vertical coordinates of the lower half of the image is computed; the units are converted to meters using the correspondences of 3.7 meters / 700 pixels in the horizontal direction and 30 meters / 720 pixels in the vertical direction, and the left and right mean curvatures are averaged.
Step 7.2, calculating the lane departure distance
According to the fitting equations of the left and right lane lines, the difference between the midpoint of the left and right lane lines at each vertical coordinate in the lower half of the image and the horizontal center of the image is computed, and the unit is converted to meters using the correspondence of 3.7 meters / 700 pixels in the horizontal direction.
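Steps 7.1 and 7.2 could be sketched as follows, using the stated pixel-to-meter scales and the standard radius-of-curvature formula for a second-order polynomial; the coefficient rescaling converts the pixel-space fit to metres:

```python
import numpy as np

XM, YM = 3.7 / 700, 30 / 720        # metres per pixel (from the text above)

def curvature_m(fit_px, y_px):
    """Radius of curvature in metres for x = a2*y^2 + a1*y + a0 (pixel fit)."""
    a2, a1, _ = fit_px
    a2m, a1m = a2 * XM / YM ** 2, a1 * XM / YM   # rescale coefficients
    ym = y_px * YM
    return (1 + (2 * a2m * ym + a1m) ** 2) ** 1.5 / abs(2 * a2m)

def lane_offset_m(left_fit, right_fit, y_px, img_width):
    """Signed distance of the lane centre from the image centre, in metres."""
    centre = (np.polyval(left_fit, y_px) + np.polyval(right_fit, y_px)) / 2
    return (centre - img_width / 2) * XM
```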
Step 7.3, information labeling
The left and right lane lines are drawn on a blank image with the same height and width as the perspective-transformed image, and the area between them is filled with green. This image is then inverse-perspective-transformed and fused with the original image; finally, the lane line curvature and lane departure distance are written as text in the upper-left part of the image. The annotated image is shown in fig. 6.
Step 8, repeating the steps 2 to 6 until the image acquisition fails or the termination identification signal is received
The invention provides a rapid lane line detection method based on continuous video frame corner feature matching, mainly for lane line identification in assisted and automatic driving. By introducing corner matching across continuous video frames, the method detects the relative movement of the background in front of the vehicle and dynamically estimates the lane line range from the temporal correlation between video frames, reducing the region of interest (ROI). The image then undergoes combined graying and binarization. Finally, the image is perspective-transformed, the lane line pixel points are searched, and the lane lines are fitted with a second-order polynomial equation.
To address the lack of curvature information and the high complexity of existing lane line identification methods, the invention provides a rapid lane line detection method based on continuous video frame corner feature matching. Corner points, i.e., the intersection points between contours, remain stable for the same scene even when the viewing angle changes. Corner points preserve the important features of the image while effectively reducing the volume of data, so their information content is high; this effectively improves computation speed, facilitates reliable image matching, and makes real-time processing possible.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A rapid lane line detection method based on continuous video frame corner feature matching comprises the following steps:
step 1, calculating an image distortion matrix;
step 2, image acquisition and image correction are carried out;
step 3, using a corner detection and matching method to define a dynamic region of interest, comprising:
if the lane line identification result of the previous frame is a failure, selecting the region of interest as the global region, and skipping step 3;
calculating the coordinate values of the feature point pairs of the front and rear video frames;
under the condition that the lane line recognition result of the previous frame of image is true, performing optical flow estimation on points with obvious angular point characteristics in the previous frame of image, and solving coordinate values of characteristic points of the previous and next video frames by adopting a least square solution;
estimating the position of a lane line in the image of the frame, and selecting and identifying an interested area;
step 4, carrying out image binarization processing in the region of interest to obtain a binarized image;
step 5, carrying out perspective transformation on the image to obtain a top view;
step 6, searching lane line pixel points on the top view, and performing lane line fitting by using a polynomial equation;
step 7, counting lane line curvature and lane departure distance, and marking the lane line curvature and the lane departure distance into an original image;
and 8, repeating the steps 2 to 6 until the image acquisition fails or a termination identification signal is received.
2. The fast lane-line detection method based on corner feature matching of continuous video frames as claimed in claim 1, wherein the step 1, calculating the image distortion matrix comprises:
shooting chessboard calibration images from 20 different angles using the interface provided by the camera driver;
graying the image by using a weighted average method;
and searching the inner corner points of the calibration plate in the grayscale image and calculating the distortion matrix using Zhang Zhengyou's calibration method.
3. The fast lane line detection method based on corner feature matching of continuous video frames as claimed in claim 1, wherein the step 2 of image acquisition and image correction comprises:
at the beginning of each image processing cycle, determining whether the current cycle is the first processing cycle, and if so, marking the lane line identification result of the previous frame as failure;
and calling an interface provided by a camera drive to acquire an image in front of the vehicle, and performing distortion correction on the acquired image by using a distortion matrix to obtain a corrected image.
4. The method of claim 1, wherein the removing foreground pixels comprises:
and (3) for the left and right groups of coordinate points, respectively using a DBSCAN clustering algorithm for the coordinate point displacement vectors, and eliminating foreground pixels to obtain main background coordinate points.
5. The method as claimed in claim 1, wherein the step of predicting the position of the lane line in the current frame image and the step of selecting the region of interest includes:
dividing the coordinate values of the feature point pairs into a left group and a right group according to the position of the coordinate in the image relative to a middle vertical line;
removing foreground pixels to obtain main background coordinate points;
solving a relative displacement vector of the front background image by using a weighted mean method;
setting a region of interest;
using the derived displacement vector $(\bar{u}, \bar{v})$ to respectively calculate the estimated curves of the left and right lanes, and setting a sliding window of width b on each side of the estimated curves as the region of interest.
6. The fast lane-line detection method based on corner feature matching of continuous video frames as claimed in claim 1, wherein step 4 comprises:
the calculation method of the Sobel operator comprises:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A$$

where $G_x$ and $G_y$ are the horizontal and vertical edge-detection images respectively, $*$ is the planar convolution operation, and $A$ is the original image;

the formula for calculating the amplitude and gradient direction of the Sobel operator is:

$$G = \sqrt{G_x^2 + G_y^2}, \qquad \theta = \arctan\!\left(\frac{G_y}{G_x}\right)$$

where $G$ and $\theta$ are respectively the amplitude and gradient direction corresponding to the pixel points;
graying the image by using a Sobel operator in the horizontal direction to obtain a Gx corresponding image;
graying the image by using a Sobel operator on the amplitude value to obtain a G corresponding image;
graying the image by using a Sobel operator in the gradient direction, namely obtaining a theta corresponding image;
Taking an image saturation channel component;
and obtaining a gray level image of the composite operator.
7. The fast lane line detection method based on feature matching of corner points of continuous video frames as claimed in claim 1, wherein step 6 specifically comprises:
finding the lane line pixel points, comprising:
calculating a histogram of the image at the lower half part, and counting the peak positions of the histogram at the left side and the right side;
dividing the image horizontally into 9 equal slices, and placing two rectangular sliding windows, each with the height of a slice and a width of 200 pixels, in the bottom slice to cover the left and right peak positions of the histogram;
moving the sliding window from bottom to top, sequentially searching lane line pixel points in each slice, and repositioning the center of a sliding rectangle in the upper slice;
performing second-order polynomial fitting by using a least square method;
counting the lane line pixel point parameters and marking the lane line recognition status bit: when the number of pixel points is below the set threshold of 200, or the curvature and extent of the fitted curve fall outside the threshold range, marking the lane line recognition status bit as failure and using the detection result of the previous video frame for the current frame; otherwise, marking the lane line recognition status bit as success.
8. The fast lane line detection method based on feature matching of corner points of continuous video frames as claimed in claim 1, wherein step 7 specifically comprises:
calculating lane curvature, and converting the unit into meter according to the corresponding relation;
calculating lane departure distance, and converting the unit into meters according to the corresponding relation;
drawing the left and right lane lines on a blank image with the same height and width as the perspective-transformed image, filling the area between them with green, applying the inverse perspective transformation and fusing with the original image, and writing the lane line curvature and lane departure distance on the image as text.
9. The fast lane line detection method based on corner feature matching of continuous video frames as claimed in claim 7, wherein the performing of second order polynomial fitting using least squares method comprises:
respectively performing second-order polynomial fitting on the left and right groups of lane line pixel points by using the least-squares method to obtain the lane line equation under the perspective transformation, $f(y)$, with the formula:

$$f(y) = a_2 y^2 + a_1 y + a_0$$

wherein the polynomial coefficients $a_0$, $a_1$ and $a_2$ are calculated as the least-squares solution

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = (Y^{\mathsf{T}} Y)^{-1} Y^{\mathsf{T}} X, \qquad Y = \begin{bmatrix} 1 & y_1 & y_1^2 \\ \vdots & \vdots & \vdots \\ 1 & y_n & y_n^2 \end{bmatrix}, \quad X = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$$

wherein $x_i$ and $y_i$ are the horizontal and vertical coordinates of the i-th group of lane line pixel points found in the slice-by-slice search.
10. The fast lane detection method based on corner feature matching of successive video frames as claimed in claim 1,
wherein in step 3, the optical flow estimation and least-squares solution comprise:

$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum I_x I_t \\ -\sum I_y I_t \end{bmatrix}$$

wherein $u$ and $v$ are the relative displacement coordinates of the feature points between video frames, and

$$I_x = \frac{\partial I}{\partial x}, \qquad I_y = \frac{\partial I}{\partial y}, \qquad I_t = \frac{\partial I}{\partial t}$$

are the partial derivatives of the image intensity $I$, with the sums taken over the neighbourhood window of each feature point; after calculation, multiple groups of coordinate values of the feature point pairs of the preceding and following video frames are obtained.
CN202010625087.7A 2020-07-01 2020-07-01 Rapid lane line detection method based on continuous video frame corner feature matching Pending CN111783666A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010625087.7A CN111783666A (en) 2020-07-01 2020-07-01 Rapid lane line detection method based on continuous video frame corner feature matching


Publications (1)

Publication Number Publication Date
CN111783666A (en) 2020-10-16

Family

ID=72757800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010625087.7A Pending CN111783666A (en) 2020-07-01 2020-07-01 Rapid lane line detection method based on continuous video frame corner feature matching

Country Status (1)

Country Link
CN (1) CN111783666A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785291A (en) * 2018-12-20 2019-05-21 南京莱斯电子设备有限公司 A kind of lane line self-adapting detecting method
CN110647850A (en) * 2019-09-27 2020-01-03 福建农林大学 Automatic lane deviation measuring method based on inverse perspective principle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHUANG BOYANG: "Research on Fast Lane Line Recognition Algorithm Based on Optical Flow Method", Computer Measurement & Control, vol. 27, no. 9, pages 146-150 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095283A (en) * 2021-04-30 2021-07-09 南京工程学院 Lane line extraction method based on dynamic ROI and improved firefly algorithm
CN113095283B (en) * 2021-04-30 2023-08-25 南京工程学院 Lane line extraction method based on dynamic ROI and improved firefly algorithm
CN113378719A (en) * 2021-06-11 2021-09-10 许杰 Lane line recognition method and device, computer equipment and storage medium
CN113378719B (en) * 2021-06-11 2024-04-05 北京清维如风科技有限公司 Lane line identification method, lane line identification device, computer equipment and storage medium
CN113591565A (en) * 2021-06-25 2021-11-02 江苏理工学院 Machine vision-based lane line detection method, detection system and detection device
CN113591565B (en) * 2021-06-25 2023-07-18 江苏理工学院 Lane line detection method, detection system and detection device based on machine vision
CN113505747A (en) * 2021-07-27 2021-10-15 浙江大华技术股份有限公司 Lane line recognition method and apparatus, storage medium, and electronic device
CN115063761A (en) * 2022-05-19 2022-09-16 广州文远知行科技有限公司 Lane line detection method, device, equipment and storage medium
CN115116018A (en) * 2022-06-30 2022-09-27 北京旋极信息技术股份有限公司 Method and device for fitting lane line
CN117710795A (en) * 2024-02-06 2024-03-15 成都同步新创科技股份有限公司 Machine room line safety detection method and system based on deep learning
CN117710795B (en) * 2024-02-06 2024-06-07 成都同步新创科技股份有限公司 Machine room line safety detection method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN111783666A (en) Rapid lane line detection method based on continuous video frame corner feature matching
CN109190523B (en) Vehicle detection tracking early warning method based on vision
CN109299674B (en) Tunnel illegal lane change detection method based on car lamp
JP2917661B2 (en) Traffic flow measurement processing method and device
CN105005771B (en) A kind of detection method of the lane line solid line based on light stream locus of points statistics
CN108038416B (en) Lane line detection method and system
CN105741559B (en) A kind of illegal occupancy Emergency Vehicle Lane detection method based on track line model
CN101334836B (en) License plate positioning method incorporating color, size and texture characteristic
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN106022243B (en) A kind of retrograde recognition methods of the car lane vehicle based on image procossing
CN108052904B (en) Method and device for acquiring lane line
CN103235938A (en) Method and system for detecting and identifying license plate
CN109948552B (en) Method for detecting lane line in complex traffic environment
CN107563330B (en) Horizontal inclined license plate correction method in surveillance video
CN104778444A (en) Method for analyzing apparent characteristic of vehicle image in road scene
CN105654073A (en) Automatic speed control method based on visual detection
CN114898296A (en) Bus lane occupation detection method based on millimeter wave radar and vision fusion
CN106503640A (en) A kind of detection method for taking bus zone
CN106919939B (en) A kind of traffic signboard tracks and identifies method and system
CN109241920A (en) A kind of method for detecting lane lines for vehicle mounted road monitoring evidence-obtaining system
CN108284793A (en) A kind of vehicle sub-controlling unit
CN101369312B (en) Method and equipment for detecting intersection in image
CN112651293A (en) Video detection method for road illegal stall setting event
CN106557754A (en) A kind of vehicle detection at night and state judging method
WO2022142827A1 (en) Road occupancy information determination method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201016