Lane line detection method based on OpenCV
Technical Field
The invention relates to a lane line detection method, in particular to a lane line detection method based on OpenCV, and belongs to the technical field of artificial intelligence.
Background
With the steady growth of vehicle ownership and the continuous development of the automobile industry throughout the world, traffic pressure in cities everywhere is rising rapidly, and traffic safety has increasingly become a focus of global attention.
Against this background, Advanced Driver Assistance Systems (ADAS) have emerged and attracted wide attention in many fields. Specifically, an ADAS mainly uses various sensors to acquire information inside and outside the vehicle and, after various calculations and processing, alerts the driver to dangers that may occur, so as to reduce the rate of traffic accidents. At the current technical level, however, owing to limitations in image recognition and processing, ADAS products often have problems in lane line detection and can rarely run stably for long periods over long distances, which limits the possibility of applying such systems in particular environments.
In recent years, many studies on lane line detection methods have been carried out at home and abroad, and they fall mainly into two types: feature-based and model-based. Pueraria shurica et al. propose increasing the contrast between the lane line and the road by adjusting the brightness, gain and exposure time of a CCD, then continuously selecting and classifying seed points in the image, applying a Hough transform to the seed points, and finally extracting the lane line through angle constraints; Maling et al. propose using a projection transformation to convert the original image into a top view, and then recognizing the lane with a circular-curve lane model and a density-based Hough transform; Bischhao et al. propose that, when identifying the region of interest on the road surface, a boundary-tracking detection algorithm based on fuzzy clustering be used to identify the lane lines.
However, those skilled in the art find that the above methods have low real-time performance and accuracy, and they have not been widely accepted in the industry as lane line detection methods. In summary, how to provide a brand-new lane line detection method on the basis of the prior art that overcomes its various deficiencies has become a problem to be solved by technical staff in the field.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, an object of the present invention is to provide a lane line detection method based on OpenCV, which is as follows.
An OpenCV-based lane line detection method comprises the following steps:
S1, image preprocessing: preprocess the original lane line image obtained by the vehicle-mounted camera to obtain a preprocessed lane line image;
S2, image edge extraction: perform edge extraction on the preprocessed lane line image using the Canny algorithm to obtain an edge-extracted lane line image;
S3, Hough transform processing: perform a Hough transform on the edge-extracted lane line image using an improved Hough transform mode, completing the lane line detection.
Preferably, the image preprocessing of S1 includes the following steps:
S11, image ROI extraction: crop and select the region of interest of the image captured by the vehicle-mounted camera using the cvSetImageROI function in OpenCV;
S12, image graying: convert the original three-channel color RGB lane line image by establishing a correspondence between the luminance value and the three color components R, G, B, express the gray value of each pixel in the image by this luminance value, and complete the graying of the whole original lane line image with the cvCvtColor function in OpenCV to obtain a single-channel grayscale lane line image;
S13, image noise reduction and threshold segmentation: eliminate noise in the lane line image with the median filtering function in OpenCV, and perform threshold segmentation on the lane line image with the OTSU algorithm to obtain the preprocessed lane line image.
Preferably, the image edge extraction of S2 includes the following steps:
S21, smooth the image with a Gaussian filter: a weighted average is taken over the lane line image, the value of any pixel being the weighted average of its own value and the values of the other pixels in its neighborhood, using the one-dimensional Gaussian distribution

G(x) = (1 / (√(2π)·σ)) · exp(−x² / (2σ²)),

where G(x) is the Gaussian weight (a probability) and σ is the Gaussian radius;
S22, determine the gradient magnitude and direction: perform edge detection calculations in the horizontal and vertical directions on the lane line image with the Sobel operator to obtain the gradient magnitude and gradient direction of the image;
s23, image edge quantization, namely performing non-maximum suppression on the gradient amplitude along the gradient direction to finish the image edge quantization;
S24, edge thinning: select any pixel on the lane line image and compare it with its two neighboring pixels along the gradient direction; if the center pixel is the maximum of the three, it is retained, otherwise it is set to 0;
S25, edge connection: detect and connect edges with a dual-threshold algorithm. Two coefficients are selected as thresholds, a high threshold TH and a low threshold TL, with TH = 0.2 and TL = 0.1; pixels below the low threshold are marked 0 and discarded, pixels above the high threshold are marked 1, and pixels between the two thresholds are retained only if they connect to a pixel already marked 1.
Preferably, the determining the gradient magnitude and direction in S22 includes the following steps:
S221, compute the gradients by finite differences of first-order partial derivatives: convolve the input lane line image I with the horizontal and vertical Sobel operators to obtain dx and dy, using the formulas

dx = I ∗ Sx, dy = I ∗ Sy,

where Sx = [−1 0 1; −2 0 2; −1 0 1] is the horizontal Sobel operator and Sy = [−1 −2 −1; 0 0 0; 1 2 1] is the vertical Sobel operator;
S222, further calculate the gradient magnitude of the image as

M = √(dx² + dy²);

S223, determine the gradient direction of the image from the two gradient components, the angle of the gradient direction being

θ = arctan(dy / dx).
Preferably, in the image edge quantization of S23, the quantization criteria are as follows:
the horizontal edges of the image are quantized with the criterion
θM ∈ [0°, 22.5°) ∪ (−22.5°, 0) ∪ (157.5°, 180°] ∪ (−180°, −157.5°];
the 135° edges of the image are quantized with the criterion
θM ∈ [22.5°, 67.5°) ∪ [−157.5°, −112.5°);
the vertical edges of the image are quantized with the criterion
θM ∈ [67.5°, 112.5°] ∪ [−112.5°, −67.5°];
the 45° edges of the image are quantized with the criterion
θM ∈ [112.5°, 157.5°] ∪ [−67.5°, −22.5°].
Preferably, in the Hough transform processing of S3, the improved Hough transform mode is: edge points are randomly selected from the binary distribution map for the Hough transform.
Preferably, the Hough transform process of S3 includes the following steps:
S31, randomly select an edge point from the edge-extracted lane line image; if the point has already been marked as lying on some straight line, randomly select another point from the remaining edge points, until all edge points have been processed;
S32, perform the Hough transform on the selected edge point and update the accumulator;
S33, select the point with the maximum accumulated result; if the accumulated result exceeds the set threshold TH, proceed to the next step, otherwise return to S31;
S34, taking the point selected in S33 as a starting point, move along the line direction to find the two end points of the line, then calculate the length of the segment; if the length is greater than the set threshold TH, the segment is regarded as a lane line; then return to S31.
The advantages of the invention are mainly embodied in the following aspects:
the lane line detection method based on the OpenCV provided by the invention is based on the OpenCV, well completes detection and identification of dotted lines in roads and implementation, greatly overcomes the defects of various existing detection methods in the aspects of real-time performance and accuracy, effectively realizes real-time detection and accurate identification of lane lines in a vehicle driving environment, meets the increasing intelligent identification requirement of roads, and lays a solid foundation for improving the road management efficiency and ensuring the smooth operation of urban traffic.
In addition, the invention also provides reference for other related problems in the same field, can be expanded and extended on the basis of the reference, is applied to other technical schemes related to lane line detection in the same field, and has very wide application prospect.
The following detailed description of the embodiments of the present invention is provided in connection with the accompanying drawings to facilitate understanding of the technical solutions of the present invention.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram showing a comparison of lane line images before and after ROI extraction;
FIG. 3 is a schematic illustration of a processed lane line image;
FIG. 4 is a schematic diagram of a lane line image after edge extraction;
FIG. 5 is a schematic diagram of a lane line image processed by using an improved Hough transform;
fig. 6 is a schematic diagram of the recognition effect of the present invention.
Detailed Description
The invention provides an OpenCV-based lane line detection method which, based on OpenCV, completes the detection and identification of lane lines in the road and greatly improves on the deficiencies of existing detection methods in real-time performance and accuracy.
As shown in fig. 1, a lane line detection method based on OpenCV includes the following steps:
and S1, preprocessing the image, namely preprocessing the original lane line image obtained by the vehicle-mounted camera to obtain a preprocessed lane line image.
The detailed operation of this step is as follows.
S11, image ROI extraction. The lane line picture obtained by the vehicle-mounted camera is generally a 616 × 808 pixel image with relatively high resolution, and it contains a large amount of non-lane-line image data such as sky, trees, buildings and vehicles. To reduce the time consumed by lane line detection and to improve its accuracy and efficiency, the region to be processed must be framed out of the image: the cvSetImageROI function in OpenCV is used to crop and select the region of interest (ROI) of the picture captured by the vehicle-mounted camera. The result is shown in fig. 2.
S12, image graying. The image captured by the vehicle-mounted camera is a picture in the color RGB (red, green, blue) model, but because the RGB model is a hardware- and device-dependent color space, the colors it describes are neither fully intuitive nor visually uniform. Moreover, because a lane line in a grayscale image differs strongly from the road-surface background, the system can extract it quickly and completely; the original three-channel color RGB lane line image therefore needs to be converted.
A weighted average method is used here: according to the conversion relation between the RGB and HSV color spaces and the relative sensitivities of the three different photoreceptor cells of the human eye, a correspondence is established between the luminance value and the three color components R, G, B; the gray value of each pixel in the image is expressed by this luminance value, and the cvCvtColor function in OpenCV completes the graying of the whole original lane line image, yielding a single-channel grayscale lane line image.
S13, image noise reduction and threshold segmentation. Median filtering, the nonlinear smoothing filter of the spatial filtering technique provided by the median filtering function in OpenCV, is used to eliminate noise in the lane line image, so that boundaries are not blurred while the noise is removed. To simplify the subsequent processing steps and reduce the amount of calculation, the OTSU algorithm is used to threshold-segment the lane line image, with a threshold value of 160, yielding the preprocessed lane line image. The processed lane line image is shown in fig. 3.
And S2, extracting image edges, and performing edge extraction processing on the preprocessed lane line image by adopting Canny to obtain the lane line image after edge extraction.
The detailed operation of this step is as follows.
S21, smooth the image with a Gaussian filter: a weighted average is taken over the lane line image, the value of any pixel being the weighted average of its own value and the values of the other pixels in its neighborhood. The one-dimensional Gaussian distribution formula is used to realize the smoothing:

G(x) = (1 / (√(2π)·σ)) · exp(−x² / (2σ²)),

where G(x) is the Gaussian weight (a probability) and σ is the Gaussian radius.
S22, determine the gradient magnitude and direction: perform edge detection calculations in the horizontal and vertical directions on the lane line image with the Sobel operator to obtain the gradient magnitude and gradient direction of the image. Specifically:
S221, calculate the gradient magnitude and direction by finite differences of first-order partial derivatives: convolve the input lane line image I with the Sobel operators,

dx = I ∗ Sx, dy = I ∗ Sy,

where Sx = [−1 0 1; −2 0 2; −1 0 1] is the horizontal Sobel operator and Sy = [−1 −2 −1; 0 0 0; 1 2 1] is the vertical Sobel operator.
S222, further calculate the gradient magnitude of the image as

M = √(dx² + dy²);

to simplify the calculation, the gradient magnitude can be approximated as

M ≈ |dx| + |dy|.

S223, determine the gradient direction of the image from the two gradient components, the angle of the gradient direction being

θ = arctan(dy / dx).
and S23, image edge quantization, namely performing non-maximum suppression on the gradient amplitude along the gradient direction to finish the image edge quantization.
The quantization standard is as follows,
when the horizontal edges of the image are quantized, i.e. the gradient direction is vertical, the criterion is
θM ∈ [0°, 22.5°) ∪ (−22.5°, 0) ∪ (157.5°, 180°] ∪ (−180°, −157.5°];
when the 135° edges of the image are quantized, i.e. the gradient direction is 45°, the criterion is
θM ∈ [22.5°, 67.5°) ∪ [−157.5°, −112.5°);
when the vertical edges of the image are quantized, i.e. the gradient direction is horizontal, the criterion is
θM ∈ [67.5°, 112.5°] ∪ [−112.5°, −67.5°];
when the 45° edges of the image are quantized, i.e. the gradient direction is 135°, the criterion is
θM ∈ [112.5°, 157.5°] ∪ [−67.5°, −22.5°].
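A hypothetical helper illustrating the quantization table above: it folds an angle θM in (−180°, 180°] into [0°, 180°) and assigns one of the four edge classes, with bin boundaries taken from the criteria listed above (the function name and labels are illustrative, not part of the invention):

```python
def quantize_direction(theta):
    """Map a gradient angle in degrees to one of the four edge classes."""
    t = theta % 180.0            # fold the negative half-ranges onto [0, 180)
    if t < 22.5 or t >= 157.5:
        return "horizontal"      # theta_M near 0 / 180 deg
    if t < 67.5:
        return "135-degree"      # theta_M in [22.5, 67.5)
    if t <= 112.5:
        return "vertical"        # theta_M in [67.5, 112.5]
    return "45-degree"           # theta_M in (112.5, 157.5)

print([quantize_direction(a) for a in (5, -170, 45, 90, -45)])
```

Folding by 180° works because each class in the table is the union of an interval and its 180°-shifted mirror.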
S24, edge thinning: select any pixel on the lane line image and compare it with its two neighboring pixels along the gradient direction; if the center pixel is the maximum of the three it is retained, otherwise it is set to 0. In this way non-maximum values are suppressed, the points with the largest local gradient are kept, and a thinned edge is obtained.
S25, edge connection: detect and connect edges with a dual-threshold algorithm. Two coefficients are selected as thresholds, a high threshold TH and a low threshold TL, with TH = 0.2 and TL = 0.1; pixels below the low threshold are marked 0 and discarded, pixels above the high threshold are marked 1, and pixels between the two thresholds are retained only if they connect to a pixel already marked 1.
The image after edge extraction is shown in fig. 4.
S3, Hough transform processing: a Hough transform is performed on the edge-extracted lane line image using an improved Hough transform mode, completing the lane line detection. The improved Hough transform mode is that edge points are randomly selected from the binary distribution map for the Hough transform.
The detailed operation of this step is as follows.
S31, randomly select an edge point from the edge-extracted lane line image; if the point has already been marked as lying on some straight line, randomly select another point from the remaining edge points, until all edge points have been processed;
S32, perform the Hough transform on the selected edge point and update the accumulator;
S33, select the point with the maximum accumulated result; if the accumulated result exceeds the set threshold TH, proceed to the next step, otherwise return to S31;
S34, taking the point selected in S33 as a starting point, move along the line direction to find the two end points of the line, then calculate the length of the segment; if the length is greater than the set threshold TH, the segment is regarded as a lane line; then return to S31.
In this embodiment, the Hough transform processing of S3 completes the Hough transform of the image mainly by means of the cv2.HoughLinesP function in OpenCV; the detected lane lines are shown in fig. 5. The final recognition effect of the scheme is shown in fig. 6.
The invention effectively realizes the real-time detection and accurate identification of the lane line under the driving environment of the vehicle, meets the increasing intelligent identification requirement of the highway, and lays a solid foundation for improving the management efficiency of the highway and ensuring the smooth operation of urban traffic.
In addition, the invention also provides reference for other related problems in the same field, can be expanded and extended on the basis of the reference, is applied to other technical schemes related to lane line detection in the same field, and has very wide application prospect.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein, and any reference signs in the claims are not intended to be construed as limiting the claim concerned.
Furthermore, it should be understood that although this description is set out in terms of embodiments, not every embodiment contains only a single technical solution; the description is presented in this way merely for clarity, and those skilled in the art should treat the description as a whole, the technical solutions in the embodiments being combinable as appropriate to form other embodiments understandable to those skilled in the art.