CN111652033A - Lane line detection method based on OpenCV - Google Patents


Info

Publication number
CN111652033A
Authority
CN
China
Prior art keywords
image
lane line
edge
opencv
points
Prior art date
Legal status
Pending
Application number
CN201911271789.3A
Other languages
Chinese (zh)
Inventor
王凤石
Current Assignee
Beijing Aoyikesi Technology Co ltd
Original Assignee
Suzhou Aecs Automotive Electronics Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Aecs Automotive Electronics Co ltd
Priority to CN201911271789.3A
Publication of CN111652033A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 — Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/20 — Image preprocessing
    • G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/40 — Extraction of image or video features
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an OpenCV-based lane line detection method comprising the following steps: S1, preprocessing the original lane line image obtained by the vehicle-mounted camera to obtain a preprocessed lane line image; S2, performing edge extraction on the preprocessed lane line image with the Canny operator to obtain an edge-extracted lane line image; and S3, applying an improved Hough transform to the edge-extracted lane line image to complete lane line detection. Based on OpenCV, the method reliably detects and identifies dashed lane lines in the road, substantially remedies the shortcomings of existing detection methods in real-time performance and accuracy, and achieves real-time detection and accurate identification of lane lines in the vehicle driving environment.

Description

Lane line detection method based on OpenCV
Technical Field
The invention relates to a lane line detection method, in particular to a lane line detection method based on OpenCV, and belongs to the technical field of artificial intelligence.
Background
As vehicle ownership rises and the automobile industry continues to develop worldwide, traffic pressure in cities is growing rapidly, and traffic safety has increasingly become a focus of global attention.
Against this background, Advanced Driver Assistance Systems (ADAS) have emerged and attracted wide attention in many fields. An ADAS uses various sensors to acquire information inside and outside the vehicle and, after computation and processing, alerts the driver to dangers that may occur, thereby reducing the rate of traffic accidents. At the current technical level, however, limitations in image recognition mean that ADAS implementations often have trouble with lane line detection and struggle to run stably for long periods over long distances, which restricts their applicability in specific environments.
In recent years, many studies of lane line detection methods have been carried out at home and abroad, falling mainly into two categories: feature-based and model-based. Pueraria shurica et al. proposed increasing the contrast between the lane line and the road by adjusting the brightness, gain, and exposure time of a CCD, then iteratively selecting and classifying seed points in the image, applying a Hough transform to the seed points, and finally extracting the lane line through angle constraints; Maling et al. proposed converting the original image into a top view by projective transformation and then recognizing the lane with a circular-curve lane model and a density-based Hough transform; Bischhao et al. proposed identifying the lane lines with a boundary-tracking detection algorithm based on fuzzy clustering when identifying the road-surface region of interest.
However, those skilled in the art have found that the above methods offer limited real-time performance and accuracy, and none has been widely adopted in industry as a lane line detection method. In summary, how to provide a new lane line detection method that overcomes these deficiencies of the prior art has become a problem to be solved by those skilled in the field.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present invention is to provide an OpenCV-based lane line detection method, described as follows.
An OpenCV-based lane line detection method comprises the following steps:
S1, image preprocessing: preprocessing the original lane line image obtained by the vehicle-mounted camera to obtain a preprocessed lane line image;
S2, image edge extraction: performing edge extraction on the preprocessed lane line image with the Canny operator to obtain an edge-extracted lane line image;
S3, Hough transform processing: applying an improved Hough transform to the edge-extracted lane line image to complete lane line detection.
Preferably, the image preprocessing of S1 comprises the following steps:
S11, extracting the image ROI: cropping and selecting the region of interest of the image shot by the vehicle-mounted camera with the cvSetImageROI function in OpenCV;
S12, image graying: converting the original three-channel color RGB lane line image by establishing a correspondence between the luminance value H and the three color channels R, G, B, expressing the gray value of each pixel point by the H luminance value, and completing the graying of the whole original lane line image with the cvCvtColor function in OpenCV to obtain a single-channel lane line image in HSV space;
S13, image noise reduction and threshold segmentation: removing noise in the lane line image with OpenCV's median filtering function, and threshold-segmenting the lane line image with the OTSU algorithm to obtain the preprocessed lane line image.
Preferably, the image edge extraction of S2 comprises the following steps:
S21, smoothing the image with a Gaussian filter: the lane line image is weighted-averaged, so that the value of any pixel point is a weighted average of its own value and the values of the other pixel points in its neighborhood, using the one-dimensional Gaussian
G(x) = (1/(√(2π)σ)) · e^(-x²/(2σ²)),
where G(x) is the Gaussian probability value and σ is the Gaussian radius (standard deviation);
S22, determining the gradient amplitude and direction: computing edge detection in the horizontal and vertical directions with the Sobel operator on the lane line image to obtain the gradient amplitude and gradient direction of the corresponding image;
S23, image edge quantization: performing non-maximum suppression on the gradient amplitude along the gradient direction to complete image edge quantization;
S24, edge thinning: for any pixel point on the lane line image, comparing the center of its neighborhood with the two adjacent pixel points along the gradient direction; if the center pixel is the maximum, the point is retained, otherwise the center pixel is set to 0;
S25, edge connection: detecting and connecting edges with a dual-threshold algorithm, choosing two coefficients as thresholds, a high threshold TH and a low threshold TL, with TH = 0.2 and TL = 0.1; pixel points below the low threshold are marked 0 and discarded, and points above the high threshold are marked 1 as edge points.
Preferably, determining the gradient amplitude and direction in S22 comprises the following steps:
S221, performing convolution with the Sobel operator on the lane line image: dx and dy are obtained by convolving the input image f(x, y) with the Sobel horizontal and vertical operators,
dx = f(x, y) ⊗ Sx,  dy = f(x, y) ⊗ Sy,
Sx = [-1 0 1; -2 0 2; -1 0 1],  Sy = [-1 -2 -1; 0 0 0; 1 2 1],
where Sx is the Sobel horizontal operator and Sy is the Sobel vertical operator;
S222, further calculating the gradient amplitude of the corresponding image,
M(x, y) = √(dx² + dy²);
S223, determining the gradient direction of the corresponding image from dx and dy, the angle of the gradient direction being
θ = arctan(dy/dx).
Preferably, the image edges are quantized at S23 according to the following quantization criteria:
a horizontal edge of the image is quantized when
θM∈[0,22.5)∪(-22.5,0)∪(157.5,180]∪(-180,-157.5];
a 135° edge of the image is quantized when
θM∈[22.5,67.5)∪[-157.5,-112.5);
a vertical edge of the image is quantized when
θM∈[67.5,112.5]∪[-112.5,-67.5];
a 45° edge of the image is quantized when
θM∈[112.5,157.5]∪[-67.5,-22.5].
preferably, in the Hough transform processing of S3, the improved Hough transform mode is: and randomly selecting edge points in the binary distribution map to perform Hough transformation.
Preferably, the Hough transform processing of S3 comprises the following steps:
S31, randomly selecting an edge point from the edge-extracted lane line image; if the point has already been marked as lying on some line, another edge point is randomly drawn from the remaining edge points, until all edge points have been extracted;
S32, performing the Hough transform on the extracted edge point and updating the accumulator;
S33, selecting the point with the maximum accumulated result; if the accumulated result is higher than the set high threshold TH, proceeding to the next step, otherwise returning to S31;
S34, taking the point selected in S33 as a starting point, following the direction of the line to find its two end points, then calculating the length of the segment; if the length is greater than the set high threshold TH, the segment is regarded as a lane line; then returning to S31.
The advantages of the invention are mainly embodied in the following aspects:
The OpenCV-based lane line detection method provided by the invention reliably detects and identifies dashed lane lines in the road, substantially remedies the shortcomings of existing detection methods in real-time performance and accuracy, achieves real-time detection and accurate identification of lane lines in the vehicle driving environment, meets the growing demand for intelligent road recognition, and lays a solid foundation for improving road management efficiency and keeping urban traffic running smoothly.
In addition, the invention provides a reference for other related problems in the same field; it can be extended on this basis and applied to other technical schemes related to lane line detection, and thus has a very broad application prospect.
The technical solution of the invention is described in detail below with reference to the accompanying drawings, so that it can be more easily understood and grasped.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram showing a comparison of lane line images before and after ROI extraction;
FIG. 3 is a schematic illustration of a processed lane line image;
FIG. 4 is a schematic diagram of a lane line image after edge extraction;
FIG. 5 is a schematic diagram of a lane line image processed by using an improved Hough transform;
fig. 6 is a schematic diagram of the recognition effect of the present invention.
Detailed Description
The invention provides an OpenCV-based lane line detection method that reliably detects and identifies dashed lane lines in the road and substantially remedies the shortcomings of existing detection methods in real-time performance and accuracy.
As shown in fig. 1, a lane line detection method based on OpenCV includes the following steps:
and S1, preprocessing the image, namely preprocessing the original lane line image obtained by the vehicle-mounted camera to obtain a preprocessed lane line image.
The detailed operation of this step is as follows.
S11, extracting the image ROI. The lane line picture obtained by the vehicle-mounted camera is typically a 616 × 808 pixel image with relatively high resolution, and it contains a large amount of image data unrelated to the lane lines, such as sky, trees, buildings, and vehicles. To reduce the time consumed by lane line detection and improve its accuracy and efficiency, the region to be processed must be framed out of the image: the cvSetImageROI function in OpenCV is used to crop and select the region of interest (ROI) of the picture shot by the vehicle-mounted camera. The result is shown in fig. 2.
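As an illustrative sketch of this ROI step: the patent names the legacy C function cvSetImageROI, but with OpenCV's modern Python bindings the same crop is a NumPy slice. The frame dimensions follow the 616 × 808 figure from the text (interpreted here as rows × columns), while the crop fraction is an assumed value, not one given in the patent.

```python
import numpy as np

# Synthetic stand-in for a camera frame; 616 x 808 as mentioned in the text
# (rows x columns is an interpretation, not specified by the patent).
frame = np.zeros((616, 808, 3), dtype=np.uint8)

def extract_roi(img, top_frac=0.5):
    """Keep only the lower part of the frame, where lane lines appear.

    top_frac (assumed value) is the fraction of rows discarded from the
    top; with the Python API this slice plays the role of cvSetImageROI.
    """
    h = img.shape[0]
    return img[int(h * top_frac):, :]

roi = extract_roi(frame)  # sky/trees/buildings region removed
```

Slicing returns a view of the original array, so the crop itself costs no copying.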
S12, image graying. The image shot by the vehicle-mounted camera is a picture in the color RGB (red, green, blue) model, but because the RGB model is a hardware-dependent color space, the colors it describes are not fully intuitive and are not visually uniform. Moreover, in a grayscale image the lane line differs markedly from the road-surface background, so the system can extract the lane line quickly and completely; the original three-channel color RGB lane line image therefore needs to be converted.
A weighted-average method is used here: a correspondence between the luminance value H and the three color channels R, G, B is established according to the transformation between the RGB and HSV color spaces and the relative sensitivities of the three kinds of photoreceptor cells in the human eye. The gray value of each pixel point is expressed by the H luminance value, and the graying of the whole original lane line image is completed with the cvCvtColor function in OpenCV, yielding a single-channel lane line image in HSV space.
S13, image noise reduction and threshold segmentation. Median filtering, a nonlinear smoothing filter from the spatial-filtering family available through OpenCV's median filtering function, is used to remove noise in the lane line image; it eliminates noise without blurring the boundaries. To simplify subsequent processing and reduce computation, the OTSU algorithm is then used to threshold-segment the lane line image, taking the value 160, to obtain the preprocessed lane line image. The processed lane line image is shown in fig. 3.
S2, image edge extraction: edge extraction is performed on the preprocessed lane line image with the Canny operator to obtain the edge-extracted lane line image.
The detailed operation of this step is as follows.
S21, smoothing the image with a Gaussian filter. The lane line image is weighted-averaged: the value of any pixel point is a weighted average of its own value and the values of the other pixel points in its neighborhood. The one-dimensional Gaussian distribution formula is adopted for the smoothing,
G(x) = (1/(√(2π)σ)) · e^(-x²/(2σ²)),
where G(x) is the Gaussian probability value and σ is the Gaussian radius (standard deviation).
S22, determining the gradient amplitude and direction. Edge detection is computed in the horizontal and vertical directions with the Sobel operator on the lane line image to obtain the gradient amplitude and gradient direction of the corresponding image. Specifically:
S221, the gradient amplitude and direction are calculated with first-order finite differences of the partial derivatives: dx and dy are obtained by convolving the input image f(x, y) with the Sobel horizontal and vertical operators,
dx = f(x, y) ⊗ Sx,  dy = f(x, y) ⊗ Sy,
Sx = [-1 0 1; -2 0 2; -1 0 1],  Sy = [-1 -2 -1; 0 0 0; 1 2 1],
where Sx is the Sobel horizontal operator and Sy is the Sobel vertical operator.
S222, the gradient amplitude of the corresponding image is then calculated as
M(x, y) = √(dx² + dy²).
To simplify the calculation, the gradient amplitude can be approximated as
M(x, y) ≈ |dx| + |dy|.
S223, the gradient direction of the corresponding image is determined from dx and dy; the angle of the gradient direction is
θ = arctan(dy/dx).
and S23, image edge quantization, namely performing non-maximum suppression on the gradient amplitude along the gradient direction to finish the image edge quantization.
The quantization standard is as follows,
when the horizontal edge of the image is quantized, i.e. the gradient direction is vertical, the quantization scale is,
θM∈[0,22.5)∪(-22.5·0)∪(157.5,180]∪(-180,157.5];
at 135 deg. edge quantization of the image, i.e. 45 deg. gradient direction, the quantization criterion is,
θM∈[22.5,67.5)∪[-157.5,-112.5);
when the vertical edge of the image is quantized, i.e. the gradient direction is horizontal, the quantization criterion is,
θM∈[67.5,112.5]∪[-112.5,-67.5];
at 45 deg. edge quantization of the image, i.e. a gradient direction of 135 deg., the quantization criterion is,
θM∈[112.5,157.5]∪[-67.5,-22.5]。
s24, edge thinning, selecting any pixel point on the lane line image, comparing the center of the field with two adjacent pixel points along the gradient direction, if the center pixel of the pixel point is the maximum value, keeping, otherwise, setting the center pixel of the pixel point to 0, thus inhibiting the non-maximum value, keeping the point with the maximum local gradient, and obtaining the thinned edge.
S25, edge connection, namely, detecting and connecting edges by using a dual-threshold algorithm, selecting two coefficients as thresholds, wherein one coefficient is a high threshold TH, the other coefficient is a low threshold TL, TH =0.2 and TL =0.1 are taken, and marking pixel points smaller than the thresholds as 0 and discarding the pixel points; points greater than the threshold are marked as 1.
The image after edge extraction is shown in fig. 4.
S3, Hough transform processing: an improved Hough transform is applied to the edge-extracted lane line image to complete lane line detection. In the improved Hough transform, edge points are randomly selected from the binary distribution map for the Hough transform.
The detailed operation of this step is as follows.
S31, randomly selecting an edge point from the edge-extracted lane line image; if the point has already been marked as lying on some line, another edge point is randomly drawn from the remaining edge points, until all edge points have been extracted;
S32, performing the Hough transform on the extracted edge point and updating the accumulator;
S33, selecting the point with the maximum accumulated result; if the accumulated result is higher than the set high threshold TH, proceeding to the next step, otherwise returning to S31;
S34, taking the point selected in S33 as a starting point, following the direction of the line to find its two end points, then calculating the length of the segment; if the length is greater than the set high threshold TH, the segment is regarded as a lane line; then returning to S31.
In this embodiment, the Hough transform processing of S3 is completed mainly with the cv2.HoughLinesP function in OpenCV; the detected lane lines are shown in fig. 5. The final recognition effect of the scheme is shown in fig. 6.
The invention achieves real-time detection and accurate identification of lane lines in the vehicle driving environment, meets the growing demand for intelligent road recognition, and lays a solid foundation for improving road management efficiency and keeping urban traffic running smoothly.
In addition, the invention provides a reference for other related problems in the same field; it can be extended on this basis and applied to other technical schemes related to lane line detection, and thus has a very broad application prospect.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein, and any reference signs in the claims are not intended to be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (7)

1. An OpenCV-based lane line detection method, characterized by comprising the following steps:
S1, image preprocessing: preprocessing the original lane line image obtained by the vehicle-mounted camera to obtain a preprocessed lane line image;
S2, image edge extraction: performing edge extraction on the preprocessed lane line image with the Canny operator to obtain an edge-extracted lane line image;
S3, Hough transform processing: applying an improved Hough transform to the edge-extracted lane line image to complete lane line detection.
2. The OpenCV-based lane line detection method according to claim 1, wherein the image preprocessing of S1 comprises:
S11, extracting the image ROI: cropping and selecting the region of interest of the image shot by the vehicle-mounted camera with the cvSetImageROI function in OpenCV;
S12, image graying: converting the original three-channel color RGB lane line image by establishing a correspondence between the luminance value H and the three color channels R, G, B, expressing the gray value of each pixel point by the H luminance value, and completing the graying of the whole original lane line image with the cvCvtColor function in OpenCV to obtain a single-channel lane line image in HSV space;
S13, image noise reduction and threshold segmentation: removing noise in the lane line image with OpenCV's median filtering function, and threshold-segmenting the lane line image with the OTSU algorithm to obtain the preprocessed lane line image.
3. The OpenCV-based lane line detection method according to claim 1, wherein the image edge extraction of S2 comprises:
S21, smoothing the image with a Gaussian filter: the lane line image is weighted-averaged, so that the value of any pixel point is a weighted average of its own value and the values of the other pixel points in its neighborhood, using the one-dimensional Gaussian
G(x) = (1/(√(2π)σ)) · e^(-x²/(2σ²)),
where G(x) is the Gaussian probability value and σ is the Gaussian radius (standard deviation);
S22, determining the gradient amplitude and direction: computing edge detection in the horizontal and vertical directions with the Sobel operator on the lane line image to obtain the gradient amplitude and gradient direction of the corresponding image;
S23, image edge quantization: performing non-maximum suppression on the gradient amplitude along the gradient direction to complete image edge quantization;
S24, edge thinning: for any pixel point on the lane line image, comparing the center of its neighborhood with the two adjacent pixel points along the gradient direction; if the center pixel is the maximum, the point is retained, otherwise the center pixel is set to 0;
S25, edge connection: detecting and connecting edges with a dual-threshold algorithm, choosing two coefficients as thresholds, a high threshold TH and a low threshold TL, with TH = 0.2 and TL = 0.1; pixel points below the low threshold are marked 0 and discarded, and points above the high threshold are marked 1 as edge points.
4. The OpenCV-based lane line detection method according to claim 3, wherein determining the gradient amplitude and direction in S22 comprises:
S221, performing convolution with the Sobel operator on the lane line image: dx and dy are obtained by convolving the input image f(x, y) with the Sobel horizontal and vertical operators,
dx = f(x, y) ⊗ Sx,  dy = f(x, y) ⊗ Sy,
Sx = [-1 0 1; -2 0 2; -1 0 1],  Sy = [-1 -2 -1; 0 0 0; 1 2 1],
where Sx is the Sobel horizontal operator and Sy is the Sobel vertical operator;
S222, further calculating the gradient amplitude of the corresponding image,
M(x, y) = √(dx² + dy²);
S223, determining the gradient direction of the corresponding image from dx and dy, the angle of the gradient direction being
θ = arctan(dy/dx).
5. The OpenCV-based lane line detection method according to claim 3, wherein the image edges are quantized at S23 according to the following quantization criteria:
a horizontal edge of the image is quantized when
θM∈[0,22.5)∪(-22.5,0)∪(157.5,180]∪(-180,-157.5];
a 135° edge of the image is quantized when
θM∈[22.5,67.5)∪[-157.5,-112.5);
a vertical edge of the image is quantized when
θM∈[67.5,112.5]∪[-112.5,-67.5];
a 45° edge of the image is quantized when
θM∈[112.5,157.5]∪[-67.5,-22.5].
6. The OpenCV-based lane line detection method according to claim 1, wherein in the Hough transform processing of S3 the improved Hough transform mode is: edge points are randomly selected in the binary distribution map for the Hough transform.
7. The OpenCV-based lane line detection method according to claim 3, wherein the Hough transform processing of S3 comprises:
S31, randomly selecting an edge point from the edge-extracted lane line image; if the point has already been marked as lying on some line, another edge point is randomly drawn from the remaining edge points, until all edge points have been extracted;
S32, performing the Hough transform on the extracted edge point and updating the accumulator;
S33, selecting the point with the maximum accumulated result; if the accumulated result is higher than the set high threshold TH, proceeding to the next step, otherwise returning to S31;
S34, taking the point selected in S33 as a starting point, following the direction of the line to find its two end points, then calculating the length of the segment; if the length is greater than the set high threshold TH, the segment is regarded as a lane line; then returning to S31.
CN201911271789.3A 2019-12-12 2019-12-12 Lane line detection method based on OpenCV Pending CN111652033A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911271789.3A CN111652033A (en) 2019-12-12 2019-12-12 Lane line detection method based on OpenCV

Publications (1)

Publication Number Publication Date
CN111652033A true CN111652033A (en) 2020-09-11

Family

ID=72344458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911271789.3A Pending CN111652033A (en) 2019-12-12 2019-12-12 Lane line detection method based on OpenCV

Country Status (1)

Country Link
CN (1) CN111652033A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314599A (en) * 2011-10-11 2012-01-11 东华大学 Identification and deviation-detection method for lane
CN109360217A (en) * 2018-09-29 2019-02-19 国电南瑞科技股份有限公司 Power transmission and transforming equipment method for detecting image edge, apparatus and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jiang Liangchao et al., "Lane Line Detection Based on OpenCV", Intelligent Vehicle *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112180926A (en) * 2020-09-28 2021-01-05 湖南格兰博智能科技有限责任公司 Linear guide method and system of sweeping robot and sweeping robot
CN112180926B (en) * 2020-09-28 2023-10-03 湖南格兰博智能科技有限责任公司 Linear guiding method and system of sweeping robot and sweeping robot
CN113296095A (en) * 2021-05-21 2021-08-24 东南大学 Target hyperbolic edge extraction method for pulse ground penetrating radar
CN113296095B (en) * 2021-05-21 2023-12-22 东南大学 Target hyperbola edge extraction method for pulse ground penetrating radar
CN113445709A (en) * 2021-07-02 2021-09-28 北京建筑大学 Ceramic tile positioning and paving method and automatic ceramic tile paving equipment
CN116778224A (en) * 2023-05-09 2023-09-19 广州华南路桥实业有限公司 Vehicle tracking method based on video stream deep learning

Similar Documents

Publication Publication Date Title
CN109785291B (en) Lane line self-adaptive detection method
CN109886896B (en) Blue license plate segmentation and correction method
CN108280450B (en) Expressway pavement detection method based on lane lines
Son et al. Real-time illumination invariant lane detection for lane departure warning system
CN108038416B (en) Lane line detection method and system
CN111652033A (en) Lane line detection method based on OpenCV
Wu et al. Lane-mark extraction for automobiles under complex conditions
CN112819094B (en) Target detection and identification method based on structural similarity measurement
CN110210451B (en) Zebra crossing detection method
CN109784344A (en) A kind of non-targeted filtering method of image for ground level mark identification
US20190340446A1 (en) Shadow removing method for color image and application
CN107705254B (en) City environment assessment method based on street view
CN109299674B (en) Tunnel illegal lane change detection method based on car lamp
CN106778551B (en) Method for identifying highway section and urban road lane line
CN106686280A (en) Image repairing system and method thereof
CN102938057B (en) A kind of method for eliminating vehicle shadow and device
CN106887004A (en) A kind of method for detecting lane lines based on Block- matching
CN117094914B (en) Smart city road monitoring system based on computer vision
CN107563331B (en) Road sign line detection method and system based on geometric relationship
Li et al. A lane marking detection and tracking algorithm based on sub-regions
CN107563301A (en) Red signal detection method based on image processing techniques
CN106803073B (en) Auxiliary driving system and method based on stereoscopic vision target
FAN et al. Robust lane detection and tracking based on machine vision
CN113053164A (en) Parking space identification method using look-around image
CN113792583A (en) Obstacle detection method and system based on drivable area and intelligent terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210209

Address after: Room 518, 5th floor, building 4, yard 5, Liangshuihe 2nd Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Beijing aoyikesi Technology Co.,Ltd.

Address before: 215000 south of Lianyang Road, east of Shunfeng Road, Wujiang Economic and Technological Development Zone, Suzhou City, Jiangsu Province

Applicant before: SUZHOU AECS AUTOMOTIVE ELECTRONICS Co.,Ltd.

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200911