CN108052904B - Method and device for acquiring lane line - Google Patents

Method and device for acquiring lane line

Info

Publication number
CN108052904B
Authority
CN
China
Prior art keywords
lane line
probability
pixel
color
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711332712.3A
Other languages
Chinese (zh)
Other versions
CN108052904A (en)
Inventor
于洋
王巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning University of Technology
Original Assignee
Liaoning University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning University of Technology filed Critical Liaoning University of Technology
Priority to CN201711332712.3A priority Critical patent/CN108052904B/en
Publication of CN108052904A publication Critical patent/CN108052904A/en
Application granted granted Critical
Publication of CN108052904B publication Critical patent/CN108052904B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G06F 18/232 - Non-hierarchical techniques
    • G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/12 - Edge-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/155 - Segmentation; Edge detection involving morphological operators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/181 - Segmentation; Edge detection involving edge growing; involving edge linking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 - Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a device for acquiring a lane line. The method comprises the following steps: performing inverse perspective transformation on a region-of-interest image of a road to be detected to obtain a transformed region-of-interest image; determining, by using a YCbCr color space model, the color probability that each pixel in the transformed region-of-interest image belongs to a lane line; normalizing the color probability that each pixel belongs to a lane line to obtain the gray-level probability that each pixel belongs to the lane line; obtaining a lane-line gray-level probability map from these gray-level probabilities; performing region segmentation on the lane-line gray-level probability map by using a clustering algorithm to obtain a binary segmentation result map; and processing the binary segmentation result map to acquire the lane line in the road to be detected. The method and the device improve the detection accuracy of the lane line.

Description

Method and device for acquiring lane line
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a method and a device for acquiring a lane line.
Background
Automotive driver assistance systems are commonly used to guard against driver error and fatigue. When the vehicle drifts out of its lane or is about to collide, the system warns the driver or automatically brings the vehicle back to a safe state, thereby improving driver safety. In a driver assistance system, the accuracy and robustness of lane line detection are therefore critical.
To improve detection accuracy, conventional methods generally detect lane lines either from lane-line color features or from the gray-level features of lane-line edges. A color-feature method, however, requires the lane lines to stand out sharply in color, and real lane lines do not necessarily offer such sharp color contrast, so its accuracy is limited. Likewise, an edge gray-level method performs poorly on roads with many safety markings, whose edge gray-level features closely resemble those of lane lines.
Conventional lane line detection methods therefore do not achieve high detection accuracy.
Disclosure of Invention
The invention provides a method and a device for acquiring a lane line, which are used for improving the detection precision of the lane line.
The embodiment of the invention provides a method for acquiring a lane line, which comprises the following steps:
performing inverse perspective transformation on the region-of-interest image of the road to be detected to obtain a transformed region-of-interest image;
determining, by using a YCbCr color space model, the color probability that each pixel in the transformed region-of-interest image belongs to a lane line;
normalizing the color probability of each pixel belonging to the lane line to obtain the gray level probability of each pixel belonging to the lane line;
obtaining a lane line gray probability map according to the gray probability of each pixel belonging to the lane line;
performing region segmentation on the lane-line gray-level probability map by using a clustering algorithm to obtain a binary segmentation result map;
and processing the binary segmentation result graph, and acquiring a lane line in the road to be detected.
In an embodiment of the present invention, determining, by using the YCbCr color space model, the color probability that each pixel in the transformed region-of-interest map belongs to a lane line comprises:
determining the color probability that each pixel in the transformed region-of-interest map belongs to a lane line according to
$$P(C_i \mid x) = \frac{P(x \mid C_i)\,P(C_i)}{P(x)}, \qquad i = 1, \dots, N-1;$$
wherein P(C_i | x) represents the probability that a pixel belongs to a lane line, N-1 is the number of lane-line color classes, C_i is the i-th color class, the color of each pixel is expressed as x = (Y, C_b, C_r), Y is the luminance component, C_b is the blue chrominance component, C_r is the red chrominance component, P(x) is the color probability of pixel x, P(x | C_i) is the likelihood, and
$$P(C_i) = \frac{\#C_i}{\sum_{k=1}^{N} \#C_k}$$
is the prior probability, wherein #C_i is the number of samples in color class C_i and k indexes the color classes.
In an embodiment of the present invention, normalizing the color probability that each pixel belongs to a lane line to obtain the gray-level probability that each pixel belongs to the lane line comprises:
normalizing, according to the normalization formula (reproduced only as an image in the original publication), the color probability that each pixel belongs to a lane line to obtain the gray-level probability $\tilde{P}(C_i \mid x)$ that the pixel belongs to the lane line, where $\tilde{P}(C_i \mid x)$ denotes the gray-level probability that a pixel belongs to a lane line.
In an embodiment of the present invention, the processing the binary segmentation result map and acquiring a lane line in the road to be detected includes:
performing morphological processing on the binary segmentation result graph to obtain a processed binary segmentation result graph;
and processing the processed binary segmentation result graph, and acquiring a lane line in the road to be detected.
In an embodiment of the present invention, the processing the processed binary segmentation result map, and acquiring a lane line in the road to be detected includes:
detecting an edge curve in the processed binary segmentation result graph by adopting a Sobel algorithm;
performing center line operation processing on the edge curve to obtain a processed center curve;
and performing segmentation processing on the center curve by using a Hough transform, and fitting a plurality of straight line segments obtained after the segmentation processing to obtain the lane line in the road to be detected.
In an embodiment of the present invention, the detecting an edge curve in the processed binary segmentation result graph by using a Sobel algorithm includes:
acquiring a horizontal value and a longitudinal value of each pixel in the processed binary segmentation result image by adopting the Sobel algorithm;
determining the gradient value and the direction of each pixel according to the transverse value and the longitudinal value of each pixel;
determining a pixel as an edge point of the processed binary segmentation result image when its gradient value is larger than a preset threshold and its gradient direction is within a preset range;
and detecting an edge curve in the processed binary segmentation result graph according to each edge point.
In an embodiment of the present invention, after the obtaining of the lane line in the road to be detected, the method includes:
and displaying the lane lines in the road to be detected on a map.
An embodiment of the present invention further provides an apparatus for acquiring a lane line, including:
the transformation unit is used for performing inverse perspective transformation on the region-of-interest image of the road to be detected to obtain a transformed region-of-interest image;
the determination unit is used for determining, by using a YCbCr color space model, the color probability that each pixel in the transformed region-of-interest image belongs to a lane line;
the processing unit is used for normalizing the color probability that each pixel belongs to a lane line to obtain the gray-level probability that each pixel belongs to the lane line;
the generating unit is used for obtaining a lane-line gray-level probability map according to the gray-level probability that each pixel belongs to the lane line;
the segmentation unit is used for performing region segmentation on the lane-line gray-level probability map by using a clustering algorithm to obtain a binary segmentation result map;
and the acquisition unit is used for processing the binary segmentation result map to acquire the lane line in the road to be detected.
In an embodiment of the invention, the determining unit is specifically configured to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line according to
$$P(C_i \mid x) = \frac{P(x \mid C_i)\,P(C_i)}{P(x)}, \qquad i = 1, \dots, N-1;$$
wherein P(C_i | x) represents the probability that a pixel belongs to a lane line, N-1 is the number of lane-line color classes, C_i is the i-th color class, the color of each pixel is expressed as x = (Y, C_b, C_r), Y is the luminance component, C_b is the blue chrominance component, C_r is the red chrominance component, P(x) is the color probability of pixel x, P(x | C_i) is the likelihood, and
$$P(C_i) = \frac{\#C_i}{\sum_{k=1}^{N} \#C_k}$$
is the prior probability, wherein #C_i is the number of samples in color class C_i and k indexes the color classes.
In an embodiment of the present invention, the processing unit is specifically configured to normalize, according to the normalization formula (reproduced only as an image in the original publication), the color probability that each pixel belongs to a lane line to obtain the gray-level probability $\tilde{P}(C_i \mid x)$ that the pixel belongs to the lane line.
In an embodiment of the present invention, the obtaining unit is specifically configured to perform morphological processing on the binary segmentation result map to obtain a processed binary segmentation result map; and processing the processed binary segmentation result graph, and acquiring a lane line in the road to be detected.
In an embodiment of the present invention, the obtaining unit is specifically configured to detect an edge curve in the processed binary segmentation result map by using a Sobel algorithm; perform center-line processing on the edge curve to obtain a processed center curve; and perform segmentation processing on the center curve by using a Hough transform, and fit a plurality of straight line segments obtained after the segmentation processing to obtain the lane line in the road to be detected.
In an embodiment of the present invention, the obtaining unit is specifically configured to obtain, by using the Sobel algorithm, the horizontal and vertical derivative values of each pixel in the processed binary segmentation result map; determine the gradient value and direction of each pixel from its horizontal and vertical values; determine a pixel as an edge point of the processed binary segmentation result map when its gradient value is larger than a preset threshold and its gradient direction is within a preset range; and detect an edge curve in the processed binary segmentation result map from the edge points.
In an embodiment of the present invention, the lane line acquiring device further includes a display unit;
and the display unit is used for displaying the lane lines in the road to be detected on a map.
The embodiment of the invention provides a method for acquiring a lane line: first, inverse perspective transformation is performed on the region-of-interest image of the road to be detected to obtain a transformed region-of-interest image; a YCbCr color space model is used to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line; the color probabilities are normalized to obtain the gray-level probability that each pixel belongs to the lane line; a lane-line gray-level probability map is obtained from these gray-level probabilities; region segmentation is performed on the lane-line gray-level probability map with a clustering algorithm to obtain a binary segmentation result map; and the binary segmentation result map is processed to acquire the lane line in the road to be detected, thereby improving the accuracy of lane line detection.
Drawings
Fig. 1 is a schematic diagram of a lane line acquisition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a camera position provided by an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a size dimension of a captured frame image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a region of interest map before and after inverse perspective transformation of the region of interest map provided by the embodiment of the present invention;
fig. 5 is a schematic diagram of a binary segmentation result graph according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of processing a binary segmentation result graph according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a binary segmentation result graph after morphological processing according to an embodiment of the present invention;
FIG. 8 is a schematic illustration of a processed center curve according to an embodiment of the present invention;
fig. 9 is a schematic diagram of two lane lines obtained after tracking according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a lane line acquisition device according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of another lane line acquisition device according to an embodiment of the present invention.
Detailed Description
Reference will now be made to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the prior art, lane lines are generally detected either from lane-line color features or from the gray-level features of lane-line edges. A color-feature method, however, requires the lane lines to stand out sharply in color, and real lane lines do not necessarily offer such sharp color contrast, so its accuracy is limited. Likewise, an edge gray-level method performs poorly on roads with many safety markings, whose edge gray-level features closely resemble those of lane lines, so the detected lane lines are inaccurate. To improve the accuracy of lane line detection, an embodiment of the invention provides a lane line acquisition method: first, inverse perspective transformation is performed on the region-of-interest image of the road to be detected to obtain a transformed region-of-interest image; a YCbCr color space model is used to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line; these color probabilities are normalized to obtain the gray-level probability that each pixel belongs to the lane line; a lane-line gray-level probability map is built from the gray-level probabilities; region segmentation is performed on the probability map with a clustering algorithm to obtain a binary segmentation result map; and the binary segmentation result map is processed to acquire the lane line in the road to be detected, thereby improving the accuracy of lane line detection.
The technical solution of the present invention will be described below with specific examples. The following specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a lane line acquisition method according to an embodiment of the present invention. The method may be implemented by a lane line acquisition device, which may be a stand-alone device or may be integrated into another device. As shown in fig. 1, the lane line acquisition method may include:
s101, performing inverse perspective transformation on the interesting region image in the road to be detected to obtain a transformed interesting region image.
After an original image of the road to be detected is captured, the sky area is removed from the original image to obtain a region-of-interest map; the region of interest is then subjected to inverse perspective transformation. For example, the camera parameters can be used to convert the view from the camera perspective to a bird's-eye view, in which the lane lines after inverse perspective transformation are parallel and of equal width, and the lanes can then be detected using a filter or geometric constraints. The inverse perspective transformation maps the road image from the image coordinate system into the world coordinate system. Referring to fig. 2 and fig. 3, fig. 2 is a schematic diagram of the camera position according to an embodiment of the present invention, and fig. 3 is a schematic diagram of the size of a captured frame image according to an embodiment of the present invention. The world coordinates of the region-of-interest map after inverse perspective transformation are x, y and z; γ and θ are the yaw and pitch angles of the camera, and the viewing angle of the camera is 2α. The position of the camera relative to the ground plane is given by d, h and l, the coordinates in the initial frame image are u and v, and the size of the initial frame image is R_x × R_y.
The inverse perspective transformation model maps each image point (u, v) onto a ground-plane point (x, y, z) with z = 0, using the camera parameters γ, θ, α, h, d and l (the model equation is reproduced only as an image in the original publication).
after the region of interest map is subjected to the inverse perspective transformation, a region of interest map after the inverse perspective transformation may be generated, please refer to fig. 4, where fig. 4 is a schematic diagram of the region of interest map before the region of interest map is subjected to the inverse perspective transformation and after the transformation according to an embodiment of the present invention.
S102, determining, by using a YCbCr color space model, the color probability that each pixel in the transformed region-of-interest map belongs to a lane line.
After the transformed region-of-interest map is obtained in S101, a YCbCr color space model may be used to determine the color probability that each pixel in the map belongs to a lane line. Under varying illumination the YCbCr space performs better than the RGB space, and, being based on human visual characteristics, it also reduces the storage required for the color images collected by the camera. Therefore, the region-of-interest map after inverse perspective transformation is first converted from the RGB color space to the YCbCr space, where Y is the luminance component, C_b is the blue chrominance component and C_r is the red chrominance component, and the YCbCr color space model is then used to determine the color probability that each pixel in the transformed region-of-interest map belongs to a lane line.
Specifically, the RGB color space can be converted to the YCbCr space by the standard conversion (e.g., ITU-R BT.601):
$$Y = 0.299R + 0.587G + 0.114B,\quad C_b = -0.169R - 0.331G + 0.500B + 128,\quad C_r = 0.500R - 0.419G - 0.081B + 128.$$
Optionally, in this embodiment of the present invention, determining in S102, by using the YCbCr color space model, the color probability that each pixel in the transformed region-of-interest map belongs to a lane line comprises:
determining the color probability that each pixel in the transformed region-of-interest map belongs to a lane line according to
$$P(C_i \mid x) = \frac{P(x \mid C_i)\,P(C_i)}{P(x)}, \qquad i = 1, \dots, N-1.$$
Here P(C_i | x) represents the probability that a pixel belongs to a lane line, N-1 is the number of lane-line color classes, C_i is the i-th color class, the color of each pixel is expressed as x = (Y, C_b, C_r), Y is the luminance component, C_b is the blue chrominance component, C_r is the red chrominance component, P(x) is the color probability of pixel x, P(x | C_i) is the likelihood, and
$$P(C_i) = \frac{\#C_i}{\sum_{k=1}^{N} \#C_k}$$
is the prior probability, where #C_i is the number of samples in color class C_i and k indexes the color classes.
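The sketch below applies this Bayes rule per pixel. The patent does not specify the form of the likelihoods P(x | C_i); here they are assumed, for illustration only, to be Gaussians fitted to labelled samples of each color class, with the priors computed from the sample counts as in the formula above.

```python
# Hedged sketch of the per-pixel Bayes rule P(Ci|x) = P(x|Ci)P(Ci)/P(x) on YCbCr
# pixels. Gaussian class-conditional densities are an assumption of this example.
import numpy as np
from scipy.stats import multivariate_normal

def lane_color_probability(ycbcr, class_samples):
    """ycbcr: HxWx3 float array; class_samples: list of (Ni x 3) sample arrays,
    classes 0..N-2 = lane-line colours, class N-1 = background."""
    h, w, _ = ycbcr.shape
    x = ycbcr.reshape(-1, 3)
    counts = np.array([len(s) for s in class_samples], dtype=float)
    priors = counts / counts.sum()                            # P(Ci) = #Ci / sum_k #Ck
    likes = np.stack([multivariate_normal(s.mean(0), np.cov(s.T) + 1e-3 * np.eye(3)).pdf(x)
                      for s in class_samples], axis=1)        # P(x|Ci), assumed Gaussian
    evidence = (likes * priors).sum(axis=1, keepdims=True)    # P(x)
    post = likes * priors / np.maximum(evidence, 1e-12)       # P(Ci|x)
    lane_prob = post[:, :-1].sum(axis=1)                      # lane-line classes only
    return lane_prob.reshape(h, w)
```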
S103, normalizing the color probability of each pixel belonging to the lane line to obtain the gray level probability of each pixel belonging to the lane line.
After the color probability that each pixel in the transformed region-of-interest map belongs to a lane line has been determined in S102, these color probabilities may be normalized to obtain the gray-level probability that each pixel belongs to the lane line. Optionally, in step S103, the normalization is performed according to the normalization formula (reproduced only as an image in the original publication), and the normalized value $\tilde{P}(C_i \mid x)$ represents the gray-level probability that the pixel belongs to a lane line.
And S104, obtaining a lane line gray probability map according to the gray probability of each pixel belonging to the lane line.
After the normalization in S103, a lane-line gray-level probability map may be built from the gray-level probability that each pixel belongs to the lane line. In this map the lane-line regions have higher intensity and the background regions lower intensity, but it may still contain a large amount of false-alarm information. Median filtering can therefore be used to remove spike noise from the image while preserving its edges and details.
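A sketch of this step is shown below. The exact normalization used in the patent is available only as an image, so a simple min-max scaling to 8-bit gray levels is assumed here; the median-filter kernel size is likewise a placeholder.

```python
# Hedged sketch: build an 8-bit grey probability map (min-max scaling assumed)
# and suppress spike noise with a median filter.
import cv2
import numpy as np

def gray_probability_map(lane_prob, ksize=5):
    p = lane_prob.astype(np.float32)
    p = (p - p.min()) / max(float(p.max() - p.min()), 1e-12)  # normalise to [0, 1]
    gray = (p * 255).astype(np.uint8)                         # grey-level probability map
    return cv2.medianBlur(gray, ksize)                        # remove spike noise
```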
It should be noted that, in the embodiment of the present invention, after the region-of-interest map of the road to be detected has been inverse-perspective transformed in S101, the transformed region-of-interest map may also be processed directly with an RGB color space model to obtain a lane-line color probability map, and the clustering algorithm may then perform region segmentation on that color probability map to obtain the binary segmentation result map.
S105, performing region segmentation on the lane-line gray-level probability map by using a clustering algorithm to obtain a binary segmentation result map.
For example, the clustering algorithm may be a local-information fuzzy C-means clustering algorithm or a K-means clustering algorithm; other clustering algorithms may of course also be used. In the embodiment of the present invention the clustering algorithm is, for example, the local-information fuzzy C-means clustering algorithm, a clustering algorithm that uses the membership degree of each point to determine how strongly the point belongs to each cluster.
The specific algorithm is as follows:
(1) First, the center of the k-th class is computed according to
$$v_k = \frac{\sum_{i=1}^{n} u_{ki}^{m}\,x_i}{\sum_{i=1}^{n} u_{ki}^{m}};$$
(2) the objective function to be minimized is
$$J_m = \sum_{i=1}^{n}\sum_{k=1}^{c}\Big[u_{ki}^{m}\,\lVert x_i - v_k\rVert^{2} + G_{ki}\Big],$$
where G_{ki} is the fuzzy factor
$$G_{ki} = \sum_{\substack{j \in N_i \\ j \neq i}} \frac{1}{d_{ij}+1}\,(1-u_{kj})^{m}\,\lVert x_j - v_k\rVert^{2}$$
and d_{ij} is the spatial Euclidean distance between pixels i and j;
(3) the fuzzy membership of gray value x_j with respect to class k is then updated according to
$$u_{ki} = \frac{1}{\displaystyle\sum_{l=1}^{c}\left(\frac{\lVert x_i - v_k\rVert^{2} + G_{ki}}{\lVert x_i - v_l\rVert^{2} + G_{li}}\right)^{1/(m-1)}},$$
and the above steps are iterated until $\max\lVert V^{(b)} - V^{(b+1)}\rVert < \varepsilon$, where V^{(b)} is the fuzzy partition matrix and b is the iteration count, thereby obtaining the binary segmentation result map of the lane line. Referring to fig. 5, fig. 5 is a schematic diagram of a binary segmentation result map according to an embodiment of the present invention.
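For illustration, the sketch below segments the gray probability map with a plain two-class fuzzy C-means; the local-information fuzzy factor G_ki of the FLICM-style algorithm described above is omitted to keep the example short, so this is a simplification rather than the embodiment's exact algorithm.

```python
# Minimal fuzzy C-means sketch (c = 2: lane vs. background) on the grey map.
# The local-information factor G_ki is deliberately omitted for brevity.
import numpy as np

def fcm_binary_segmentation(gray, c=2, m=2.0, eps=1e-3, max_iter=100):
    x = gray.reshape(-1).astype(np.float64)
    u = np.random.dirichlet(np.ones(c), size=x.size)         # fuzzy memberships
    for _ in range(max_iter):
        um = u ** m
        v = (um * x[:, None]).sum(0) / um.sum(0)             # cluster centres
        d = np.abs(x[:, None] - v[None, :]) + 1e-12           # distances to centres
        u_new = 1.0 / (d ** (2 / (m - 1)) *
                       (1.0 / d ** (2 / (m - 1))).sum(1, keepdims=True))
        if np.max(np.abs(u_new - u)) < eps:                   # max|V(b) - V(b+1)| < eps
            u = u_new
            break
        u = u_new
    lane = int(np.argmax(v))                                  # brighter centre = lane
    return (np.argmax(u, 1) == lane).reshape(gray.shape).astype(np.uint8) * 255
```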
S106, processing the binary segmentation result map to acquire the lane line in the road to be detected.
After the binary segmentation result map is obtained in S105, it can be processed to acquire the lane line in the road to be detected. Thus, when acquiring the lane line in the road to be detected, before the clustering algorithm performs region segmentation on the lane-line gray-level probability map, the YCbCr color space model is first used to determine the color probability that each pixel in the transformed region-of-interest image belongs to the lane line, these color probabilities are normalized to obtain the gray-level probability that each pixel belongs to the lane line, and the lane-line gray-level probability map is built from those gray-level probabilities; only then is the generated gray-level probability map segmented into regions, which improves the accuracy of lane line detection.
The embodiment of the invention thus provides a method for acquiring a lane line: inverse perspective transformation is first performed on the region-of-interest image of the road to be detected to obtain a transformed region-of-interest image; a YCbCr color space model determines the color probability that each pixel in the transformed region-of-interest image belongs to a lane line; the color probabilities are normalized to obtain the gray-level probability that each pixel belongs to the lane line; a lane-line gray-level probability map is obtained from these gray-level probabilities; region segmentation is performed on the probability map with a clustering algorithm to obtain a binary segmentation result map; and the binary segmentation result map is processed to acquire the lane line in the road to be detected, thereby improving the accuracy of lane line detection.
Based on the embodiment shown in fig. 1, after the binary segmentation result map has been obtained, processing it to acquire the lane lines in the road to be detected may be implemented as follows; referring to fig. 6, fig. 6 is a schematic flow diagram of processing the binary segmentation result map according to an embodiment of the present invention.
S601, performing morphological processing on the binary segmentation result graph to obtain a processed binary segmentation result graph.
For example, in the embodiment of the present invention, the binary segmentation result map is morphologically processed with a rectangular structuring element, which removes noise peaks whose span is smaller than the element, so that the lane-line regions in the processed binary segmentation result map are more complete. Referring to fig. 7, fig. 7 is a schematic diagram of the binary segmentation result map after morphological processing according to an embodiment of the present invention.
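A sketch of this morphological clean-up, assuming OpenCV and a rectangular structuring element; the kernel size is a placeholder, not a value from the patent.

```python
# Hedged sketch: morphological opening with a rectangular structuring element
# removes noise blobs narrower than the element while keeping lane-line regions.
import cv2

def clean_segmentation(binary, ksize=(5, 15)):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, ksize)
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)  # erode, then dilate
```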
And S602, detecting an edge curve in the processed binary segmentation result graph by adopting a Sobel algorithm.
Optionally, in this embodiment of the present invention, the step S602 of detecting an edge curve in the processed binary segmentation result graph by using a Sobel algorithm may include:
acquiring the horizontal and vertical derivative values of each pixel in the processed binary segmentation result map by using the Sobel algorithm; determining the gradient value and direction of each pixel from its horizontal and vertical values; determining a pixel as an edge point of the processed binary segmentation result map when its gradient value is larger than a preset threshold and its gradient direction is within a preset range; and detecting an edge curve in the processed binary segmentation result map from the edge points.
Specifically, the Sobel algorithm can accurately locate edges in a low-contrast image. Its horizontal and vertical approximate derivative images are
$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A,$$
where A is the image being processed and * denotes two-dimensional convolution, and the gradient magnitude and direction are
$$G = \sqrt{G_x^{2} + G_y^{2}}, \qquad \theta = \arctan\!\left(\frac{G_y}{G_x}\right).$$
When the gradient of a pixel is larger than a certain threshold and its direction is within a certain range, the pixel is set as an edge point of the lane-line region; after all edge points have been determined, the edge curve in the processed binary segmentation result map can be extracted from them. Referring to fig. 7, fig. 7 is a schematic diagram of the edge curve in the binary segmentation result map after Sobel detection according to an embodiment of the present invention.
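The Sobel edge test above can be sketched as follows; the magnitude threshold and the angular window are illustrative placeholders, since the patent only states that a threshold and a direction range are used.

```python
# Hedged sketch of the Sobel edge test: keep a pixel as an edge point when its
# gradient magnitude exceeds a threshold and its direction lies in a given window.
import cv2
import numpy as np

def sobel_edge_points(binary, mag_thresh=50.0, angle_range=(60.0, 120.0)):
    gx = cv2.Sobel(binary, cv2.CV_64F, 1, 0, ksize=3)        # horizontal derivative
    gy = cv2.Sobel(binary, cv2.CV_64F, 0, 1, ksize=3)        # vertical derivative
    mag = np.hypot(gx, gy)                                   # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0             # direction in [0, 180)
    edges = (mag > mag_thresh) & (ang >= angle_range[0]) & (ang <= angle_range[1])
    return edges.astype(np.uint8) * 255
```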
And S603, performing center line operation processing on the edge curve to obtain a processed center curve.
The center line of a lane-line region can be taken as the average of its two edge lines; referring to fig. 8, fig. 8 is a schematic diagram of the processed center curve according to an embodiment of the present invention.
S604, performing segmentation processing on the center curve by using a Hough transform, and fitting a plurality of straight line segments obtained after the segmentation processing to obtain the lane line in the road to be detected.
After the processed center curve is obtained in S603, a Hough transform can be used to search for a straight line in each of the left and right halves of the bottom area of the image. The Hough transform votes in the polar-coordinate parameter space of the line, and the accumulated local peak points are the candidate lines. A candidate line must satisfy the following conditions:
(1) the included angle between the straight line and the vertical direction is required to be within 25 degrees; (2) the distance between the bottom intersection of the straight line and the bottom center point of the image cannot exceed the width of one lane in the image; (3) the two lane lines must satisfy the parallel condition; (4) when a plurality of straight lines meet the conditions, the straight line closest to the central point of the bottom is selected as the lane line on the left side and the right side respectively.
The dominant straight line in each of the left and right halves of the bottom area of the image can be determined from these four conditions. In the current frame, a fixed-length segment at the bottom of each detected line is selected as the initial segment of the lane line on that side; the lane line is then tracked along the direction of the initial segment. The whole lane line can be approximated by several straight-line segments joined end to end, and the angle of the segment on one continuous side can be used to approximately track a broken lane line on the other side. When the lane line is interrupted or the road is shadowed, the tracking procedure can still continue to track and detect the complete lane line in the current frame. Referring to fig. 9, fig. 9 is a schematic diagram of the two lane lines obtained after tracking according to an embodiment of the present invention. After fig. 9 is obtained, the tracked lane lines can be fitted to a quadratic polynomial by least squares, and the fitted curve equation is obtained by solving for the coefficients of the polynomial. In the fitting process the approximate tracking result is converted into a high-precision fitted curve through the quadratic polynomial model, which improves the reliability of the detection.
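A sketch of the candidate-line selection and quadratic fitting described above, using OpenCV's probabilistic Hough transform. The 25-degree limit follows the text; the Hough parameters, the lane-width tolerance in pixels and the helper names are assumptions for this example, and the parallelism check between the two sides is omitted.

```python
# Hedged sketch: probabilistic Hough transform, candidate filtering by angle and
# bottom-centre distance, then a least-squares quadratic fit per side.
import cv2
import numpy as np

def detect_lane_lines(edge_img, max_angle_deg=25.0, lane_width_px=160):
    h, w = edge_img.shape
    cx = w // 2
    segs = cv2.HoughLinesP(edge_img, 1, np.pi / 180, threshold=30,
                           minLineLength=20, maxLineGap=10)
    left, right = [], []
    for x1, y1, x2, y2 in (segs.reshape(-1, 4) if segs is not None else []):
        ang = abs(np.degrees(np.arctan2(x2 - x1, y2 - y1)))
        ang = min(ang, 180.0 - ang)                          # deviation from vertical
        if ang > max_angle_deg:
            continue
        x_bottom = x1 if y1 > y2 else x2                     # endpoint lower in image
        if abs(x_bottom - cx) > lane_width_px:
            continue
        (left if x_bottom < cx else right).append(((x1, y1), (x2, y2)))

    def fit_side(points):
        if len(points) < 2:
            return None
        ys = np.array([p[1] for seg in points for p in seg], dtype=float)
        xs = np.array([p[0] for seg in points for p in seg], dtype=float)
        return np.polyfit(ys, xs, 2)                         # x = a*y^2 + b*y + c

    return fit_side(left), fit_side(right)
```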
Optionally, in the embodiment of the present invention, after the obtaining of the lane line in the road to be detected, the method may further include:
and displaying the lane lines in the road to be detected on the map.
A Kalman filter is used to track the slope of the lane line in each frame: combining the previous state with the current measurement yields a prediction that is more accurate than a single measurement, and correcting frame by frame allows an accurate global map to be drawn, on which the lane line in the road to be detected is displayed.
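A minimal sketch of this frame-to-frame slope tracking with a scalar constant-velocity Kalman filter; the state layout and the noise variances are illustrative assumptions, not values taken from the patent.

```python
# Hedged sketch: a small Kalman filter that smooths the lane-line slope across frames.
import numpy as np

class SlopeKalman:
    def __init__(self, q=1e-3, r=1e-1):
        self.x = np.zeros(2)                      # state: [slope, slope_rate]
        self.P = np.eye(2)
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])
        self.H = np.array([[1.0, 0.0]])
        self.Q = q * np.eye(2)
        self.R = np.array([[r]])

    def update(self, measured_slope):
        # predict from the previous state
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correct with the slope measured in the current frame
        y = measured_slope - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return float(self.x[0])                   # filtered slope for this frame
```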
To verify the lane line acquisition method provided by the embodiment of the present invention, 300 frames of a video sequence recorded by a vehicle on urban roads were selected for analysis (see the data in Table 1). The road quality in the images is poor, and the sequence covers different environments such as rain and night as well as various curves. Each frame is 320 x 280 pixels; detecting the lane lines in one frame takes 22 milliseconds on average, i.e. about 45.5 frames per second. At a vehicle speed of 100 km/h, i.e. 27.8 m per second, the system therefore updates the lane line every 0.61 m. The results are shown in Table 1.
TABLE 1 detection accuracy under various environments
(Table 1 is reproduced only as an image in the original publication.)
As the test results in Table 1 show, the lane lines are detected correctly in 295 of the 300 frames, a correct-detection rate of 98.3%. Even when the lane lines are partially occluded, broken or shadowed, the system still detects them well, which shows that the lane line acquisition method provided by the embodiment of the present invention is robust to interference.
Fig. 10 is a schematic structural diagram of a lane line acquisition apparatus 100 according to an embodiment of the present invention, and please refer to fig. 10, the lane line acquisition apparatus 100 may include:
the transformation unit 1001 is configured to perform inverse perspective transformation on the region of interest map in the road to be detected, so as to obtain a transformed region of interest map.
A determining unit 1002, configured to determine, by using a YCbCr color space model, the color probability that each pixel in the transformed region-of-interest map belongs to a lane line.
The processing unit 1003 is configured to perform normalization processing on the color probability that each pixel belongs to the lane line, so as to obtain a gray level probability that each pixel belongs to the lane line.
A generating unit 1004, configured to obtain a lane line gray probability map according to the gray probability that each pixel belongs to the lane line.
A segmentation unit 1005, configured to perform region segmentation on the lane-line gray-level probability map by using a clustering algorithm, so as to obtain a binary segmentation result map.
The obtaining unit 1006 is configured to process the binary segmentation result map and obtain a lane line in the road to be detected.
Optionally, the determining unit 1002 is specifically configured to determine the color probability that each pixel in the transformed region-of-interest map belongs to a lane line according to
$$P(C_i \mid x) = \frac{P(x \mid C_i)\,P(C_i)}{P(x)}, \qquad i = 1, \dots, N-1;$$
wherein P(C_i | x) represents the probability that a pixel belongs to a lane line, N-1 is the number of lane-line color classes, C_i is the i-th color class, the color of each pixel is expressed as x = (Y, C_b, C_r), Y is the luminance component, C_b is the blue chrominance component, C_r is the red chrominance component, P(x) is the color probability of pixel x, P(x | C_i) is the likelihood, and
$$P(C_i) = \frac{\#C_i}{\sum_{k=1}^{N} \#C_k}$$
is the prior probability, wherein #C_i is the number of samples in color class C_i and k indexes the color classes.
Optionally, the processing unit 1003 is specifically configured to normalize, according to the normalization formula (reproduced only as an image in the original publication), the color probability that each pixel belongs to a lane line to obtain the gray-level probability $\tilde{P}(C_i \mid x)$ that the pixel belongs to the lane line.
Optionally, the obtaining unit 1006 is specifically configured to perform morphological processing on the binary segmentation result map to obtain a processed binary segmentation result map; and processing the processed binary segmentation result graph, and acquiring a lane line in the road to be detected.
Optionally, the obtaining unit 1006 is specifically configured to detect an edge curve in the processed binary segmentation result map by using a Sobel algorithm; perform center-line processing on the edge curve to obtain a processed center curve; and perform segmentation processing on the center curve by using a Hough transform, and fit a plurality of straight line segments obtained after the segmentation processing to obtain the lane line in the road to be detected.
Optionally, the obtaining unit 1006 is specifically configured to obtain, by using a Sobel algorithm, the horizontal and vertical derivative values of each pixel in the processed binary segmentation result map; determine the gradient value and direction of each pixel from its horizontal and vertical values; determine a pixel as an edge point of the processed binary segmentation result map when its gradient value is larger than a preset threshold and its gradient direction is within a preset range; and detect an edge curve in the processed binary segmentation result map from the edge points.
Optionally, the lane line acquiring apparatus 100 may further include a display unit 1007, please refer to fig. 11, where fig. 11 is a schematic structural diagram of another lane line acquiring apparatus 100 according to an embodiment of the present invention.
A display unit 1007 for displaying a lane line in a road to be detected on a map.
The above lane line acquisition apparatus 100 can correspondingly implement the technical solution of the lane line acquisition method according to any embodiment, and the implementation principle and the technical effect are similar, which are not described herein again.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for acquiring a lane line is characterized by comprising the following steps:
performing inverse perspective transformation on the region-of-interest image of the road to be detected to obtain a transformed region-of-interest image;
determining, by using a YCbCr color space model, the color probability that each pixel in the transformed region-of-interest image belongs to a lane line;
normalizing the color probability of each pixel belonging to the lane line to obtain the gray level probability of each pixel belonging to the lane line;
obtaining a lane line gray probability map according to the gray probability of each pixel belonging to the lane line;
performing region segmentation on the lane-line gray-level probability map by using a clustering algorithm to obtain a binary segmentation result map;
and processing the binary segmentation result graph, and acquiring a lane line in the road to be detected.
2. The method according to claim 1, wherein determining, by using the YCbCr color space model, the color probability that each pixel in the transformed region-of-interest map belongs to the lane line comprises:
determining the color probability that each pixel in the transformed region-of-interest map belongs to a lane line according to
$$P(C_i \mid x) = \frac{P(x \mid C_i)\,P(C_i)}{P(x)}, \qquad i = 1, \dots, N-1;$$
wherein P(C_i | x) represents the probability that a pixel belongs to a lane line, N-1 is the number of lane-line color classes, C_i is the i-th color class, the color of each pixel is expressed as x = (Y, C_b, C_r), Y is the luminance component, C_b is the blue chrominance component, C_r is the red chrominance component, P(x) is the color probability of pixel x, P(x | C_i) is the likelihood, and
$$P(C_i) = \frac{\#C_i}{\sum_{k=1}^{N} \#C_k}$$
is the prior probability, wherein #C_i is the number of samples in color class C_i and k indexes the color classes.
3. The method according to claim 2, wherein the normalizing the color probability that each pixel belongs to the lane line to obtain the gray-level probability that each pixel belongs to the lane line comprises:
normalizing, according to the normalization formula (reproduced only as an image in the original publication), the color probability that each pixel belongs to a lane line to obtain the gray-level probability $\tilde{P}(C_i \mid x)$ that the pixel belongs to the lane line.
4. The method according to any one of claims 1 to 3, wherein the processing the binary segmentation result map and acquiring the lane lines in the road to be detected comprises:
performing morphological processing on the binary segmentation result graph to obtain a processed binary segmentation result graph;
and processing the processed binary segmentation result graph, and acquiring a lane line in the road to be detected.
5. The method according to claim 4, wherein the processing the processed binary segmentation result map and acquiring the lane lines in the road to be detected comprises:
detecting an edge curve in the processed binary segmentation result graph by adopting a Sobel algorithm;
performing center-line processing on the edge curves to obtain a processed image containing a center curve, the center curve being the center line of the two edge curves;
processing the processed image by using a Hough transform according to the center curve to obtain initial segments of the lane lines on both sides;
and tracking the lane lines along the directions of those initial segments, and fitting the lane lines on both sides obtained after the tracking to obtain the lane line in the road to be detected.
6. The method according to claim 5, wherein the detecting the edge curve in the processed binary segmentation result map by using the Sobel algorithm comprises:
acquiring a horizontal value and a longitudinal value of each pixel in the processed binary segmentation result image by adopting the Sobel algorithm;
determining the gradient value and the direction of each pixel according to the transverse value and the longitudinal value of each pixel;
determining a pixel as an edge point of the processed binary segmentation result image when its gradient value is larger than a preset threshold and its gradient direction is within a preset range;
and detecting an edge curve in the processed binary segmentation result graph according to each edge point.
7. The method according to any one of claims 1 to 3, characterized in that after acquiring the lane line in the road to be detected, it comprises:
and displaying the lane lines in the road to be detected on a map.
8. An acquisition device of a lane line, comprising:
the transformation unit is used for performing inverse perspective transformation on the region-of-interest image of the road to be detected to obtain a transformed region-of-interest image;
the determination unit is used for determining, by using a YCbCr color space model, the color probability that each pixel in the transformed region-of-interest image belongs to a lane line;
the processing unit is used for carrying out normalization processing on the color probability of each pixel belonging to the lane line to obtain the gray level probability of each pixel belonging to the lane line;
the generating unit is used for obtaining a lane line gray probability map according to the gray probability that each pixel belongs to the lane line;
the segmentation unit is used for performing region segmentation on the lane-line gray-level probability map by using a clustering algorithm to obtain a binary segmentation result map;
and the acquisition unit is used for processing the binary segmentation result graph and acquiring the lane line in the road to be detected.
9. The apparatus of claim 8,
the determination unit is specifically used for determining the color probability that each pixel in the transformed region-of-interest map belongs to a lane line according to
$$P(C_i \mid x) = \frac{P(x \mid C_i)\,P(C_i)}{P(x)}, \qquad i = 1, \dots, N-1;$$
wherein P(C_i | x) represents the probability that a pixel belongs to a lane line, N-1 is the number of lane-line color classes, C_i is the i-th color class, the color of each pixel is expressed as x = (Y, C_b, C_r), Y is the luminance component, C_b is the blue chrominance component, C_r is the red chrominance component, P(x) is the color probability of pixel x, P(x | C_i) is the likelihood, and
$$P(C_i) = \frac{\#C_i}{\sum_{k=1}^{N} \#C_k}$$
is the prior probability, wherein #C_i is the number of samples in color class C_i and k indexes the color classes.
10. The apparatus of claim 9,
the processing unit is specifically used for normalizing, according to the normalization formula (reproduced only as an image in the original publication), the color probability that each pixel belongs to a lane line to obtain the gray-level probability $\tilde{P}(C_i \mid x)$ that the pixel belongs to the lane line.
CN201711332712.3A 2017-12-13 2017-12-13 Method and device for acquiring lane line Active CN108052904B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711332712.3A CN108052904B (en) 2017-12-13 2017-12-13 Method and device for acquiring lane line

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711332712.3A CN108052904B (en) 2017-12-13 2017-12-13 Method and device for acquiring lane line

Publications (2)

Publication Number Publication Date
CN108052904A CN108052904A (en) 2018-05-18
CN108052904B true CN108052904B (en) 2021-11-30

Family

ID=62132692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711332712.3A Active CN108052904B (en) 2017-12-13 2017-12-13 Method and device for acquiring lane line

Country Status (1)

Country Link
CN (1) CN108052904B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111209777A (en) * 2018-11-21 2020-05-29 北京市商汤科技开发有限公司 Lane line detection method and device, electronic device and readable storage medium
CN111209770B (en) * 2018-11-21 2024-04-23 北京三星通信技术研究有限公司 Lane line identification method and device
CN111460866B (en) * 2019-01-22 2023-12-22 北京市商汤科技开发有限公司 Lane line detection and driving control method and device and electronic equipment
CN112131914B (en) * 2019-06-25 2022-10-21 北京市商汤科技开发有限公司 Lane line attribute detection method and device, electronic equipment and intelligent equipment
CN110595499A (en) * 2019-09-26 2019-12-20 北京四维图新科技股份有限公司 Lane change reminding method, device and system
CN112101163A (en) * 2020-09-04 2020-12-18 淮阴工学院 Lane line detection method
CN112418187A (en) * 2020-12-15 2021-02-26 潍柴动力股份有限公司 Lane line recognition method and apparatus, storage medium, and electronic device
CN112633151B (en) * 2020-12-22 2024-04-12 浙江大华技术股份有限公司 Method, device, equipment and medium for determining zebra stripes in monitoring images

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103488975A (en) * 2013-09-17 2014-01-01 北京联合大学 Zebra crossing real-time detection method based in intelligent driving
CN104331873A (en) * 2013-07-22 2015-02-04 浙江大学 Method for detecting road from single image
CN104598892A (en) * 2015-01-30 2015-05-06 广东威创视讯科技股份有限公司 Dangerous driving behavior alarming method and system
CN105261020A (en) * 2015-10-16 2016-01-20 桂林电子科技大学 Method for detecting fast lane line
CN105631880A (en) * 2015-12-31 2016-06-01 百度在线网络技术(北京)有限公司 Lane line segmentation method and apparatus
CN105678285A (en) * 2016-02-18 2016-06-15 北京大学深圳研究生院 Adaptive road aerial view transformation method and road lane detection method
CN105740796A (en) * 2016-01-27 2016-07-06 大连楼兰科技股份有限公司 Grey level histogram based post-perspective transformation lane line image binarization method
CN106446864A (en) * 2016-10-12 2017-02-22 成都快眼科技有限公司 Method for detecting feasible road
CN106558051A (en) * 2015-09-25 2017-04-05 浙江大学 A kind of improved method for detecting road from single image

Also Published As

Publication number Publication date
CN108052904A (en) 2018-05-18

Similar Documents

Publication Publication Date Title
CN108052904B (en) Method and device for acquiring lane line
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
Kong et al. General road detection from a single image
US10133941B2 (en) Method, apparatus and device for detecting lane boundary
CN109657632B (en) Lane line detection and identification method
Kong et al. Vanishing point detection for road detection
Hadi et al. Vehicle detection and tracking techniques: a concise review
Wu et al. Lane-mark extraction for automobiles under complex conditions
EP3007099B1 (en) Image recognition system for a vehicle and corresponding method
US9053372B2 (en) Road marking detection and recognition
CN110647850A (en) Automatic lane deviation measuring method based on inverse perspective principle
WO2015010451A1 (en) Method for road detection from one image
CN105261017A (en) Method for extracting regions of interest of pedestrian by using image segmentation method on the basis of road restriction
CN110197173B (en) Road edge detection method based on binocular vision
CN115049700A (en) Target detection method and device
CN106558051A (en) A kind of improved method for detecting road from single image
CN109858438B (en) Lane line detection method based on model fitting
CN106887004A (en) A kind of method for detecting lane lines based on Block- matching
CN103093198A (en) Crowd density monitoring method and device
CN102393902A (en) Vehicle color detection method based on H_S two-dimensional histogram and regional color matching
CN109886168B (en) Ground traffic sign identification method based on hierarchy
Li et al. A lane marking detection and tracking algorithm based on sub-regions
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision
CN112906583A (en) Lane line detection method and device
CN101369312B (en) Method and equipment for detecting intersection in image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant