CN108052880B - Virtual and real lane line detection method for traffic monitoring scene - Google Patents


Info

Publication number: CN108052880B (application CN201711229332.7A)
Authority: CN (China)
Prior art keywords: lane, line, virtual, lines, lane line
Legal status: Active (an assumption, not a legal conclusion)
Other versions: CN108052880A (Chinese-language publication)
Inventors: 阮雅端, 陈金艳, 陈林凯, 郑文礼, 陈钊正, 陈启美
Current and original assignee: Nanjing University

Application filed by Nanjing University
Priority to CN201711229332.7A
Publication of CN108052880A
Application granted
Publication of CN108052880B

Classifications

    • G06V 20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06F 18/23213 — Non-hierarchical clustering using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/24 — Classification techniques
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 20/588 — Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

A virtual and real lane line detection method for traffic monitoring scenes generates a lane line image from the monitoring video of a multi-lane road and sets a lane region of interest (ROI). It then detects lane lines within the lane ROI, clusters them, classifies the lane lines from the clustering result, divides the classified line segments into left and right edges using the geometric characteristics of lane lines, and extracts the end-point information of each virtual (dashed) lane line. Finally, the virtual and real lane lines are fitted separately from the left and right edges, yielding the final lane line detection result. Aimed primarily at multi-lane monitoring scenes, the method effectively addresses the problems that traditional lane line detection methods face in such scenes: lane line occlusion, division of the left and right edges of lane lines, and accurate information extraction for virtual lane lines. Accurate lane information of this kind is of important and lasting significance for applications such as automatic driving and automatic calibration.

Description

Virtual and real lane line detection method for traffic monitoring scene
Technical Field
The invention belongs to the technical field of computer vision detection, relates to the analysis of multi-lane traffic monitoring video, and provides a method for detecting virtual and real lane lines in a traffic monitoring scene, used to detect and extract complete lane line information in multi-lane scenes.
Background
Lane line detection is one of the hot spots of current research and can serve fields such as automatic driving, automatic calibration and video-based traffic monitoring. Several challenges nevertheless remain, such as lane line occlusion in multi-lane scenes, division of the left and right edges of a lane line, and extraction of the end points of virtual (dashed) lane lines. The application of lane line information in intelligent traffic monitoring places ever higher requirements on accurate lane line detection, which traditional algorithms struggle to meet. At present, most existing lane line detection methods target intelligent driving scenes and operate on road images taken from in front of a vehicle; detection methods suited to multi-lane road monitoring scenes are rare, and the lane line information obtained for intelligent driving cannot fully satisfy the needs of traffic monitoring.
The virtual and real lane line detection method for traffic monitoring scenes can effectively solve the problems of detecting multiple lane lines, dividing the left and right edges of lane lines, acquiring the end points of virtual lane lines, and fitting virtual and real lane lines. The method first extracts a lane line image free of moving vehicles with a GMM (Gaussian mixture model) to ensure the clarity of the lane lines; it then detects straight line segments with the Canny algorithm and the PPHT (progressive probabilistic Hough transform), applies K-Means clustering to all the segments, divides left and right edges using the geometric characteristics of lane lines, and extracts the end-point information of virtual lane lines; finally it fits the virtual and real lane lines separately to accurately extract the specific lane line information.
Disclosure of Invention
The invention aims to solve the problems that: in the prior art, the detection of lane lines is mostly from the perspective of vehicles, a lane line detection method in a road monitoring scene is not available, and the problems of lane line shielding, lane line left and right edge division and the accurate information extraction of virtual lane lines in a multi-lane scene exist in the road monitoring scene. The method can effectively solve the problems and realize the accurate extraction of the specific information of the lane line in the multi-lane traffic monitoring scene.
The technical scheme of the invention is as follows: a method for detecting virtual and real lane lines in a traffic monitoring scene based on a multi-lane road monitoring video comprises the following steps:
1) acquiring continuous frames of a multi-lane road monitoring video, and generating a lane line image without a moving vehicle by adopting a mixed Gaussian model GMM;
2) setting a lane interesting region ROI for the obtained lane line image, converting the image into a binary image, and filtering and morphologically processing the converted image;
3) carrying out edge detection on the image obtained after the processing of the step 2) by adopting a Canny algorithm;
4) after obtaining the edge detection image of step 3), detecting straight line segments in the image with the progressive probabilistic Hough transform (PPHT) and obtaining the end-point and slope information of all segments;
5) performing K-Means clustering processing on the straight line segments obtained in the step 4), obtaining the centers of the classes to which the straight line segments belong, and classifying the straight line segments based on the clustering result, wherein one class corresponds to one lane line;
6) according to the classification result, judging and dividing left and right edges of line segments belonging to each lane line respectively based on the geometric characteristics of the lane lines, and extracting the endpoint information of each virtual lane line for the virtual lane lines;
7) based on the left and right edge division results of the lane lines obtained in step 6), fitting the virtual and real lane lines respectively, the real lane lines being fitted by the least square method and the virtual lane lines being fitted accurately from the obtained end-point information, thereby producing the final lane line detection result.
Step 1) generating a lane line image without a moving vehicle based on pixel processing mixed Gaussian background modeling, which comprises the following specific steps:
1.1) first acquiring continuous video frames and, for each frame, determining the probability distribution function obeyed by the sampled pixel set x = {x_1, x_2, ..., x_n} of the frame at the current time t, with parameters initialized in advance: weight w_k = w_0 = 0, mean μ_k = μ_0 = 0 and covariance σ_k = σ_0 = 0, where k denotes the index of the probability distribution mode;
1.2) comparing x with the pixel mean μ_k; if

‖x − μ_k‖ < τσ_k,  k ∈ [1...K]

then the sample set of the current frame matches mode k, where K is the number of probability distribution modes and τ is a set threshold; on a match, the following parameters are updated:
w_k(t) = (1 − α)·w_k(t−1) + α·M_k(t)

μ_k(t) = (1 − β)·μ_k(t−1) + β·x

σ_k²(t) = (1 − β)·σ_k²(t−1) + β·(x − μ_k(t))ᵀ(x − μ_k(t))

where t denotes the time of the current frame and M_k(t) indicates whether the current frame is background: when mode k matches, i.e. the current frame belongs to the background, M_k(t) = 1, otherwise M_k(t) = 0; the learning rate α is a constant and β = α·η(x; μ_k, σ_k), where η(x; μ_k, σ_k) is a normal density function; if no mode matches, no update is performed, the parameters w_k, μ_k, σ_k are reinitialized, and the next frame is processed;
1.3) repeating the step 1.1) and the step 1.2) until all video frames are processed;
1.4) sorting the distribution modes in descending order of w_k/σ_k, i.e. modes with large weight and small covariance come first;
1.5) finally selecting the first B modes as the background:

B = argmin_b ( Σ_{k=1}^{b} w_k > λ )

wherein K_B denotes the first B modes selected from the K mode distributions and λ is a set threshold.
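The mixture update of steps 1.1)–1.5) can be sketched in code. The following is a minimal single-pixel, single-channel illustration; the mode count, the values of α, τ and λ, the simplification β = α, and the re-initialisation constants are assumptions for illustration rather than the patent's parameters, and a deployed system would maintain such a mixture for every pixel of the frame (e.g. via OpenCV's `BackgroundSubtractorMOG2`).

```python
import math

# Assumed illustrative parameters (not the patent's values).
K = 3            # Gaussian modes per pixel
ALPHA = 0.05     # learning rate alpha
TAU = 2.5        # match test: ||x - mu_k|| < TAU * sigma_k
LAM = 0.7        # cumulative-weight threshold lambda for background selection

def update_pixel(modes, x):
    """One GMM update for one pixel. modes: list of [w, mu, sigma]."""
    matched = None
    for m in modes:
        if abs(x - m[1]) < TAU * m[2]:           # match test of step 1.2)
            matched = m
            break
    for m in modes:
        m_k = 1.0 if m is matched else 0.0       # M_k(t)
        m[0] = (1 - ALPHA) * m[0] + ALPHA * m_k  # w_k(t) update
    if matched is not None:
        beta = ALPHA                             # simplification of beta = alpha * eta(x; mu, sigma)
        matched[1] = (1 - beta) * matched[1] + beta * x
        matched[2] = math.sqrt((1 - beta) * matched[2] ** 2
                               + beta * (x - matched[1]) ** 2)
    else:
        # no match: re-initialise the weakest mode
        modes.sort(key=lambda m: m[0] / m[2], reverse=True)
        modes[-1][:] = [0.05, float(x), 10.0]
    total = sum(m[0] for m in modes)             # renormalise the weights
    for m in modes:
        m[0] /= total
    # steps 1.4)-1.5): sort by w/sigma, keep the first B modes whose
    # cumulative weight exceeds LAM
    modes.sort(key=lambda m: m[0] / m[2], reverse=True)
    acc, B = 0.0, 0
    for m in modes:
        acc += m[0]
        B += 1
        if acc > LAM:
            break
    return modes[:B]

modes = [[1 / 3, 100.0, 10.0], [1 / 3, 150.0, 10.0], [1 / 3, 200.0, 10.0]]
for x in [101, 99, 100, 102, 100, 101] * 20:     # a stable road-surface pixel
    background = update_pixel(modes, x)
# after many frames the mode near intensity 100 dominates the background
```

With per-pixel state like this over the whole frame, pixels whose current value matches a background mode form the vehicle-free lane line image of step 1).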
Step 4) detects straight line segments in the image with the PPHT (progressive probabilistic Hough transform). The PPHT represents straight lines of the rectangular image coordinate system in a polar parameter space: one point in the rectangular coordinate system corresponds to one curve in the parameter space, and if several such curves intersect at one point, and the number of intersecting curves reaches the PPHT's minimum vote count, the corresponding points in the rectangular coordinate system lie on the same straight line segment; the segment is thereby detected and its start-point and end-point information obtained.
The step 5) is specifically as follows:
giving the cluster-center number K of the K-Means clustering algorithm, firstly converting the end-point description of each detected line segment into slope-intercept form, i.e. y = slope·x + intercept, and then performing K-Means clustering on the (slope, intercept) pairs, wherein slope is the segment's slope and intercept its intercept, defined as:
slope = (end_y − start_y) / (end_x − start_x)

intercept = end_y − slope·end_x

wherein (start_x, start_y) and (end_x, end_y) are respectively the start point and end point of the straight line segment.
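The clustering of step 5) can be sketched as follows; the sample segments, the hand-rolled K-Means and k = 2 are illustrative assumptions (a vertical segment, whose slope is undefined, would need separate handling).

```python
import numpy as np

def to_slope_intercept(seg):
    """(start_x, start_y, end_x, end_y) -> (slope, intercept)."""
    sx, sy, ex, ey = seg
    slope = (ey - sy) / (ex - sx)     # slope = (end_y - start_y) / (end_x - start_x)
    intercept = ey - slope * ex       # intercept = end_y - slope * end_x
    return slope, intercept

def kmeans(points, k, iters=50, seed=0):
    """Minimal K-Means: assign to nearest centre, recompute centroids, iterate."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            if (labels == j).any() else centers[j]
                            for j in range(k)])
    return labels, centers

# Two dashed lane lines, two detected dashes each (invented coordinates).
segments = [(10, 100, 20, 80), (30, 58, 40, 38),      # lane line A
            (110, 100, 120, 80), (130, 58, 140, 38)]  # lane line B
feats = np.array([to_slope_intercept(s) for s in segments])
labels, centers = kmeans(feats, k=2)
# dashes of the same lane line fall into the same cluster
```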
The step 6) is specifically as follows:
calculating the distances between the end points of any two different line segments; segments l_1 and l_2 are edge lines of the same lane line if:

x_dis = (l_2.x − l_1.x) ∈ (x_dis_lowthres, x_dis_highthres)

y_dis = (l_2.y − l_1.y) ∈ (y_dis_lowthres, y_dis_highthres)

wherein (l_1.x, l_1.y) and (l_2.x, l_2.y) are the end points of segments l_1 and l_2 respectively, and x_dis_lowthres, x_dis_highthres, y_dis_lowthres, y_dis_highthres are preset thresholds on the spacing between the left and right edges of a lane line; if l_1 and l_2 are judged to belong to the same lane line, their end-point coordinates determine which is the left edge and which the right:

edge_left = l_1 if l_1.x < l_2.x, otherwise l_2

edge_right = l_2 if l_1.x < l_2.x, otherwise l_1

wherein edge_left and edge_right are respectively the left and right edges of the lane line; once the edges are divided, the vertex information belonging to each virtual lane line can be extracted.
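A minimal sketch of the pairing and edge-division rules above; the threshold windows and sample points are invented for illustration.

```python
# Preset threshold windows (x_dis_lowthres etc.); values are invented.
X_LOW, X_HIGH = 2, 15
Y_LOW, Y_HIGH = -5, 5

def same_lane_line(l1, l2):
    """l1, l2: (x, y) end points of two candidate edge segments."""
    xdis = l2[0] - l1[0]                 # x_dis = l2.x - l1.x
    ydis = l2[1] - l1[1]                 # y_dis = l2.y - l1.y
    return X_LOW < xdis < X_HIGH and Y_LOW < ydis < Y_HIGH

def split_left_right(l1, l2):
    """The segment with the smaller x coordinate becomes the left edge."""
    return (l1, l2) if l1[0] < l2[0] else (l2, l1)

a = (100, 50)       # end point of one edge segment
b = (108, 50)       # end point of a segment 8 px to its right
if same_lane_line(a, b):
    edge_left, edge_right = split_left_right(a, b)
```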
Step 7) fits the virtual and real lane lines, the real lane lines by the least square method and the virtual lane lines accurately from the vertex information of the virtual lane lines acquired in step 6), the vertex information being the start and end points of the left and right edges.
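The two fits of step 7) can be sketched with `numpy.polyfit`, which performs a least-squares polynomial fit; the point data below are synthetic.

```python
import numpy as np

# Real (solid) lane line edge: noisy points along y = 2x + 5.
xs = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
ys = 2 * xs + 5 + np.array([0.1, -0.2, 0.0, 0.2, -0.1])
slope, intercept = np.polyfit(xs, ys, deg=1)      # least-squares line fit

# Virtual (dashed) lane line: only the extracted dash end points are known.
dash_x = np.array([0.0, 5.0, 20.0, 25.0])
dash_y = 3 * dash_x + 1
v_slope, v_intercept = np.polyfit(dash_x, dash_y, deg=1)
# the fitted line passes through the dash end points exactly
```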
The invention mainly provides a virtual and real lane line detection method for a traffic monitoring scene aiming at a multi-lane monitoring scene, which can effectively solve the problems that the traditional lane line detection method cannot cope with the shielding of lane lines, cannot divide the left and right edges of the lane lines and cannot extract accurate information of the virtual lane lines in the multi-lane scene. The method has important and profound significance for applications such as automatic driving, automatic calibration and the like based on accurate lane information.
Drawings
Fig. 1 is a flow chart of a method for detecting virtual and real lane lines in a traffic monitoring scene by using the method of the present invention.
Fig. 2 is a schematic diagram of the implementation of step 1) of the present invention, (a) shows an original video frame in a national road G2 scene, and (b) is a lane background map generated based on GMM.
FIG. 3 is a schematic diagram of the implementation of step 5) of the present invention, showing the K-Means-based lane line clustering in the national road G2 scene.
Fig. 4 is a schematic diagram of the implementation of step 6) of the present invention, (a) shows a schematic diagram of lane line information extraction, and (b) is a diagram of dividing left and right edges of a lane line in a national G2 scene.
Fig. 5 shows the final virtual-real lane line fitting results of the national road G2 and G205 scenes under the method of the present invention, where (a) is the national road G2 scene and (b) is the national road G205 scene.
Fig. 6 shows the final virtual-real lane line fitting results of the elevated scenes of the national G328 and the sky-oriented street in Nanjing under the method of the present invention, where (a) is the national G328 scene and (b) is the elevated scene.
Detailed Description
Firstly, a lane line image without moving vehicles is generated with a Gaussian mixture model (GMM), and a lane region of interest (ROI) is set on the image; then straight line segments, i.e. lane lines, are detected within the lane ROI with the Canny algorithm and the progressive probabilistic Hough transform (PPHT), the segments are clustered with the K-Means algorithm, the lane lines are classified from the clustering result, the line segments belonging to each lane line are divided into left and right edges using the geometric characteristics of lane lines, and the end-point information of each virtual lane line is extracted; finally, the virtual and real lane lines are fitted separately from the left and right edges, yielding the final lane line detection result.
The invention is described in detail below with reference to the figures and specific examples.
Referring to fig. 1, the invention provides a method for detecting virtual and real lane lines of a traffic monitoring scene in a multi-lane monitoring scene, which comprises the following steps:
firstly, acquiring continuous frames of a multi-lane road monitoring video, and generating a lane background image without vehicles based on GMM.
Firstly, acquire continuous frames of the video and, for each frame, determine the probability distribution function obeyed by the sampled pixel set x = {x_1, x_2, ..., x_n} of the frame at the current time t, with parameters initialized in advance: weight w_k = w_0 = 0, mean μ_k = μ_0 = 0 and covariance σ_k = σ_0 = 0, where k denotes the index of the probability distribution mode;
Next, compare x with the pixel mean μ_k; if

‖x − μ_k‖ < τσ_k,  k ∈ [1...K]

then the sample set of the current frame matches mode k, where K is the number of probability distribution modes and τ is a set threshold; on a match, the following parameters are updated:
w_k(t) = (1 − α)·w_k(t−1) + α·M_k(t)

μ_k(t) = (1 − β)·μ_k(t−1) + β·x

σ_k²(t) = (1 − β)·σ_k²(t−1) + β·(x − μ_k(t))ᵀ(x − μ_k(t))

where t denotes the time of the current frame and M_k(t) indicates whether the current frame is background: when mode k matches, i.e. the current frame belongs to the background, M_k(t) = 1, otherwise M_k(t) = 0; the learning rate α is a constant and β = α·η(x; μ_k, σ_k), where η(x; μ_k, σ_k) is a normal density function; if no mode matches, no update is performed and the parameters w_k, μ_k, σ_k are reinitialized.
Repeating the first two steps until all the video frames are processed;
Then sort the modes in descending order of w_k/σ_k, the modes with large weight and small covariance coming first;
Finally, select the first B modes as the background, B being computed as

B = argmin_b ( Σ_{k=1}^{b} w_k > λ )

wherein K_B denotes the first B modes selected from the K mode distributions and λ is a set threshold.
the original video frame and the lane background image generated based on the GMM are respectively as shown in fig. 2(a) and fig. 2 (b);
step (2), setting a lane ROI area for a lane background image, carrying out binarization, and then carrying out filtering and morphological processing on the converted image;
step (3), performing edge detection on the image obtained after the processing in the step (2) by adopting a Canny algorithm;
and (4) detecting straight lines in the image based on PPHT, wherein the PPHT represents the straight lines in the rectangular coordinate system by using a polar coordinate system. Generally, one point in the rectangular coordinate system corresponds to one straight line in the polar coordinate system, and if a plurality of straight lines in the polar coordinate system intersect at one point and the number of the intersecting straight lines reaches the minimum vote number of PPHT, it indicates that the points in the rectangular coordinate system corresponding to the straight lines are located in the same straight line segment. Thereby detecting the straight line segment and acquiring the endpoint information of the straight line segment.
Step (5): perform K-Means clustering on the line segments obtained in step (4) and classify them from the clustering result, the aim being to group segments belonging to the same lane line into one class. For a given cluster-center number K, the K-Means algorithm assigns each data point to a cluster by the nearest-neighbour principle, recomputes each cluster's centroid as the mean of its members, and iterates until a stopping condition is met. First convert the end-point description of each segment into slope-intercept form, i.e. y = slope·x + intercept, then perform K-Means clustering on the (slope, intercept) pairs, where slope is the segment's slope and intercept its intercept, defined as:
slope = (end_y − start_y) / (end_x − start_x)

intercept = end_y − slope·end_x

wherein (start_x, start_y) and (end_x, end_y) are respectively the start point and end point of the straight line segment.
Step (6): the line segments detected by the PPHT are the edges of the lane lines; after the classification of step (5), each segment is judged to be the left or right edge of its lane line based on the geometric characteristics of lane lines, and the end-point information of each lane line is extracted. A real-world lane line consists of symmetrical left and right edges, and parameters such as the spacing between those edges and their lengths are fixed; these characteristics persist after mapping into image space, so they can be used to divide the left and right edges. Calculate the distances between the end points of any two different line segments; segments l_1 and l_2 belong to the same lane line if:

x_dis = (l_2.x − l_1.x) ∈ (x_dis_lowthres, x_dis_highthres)

y_dis = (l_2.y − l_1.y) ∈ (y_dis_lowthres, y_dis_highthres)

wherein (l_1.x, l_1.y) and (l_2.x, l_2.y) are the end points of segments l_1 and l_2 respectively, and x_dis_lowthres, x_dis_highthres, y_dis_lowthres, y_dis_highthres are preset thresholds on the spacing between the left and right edges of a lane line. This test chiefly serves segmented lane lines such as virtual (dashed) ones: it decides whether two segments belong to the same small dash. If l_1 and l_2 are judged to belong to the same lane line, their end-point coordinates determine which is the left edge and which the right:
edge_left = l_1 if l_1.x < l_2.x, otherwise l_2

edge_right = l_2 if l_1.x < l_2.x, otherwise l_1

wherein edge_left and edge_right are respectively the left and right edges of the lane line. After edge division, the 4 vertices of the segments belonging to each virtual lane line can be extracted; the lane line information extraction schematic and the edge division results are shown in FIG. 4;
and (7) fitting the virtual lane line and the real lane line respectively based on the left and right edge dividing results of the lane line obtained by the processing in the step (6). The actual lane lines are fitted by the least square method, the virtual lane lines are accurately fitted according to the acquired vertex information, and the final lane line fitting result is shown in fig. 5 and 6.
Lane line detection mainly serves fields such as automatic driving, automatic calibration and traffic monitoring. In traffic monitoring in particular, vehicle information such as length and width, and driving states such as speed or crossing of a real (solid) lane line, must be detected; this places higher and more precise demands on lane line detection, namely obtaining the left and right edges and the four end points of each lane line and computing its length as a reference from which the vehicle information is derived. Conventional methods, however, detect lane lines only roughly, treating each lane line as a single line segment merely to obtain its course, rather than as the combination of multiple segments it actually is.
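As a hedged illustration of the reference use described above: if the real-world length of a dash is known (6 m is assumed here; actual standards vary by country and road class), the detected dash end points yield a pixels-to-metres scale from which vehicle dimensions can be estimated. All numbers are invented.

```python
import math

dash_start = (420.0, 310.0)   # detected dash end points, in pixels (invented)
dash_end = (432.0, 250.0)
REAL_DASH_LEN_M = 6.0         # assumed real-world dash length in metres

px_len = math.dist(dash_start, dash_end)      # dash length in pixels
metres_per_px = REAL_DASH_LEN_M / px_len      # local scale near the dash

vehicle_px = 90.0                             # measured vehicle length in pixels
vehicle_m = vehicle_px * metres_per_px        # estimated vehicle length in metres
```

In a real monitoring scene the scale varies with perspective, so the dash used as the reference must lie near the measured vehicle (or a full camera calibration must be used).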
The virtual and real lane line detection method for traffic monitoring scenes can effectively realize multi-lane-line detection, division of left and right lane-line edges, end-point acquisition, and virtual/real lane line fitting. The method first extracts a lane line image free of moving vehicles with the GMM, solving the occlusion of lane lines in road monitoring video; it then detects straight line segments with the Canny algorithm and the PPHT, applies K-Means clustering to all segments, divides left and right edges using the geometric characteristics of lane lines, and extracts the end-point information of virtual lane lines; finally it fits the virtual and real lane lines separately, extracting the specific lane line information more accurately.

Claims (5)

1. A method for detecting virtual and real lane lines in a traffic monitoring scene is characterized in that the lane lines are detected based on a road monitoring video of a multi-lane road, and comprises the following steps:
1) acquiring continuous frames of a multi-lane road monitoring video, and generating a lane line image without a moving vehicle by adopting a mixed Gaussian model GMM;
2) setting a lane interesting region ROI for the obtained lane line image, converting the image into a binary image, and filtering and morphologically processing the converted image;
3) carrying out edge detection on the image obtained after the processing of the step 2) by adopting a Canny algorithm;
4) after obtaining the edge detection image of step 3), detecting straight line segments in the image with the progressive probabilistic Hough transform (PPHT) and obtaining the end-point and slope information of all segments;
5) performing K-Means clustering processing on the straight line segments obtained in the step 4), obtaining the centers of the classes to which the straight line segments belong, and classifying the straight line segments based on the clustering result, wherein one class corresponds to one lane line;
6) according to the classification result, judging and dividing left and right edges of line segments belonging to each lane line respectively based on the geometric characteristics of the lane lines, and extracting the endpoint information of each virtual lane line for the virtual lane lines; the method specifically comprises the following steps:
calculating the distances between the end points of any two different line segments; segments l_1 and l_2 are edge lines of the same lane line if:

x_dis = (l_2.x − l_1.x) ∈ (x_dis_lowthres, x_dis_highthres)

y_dis = (l_2.y − l_1.y) ∈ (y_dis_lowthres, y_dis_highthres)

wherein (l_1.x, l_1.y) and (l_2.x, l_2.y) are the end points of segments l_1 and l_2 respectively, and x_dis_lowthres, x_dis_highthres, y_dis_lowthres, y_dis_highthres are preset thresholds on the spacing between the left and right edges of a lane line; if l_1 and l_2 are judged to belong to the same lane line, their end-point coordinates determine which is the left edge and which the right:

edge_left = l_1 if l_1.x < l_2.x, otherwise l_2

edge_right = l_2 if l_1.x < l_2.x, otherwise l_1

wherein edge_left and edge_right are respectively the left and right edges of the lane line, and after the edges are divided, the vertex information belonging to each virtual lane line can be extracted;
7) and based on the left and right edge dividing results of the lane line obtained by the processing in the step 6), respectively fitting virtual lane lines and real lane lines, wherein the real lane lines are fitted by adopting a least square method, and the virtual lane lines are accurately fitted according to the obtained endpoint information, so that the final lane line detection result is obtained.
2. The method for detecting the virtual and real lane lines in the traffic monitoring scene according to claim 1, wherein step 1) generates the lane line image without moving vehicles based on mixed Gaussian background modeling of pixel processing, which comprises the following specific steps:
1.1) first acquiring continuous video frames and, for each frame, determining the probability distribution function obeyed by the sampled pixel set x = {x_1, x_2, ..., x_n} of the frame at the current time t, with parameters initialized in advance: weight w_k = w_0 = 0, mean μ_k = μ_0 = 0 and covariance σ_k = σ_0 = 0, where k denotes the index of the probability distribution mode;
1.2) comparing x with the pixel mean μ_k; if

‖x − μ_k‖ < τσ_k,  k ∈ [1...K]

then the sample set of the current frame matches mode k, K being the number of probability distribution modes and τ a set threshold; on a match, the following parameters are updated:

w_k(t) = (1 − α)·w_k(t−1) + α·M_k(t)

μ_k(t) = (1 − β)·μ_k(t−1) + β·x

σ_k²(t) = (1 − β)·σ_k²(t−1) + β·(x − μ_k(t))ᵀ(x − μ_k(t))

where t denotes the time of the current frame and M_k(t) indicates whether the current frame is background: when mode k matches, i.e. the current frame belongs to the background, M_k(t) = 1, otherwise M_k(t) = 0; the learning rate α is a constant and β = α·η(x; μ_k, σ_k), η(x; μ_k, σ_k) being a normal density function; if no mode matches, no update is performed, the parameters w_k, μ_k, σ_k are reinitialized, and the next frame is processed;
1.3) repeating the step 1.1) and the step 1.2) until all video frames are processed;
1.4) sorting the distribution modes in descending order of w_k/σ_k, i.e. modes with large weight and small covariance come first;
1.5) finally selecting the first B modes as the background:

B = argmin_b ( Σ_{k=1}^{b} w_k > λ )

wherein K_B denotes the first B modes selected from the K mode distributions and λ is a set threshold.
3. The method for detecting virtual and real lane lines in a traffic monitoring scene as claimed in claim 1, wherein step 4) detects straight line segments in the image with the PPHT, which represents straight lines of the rectangular image coordinate system in a polar parameter space; one point in the rectangular coordinate system corresponds to one curve in the parameter space, and if several such curves intersect at one point and the number of intersecting curves reaches the PPHT's minimum vote count, the corresponding points in the rectangular coordinate system lie on the same straight line segment, whereby the segment is detected and its start-point and end-point information obtained.
4. The traffic monitoring scene virtual and real lane line detection method according to claim 1, wherein the step 5) is specifically as follows:
giving the number K of cluster centers for the K-Means clustering algorithm, the endpoint description of each detected line segment is first converted to slope-intercept form, i.e. y = slope·x + intercept, and K-Means clustering is then performed on the (slope, intercept) pairs, where slope is the slope of the segment and intercept is its intercept, defined as:
slope = (end_y − start_y) / (end_x − start_x)
intercept = end_y − slope·end_x
wherein (start_x, start_y) and (end_x, end_y) are respectively the start point and the end point of the straight line segment.
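The conversion and clustering of claim 4 can be sketched as follows. The K-Means here is a plain illustrative version (random initialization, fixed iteration count), and the sample segments are hypothetical; vertical segments (end_x = start_x) are excluded, since their slope is undefined in this parameterization.

```python
import numpy as np

def segment_features(segments):
    """Convert (start_x, start_y, end_x, end_y) segments into
    (slope, intercept) pairs as defined in the claim (non-vertical only)."""
    feats = []
    for sx, sy, ex, ey in segments:
        slope = (ey - sy) / (ex - sx)
        feats.append((slope, ey - slope * ex))
    return np.array(feats)

def kmeans(X, k, iters=50, seed=0):
    """Plain K-Means: random initial centers, then alternating
    assignment and center update for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Segments belonging to the same lane line have nearly identical (slope, intercept) pairs, so each resulting cluster gathers the fragments of one lane line.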
5. The method for detecting virtual and real lane lines in a traffic monitoring scene as claimed in claim 1, wherein step 7) fits the virtual and real lane lines: the real (solid) lane lines are fitted by the least square method, and the virtual (dashed) lane lines are fitted accurately from the vertex information of the virtual lane lines acquired in step 6), the vertex information being the start and end points of the left and right edges.
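The solid-line fit of claim 5 is ordinary least squares; a minimal sketch on synthetic edge points (the line y = 0.8x + 40 and the noise level are assumed for illustration):

```python
import numpy as np

# Synthetic edge points along an assumed solid lane line y = 0.8x + 40
rng = np.random.default_rng(1)
x = np.linspace(0, 200, 50)
y = 0.8 * x + 40 + rng.normal(0, 0.5, size=x.size)

# Degree-1 least-squares fit, as used for the solid (real) lane lines
slope, intercept = np.polyfit(x, y, 1)
```

For the dashed lines no global fit is needed: the recovered vertices (start/end points of each dash's left and right edges) already delimit the painted strokes, so each dash can be drawn directly from them.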
CN201711229332.7A 2017-11-29 2017-11-29 Virtual and real lane line detection method for traffic monitoring scene Active CN108052880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711229332.7A CN108052880B (en) 2017-11-29 2017-11-29 Virtual and real lane line detection method for traffic monitoring scene

Publications (2)

Publication Number Publication Date
CN108052880A CN108052880A (en) 2018-05-18
CN108052880B true CN108052880B (en) 2021-09-28

Family

ID=62121469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711229332.7A Active CN108052880B (en) 2017-11-29 2017-11-29 Virtual and real lane line detection method for traffic monitoring scene

Country Status (1)

Country Link
CN (1) CN108052880B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113793356B (en) * 2018-09-30 2023-06-23 百度在线网络技术(北京)有限公司 Lane line detection method and device
CN111433780A (en) * 2018-11-29 2020-07-17 深圳市大疆创新科技有限公司 Lane line detection method, lane line detection apparatus, and computer-readable storage medium
CN111750878B (en) * 2019-03-28 2022-06-24 北京魔门塔科技有限公司 Vehicle pose correction method and device
CN110006440B (en) * 2019-04-12 2021-02-05 北京百度网讯科技有限公司 Map relation expression method and device, electronic equipment and storage medium
CN110163109B (en) * 2019-04-23 2021-09-17 浙江大华技术股份有限公司 Lane line marking method and device
CN110705342A (en) * 2019-08-20 2020-01-17 上海阅面网络科技有限公司 Lane line segmentation detection method and device
CN113255404A (en) * 2020-02-11 2021-08-13 北京百度网讯科技有限公司 Lane line recognition method and device, electronic device and computer-readable storage medium
CN111341103B (en) * 2020-03-03 2021-04-27 鹏城实验室 Lane information extraction method, device, equipment and storage medium
CN111488808B (en) * 2020-03-31 2023-09-29 杭州诚道科技股份有限公司 Lane line detection method based on traffic violation image data
CN113836978A (en) * 2020-06-24 2021-12-24 富士通株式会社 Road area determination device and method and electronic equipment
CN112307953A (en) * 2020-10-29 2021-02-02 无锡物联网创新中心有限公司 Clustering-based adaptive inverse perspective transformation lane line identification method and system
CN112507867B (en) * 2020-12-04 2022-04-22 华南理工大学 Lane line detection method based on EDLines line characteristics
CN115049994B (en) * 2021-02-25 2024-06-11 广州汽车集团股份有限公司 Lane line detection method and system and computer readable storage medium
CN113256665B (en) * 2021-05-26 2023-08-08 长沙以人智能科技有限公司 Method for detecting position relationship between motor vehicle and virtual and actual lines based on image processing
CN113822218A (en) * 2021-09-30 2021-12-21 厦门汇利伟业科技有限公司 Lane line detection method and computer-readable storage medium
CN113781482B (en) * 2021-11-11 2022-02-15 山东精良海纬机械有限公司 Method and system for detecting crack defects of mechanical parts in complex environment
CN114485716A (en) * 2021-12-28 2022-05-13 北京百度网讯科技有限公司 Lane rendering method and device, electronic equipment and storage medium
CN114724108B (en) * 2022-03-22 2024-02-02 北京百度网讯科技有限公司 Lane line processing method and device
CN114724117B (en) * 2022-06-07 2022-09-13 腾讯科技(深圳)有限公司 Lane line key point data generation method and device, electronic equipment and storage medium
CN115482477B (en) * 2022-09-14 2023-05-30 北京远度互联科技有限公司 Road identification method, device, unmanned aerial vehicle, equipment and storage medium
CN116503818A (en) * 2023-04-27 2023-07-28 内蒙古工业大学 Multi-lane vehicle speed detection method and system
CN117392634B (en) * 2023-12-13 2024-02-27 上海闪马智能科技有限公司 Lane line acquisition method and device, storage medium and electronic device

Citations (10)

Publication number Priority date Publication date Assignee Title
CN102184535A (en) * 2011-04-14 2011-09-14 西北工业大学 Method for detecting boundary of lane where vehicle is
CN102663356A (en) * 2012-03-28 2012-09-12 柳州博实唯汽车科技有限公司 Method for extraction and deviation warning of lane line
US8520954B2 (en) * 2010-03-03 2013-08-27 Denso Corporation Apparatus for detecting lane-marking on road
CN103440649A (en) * 2013-08-23 2013-12-11 安科智慧城市技术(中国)有限公司 Detection method and device for lane boundary line
CN103617412A (en) * 2013-10-31 2014-03-05 电子科技大学 Real-time lane line detection method
CN103832433A (en) * 2012-11-21 2014-06-04 中国科学院沈阳计算技术研究所有限公司 Lane departure and front collision warning system and achieving method thereof
CN104751151A (en) * 2015-04-28 2015-07-01 苏州安智汽车零部件有限公司 Method for identifying and tracing multiple lanes in real time
CN105459919A (en) * 2014-09-30 2016-04-06 富士重工业株式会社 Vehicle sightline guidance apparatus
CN106803066A (en) * 2016-12-29 2017-06-06 广州大学 A kind of vehicle yaw angle based on Hough transform determines method
CN106991401A (en) * 2017-04-06 2017-07-28 大连理工大学 A kind of method for detecting lane lines based on K means clustering algorithms



Similar Documents

Publication Publication Date Title
CN108052880B (en) Virtual and real lane line detection method for traffic monitoring scene
CN109961049B (en) Cigarette brand identification method under complex scene
CN108549864B (en) Vehicle-mounted thermal imaging pedestrian detection-oriented region-of-interest filtering method and device
Hadi et al. Vehicle detection and tracking techniques: a concise review
WO2019196130A1 (en) Classifier training method and device for vehicle-mounted thermal imaging pedestrian detection
CN104951784B (en) A kind of vehicle is unlicensed and license plate shading real-time detection method
CN104268583B (en) Pedestrian re-recognition method and system based on color area features
Huang et al. Vehicle detection and inter-vehicle distance estimation using single-lens video camera on urban/suburb roads
Yuan et al. Robust lane detection for complicated road environment based on normal map
Kühnl et al. Monocular road segmentation using slow feature analysis
CN107256633B (en) Vehicle type classification method based on monocular camera three-dimensional estimation
GB2526658A (en) An efficient method of offline training a special-type parked vehicle detector for video-based on-street parking occupancy detection systems
CN112825192B (en) Object identification system and method based on machine learning
CN113516853B (en) Multi-lane traffic flow detection method for complex monitoring scene
CN103400113B (en) Freeway tunnel pedestrian detection method based on image procossing
EP2813973B1 (en) Method and system for processing video image
CN110648342A (en) Foam infrared image segmentation method based on NSST significance detection and image segmentation
Siogkas et al. Random-walker monocular road detection in adverse conditions using automated spatiotemporal seed selection
CN107578048B (en) Vehicle type rough classification-based far-view scene vehicle detection method
CN109886168B (en) Ground traffic sign identification method based on hierarchy
CN107315998A (en) Vehicle class division method and system based on lane line
Rabiu Vehicle detection and classification for cluttered urban intersection
Xia et al. Vehicles overtaking detection using RGB-D data
CN114359876A (en) Vehicle target identification method and storage medium
Su et al. A new local-main-gradient-orientation HOG and contour differences based algorithm for object classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant