CN111079541B - Road stop line detection method based on monocular vision - Google Patents

Road stop line detection method based on monocular vision

Info

Publication number
CN111079541B
CN111079541B (application CN201911137093.1A)
Authority
CN
China
Prior art keywords
straight line
points
initial
effective
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911137093.1A
Other languages
Chinese (zh)
Other versions
CN111079541A (en)
Inventor
刘永刚
郭缘
熊周兵
文滔
陈峥
秦大同
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201911137093.1A
Publication of CN111079541A
Application granted
Publication of CN111079541B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/582 - Recognition of traffic signs
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/34 - Smoothing or thinning of the pattern; Morphological operations; Skeletonisation

Abstract

The invention provides a road stop line detection method based on monocular vision. The detection method comprises graying the road image information, Gaussian filtering and smoothing, setting an ROI, obtaining a gradient valid-point grayscale map, obtaining a region-growing source map, obtaining a region-growing result map of valid points, screening an initial target straight line, and determining the final position of the stop line. The detection method can accurately identify the stop line in the road in real time and obtain its position in the image, and, combined with a coordinate conversion technique, can obtain the position of the stop line on the actual road in real time.

Description

Road stop line detection method based on monocular vision
Technical Field
The invention relates to the field of intelligent driving image processing target detection, in particular to a road stop line detection method based on monocular vision.
Background
Recognition of road traffic lines is an important component of intelligent vehicle technology. There are many kinds of road traffic lines; the stop line is an important traffic marking that generally appears at road intersections. Accurate recognition of the stop line lets the vehicle know where the intersection is, so that it can decelerate in advance and drive in a more disciplined manner. It helps avoid running traffic lights by mistake and scraping against other vehicles and pedestrians, improving the driving safety of intelligent vehicles. Combined with other positioning technologies such as GPS, stop-line recognition supports accurate localization while the intelligent vehicle is driving, enabling more accurate decision making and control. Accurate recognition of the stop line is therefore one of the important components in realizing ADAS and automated driving.
Disclosure of Invention
The invention aims to provide a road stop line detection method based on monocular vision to solve the problems in the prior art.
The technical solution adopted to achieve this aim is a road stop line detection method based on monocular vision, comprising the following steps:
1) Convert the image information acquired by the vehicle-mounted monocular camera to grayscale.
2) Set the ROI according to the distribution characteristics of the road traffic lines in the image.
3) Smooth the grayscale image within the ROI using Gaussian filtering.
4) Apply gray-level stretching to the image within the ROI to enhance its contrast.
5) Calculate the gray gradient values of every pixel in the ROI in the x and y directions and the gradient direction angle θ(i,j). The pixel in the i-th column and j-th row of the image matrix is I(i,j), and A is a 3x3 matrix. Gx(i,j) is the gray gradient of pixel I in the x direction, Gy(i,j) is the gray gradient of pixel I in the y direction, and g(i,j) is the gray value of pixel I.
A = | g(i-1,j-1)  g(i,j-1)  g(i+1,j-1) |
    | g(i-1,j)    g(i,j)    g(i+1,j)   |
    | g(i-1,j+1)  g(i,j+1)  g(i+1,j+1) |   (1)
Gx(i,j)=(g(i+1,j-1)+g(i+1,j)+g(i+1,j+1)-g(i-1,j-1)-g(i-1,j)-g(i-1,j+1)) (2)
Gy(i,j)=(g(i-1,j+1)+g(i,j+1)+g(i+1,j+1)-g(i-1,j-1)-g(i,j-1)-g(i+1,j-1)) (3)
θ(i,j) = arctan(Gy(i,j) / Gx(i,j))   (4)
In the formula, θ(i,j) has a value range of [0°, 180°].
6) The gradient mean and the gray level mean within the ROI are calculated.
Gyaver = (Σ Gy(i,j)) / (r·c), summed over all pixels in the ROI   (5)
gaver = (Σ g(i,j)) / (r·c), summed over all pixels in the ROI   (6)
In the formulas, Gyaver is the mean gradient in the y direction, gaver is the mean gray level, r is the number of pixel rows in the ROI, and c is the number of pixel columns in the ROI.
7) Determine the y-direction gradient judgment coefficient Gyjudge, the gray-level judgment coefficient gjudge, and the gradient angle threshold θjudge according to how the gradient and gray level vary within the ROI. Here, 90° ≥ θjudge ≥ 0°.
8) Judge the validity of the pixels and clear the invalid points to obtain the initial valid gradient map (an illustrative sketch of steps 1) to 8) is given after the step list). An initial valid point has the following characteristics:
g(i,j) ≥ gaver · gjudge
Gy(i,j) ≥ Gyaver · Gyjudge
(180° - θjudge) ≥ θ(i,j) ≥ θjudge
9) Traverse the initial valid gradient map, judge whether the number of initial valid points in each initial valid point's neighborhood is greater than or equal to the interference point threshold Ne, and remove the interference points to obtain the region-growing source map.
10) Scan the region-growing source map from bottom to top according to the distribution characteristics of the road traffic lines to obtain the initial seed point set for region growing.
11) Perform region growing in the source map from the seed coordinates in the initial seed set to obtain a region-growing result map composed of valid points carrying lane information.
12) Apply the Hough transform to the region-growing result map to obtain the set of straight lines in it.
13) Analyze and screen the straight line set to obtain an initial target straight line that meets the requirements.
14) Determine an initial scanning area from the initial target straight line.
15) Scan the initial scanning area and obtain the final scanning area from the scanning result.
16) Count the valid points in the final scanning area and calculate their percentage of the total number of pixels in the final scanning area. When this percentage is greater than or equal to the density threshold Dthres, the initial target straight line is judged to be a valid straight line. When it is less than Dthres, the initial target straight line is judged to be invalid and no stop line is considered to exist in the road at that moment.
17) Extract the image coordinates of the valid initial target straight line, identify the stop line at those coordinates, and display the stop line position in the original image.
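For illustration only, a minimal Python/OpenCV sketch of steps 1) to 8) is given below. The ROI tuple format, the 5x5 Gaussian kernel, and the default coefficient values gy_judge, g_judge and theta_judge are assumptions chosen for demonstration; the gradient kernels follow equations (2) and (3), and the absolute y-gradient is used in the validity test.

```python
import cv2
import numpy as np

def initial_valid_gradient_map(bgr, roi, gy_judge=1.2, g_judge=1.1, theta_judge=20.0):
    """Steps 1)-8): graying, ROI, smoothing, stretching, gradients and the validity test."""
    x0, y0, w, h = roi                                        # step 2) ROI as (x, y, width, height)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)              # step 1) graying
    g = gray[y0:y0 + h, x0:x0 + w].astype(np.float32)
    g = cv2.GaussianBlur(g, (5, 5), 0)                        # step 3) Gaussian smoothing
    g = cv2.normalize(g, None, 0.0, 255.0, cv2.NORM_MINMAX)   # step 4) gray-level stretching
    # step 5) gradients following equations (2) and (3)
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)
    ky = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], np.float32)
    gx = cv2.filter2D(g, -1, kx)
    gy = cv2.filter2D(g, -1, ky)
    theta = np.degrees(np.arctan2(np.abs(gy), gx))            # direction angle folded into [0, 180]
    gy_aver = np.mean(np.abs(gy))                             # step 6) gradient mean (y direction)
    g_aver = np.mean(g)                                       # step 6) gray-level mean
    # steps 7)-8): keep only the initial valid points
    valid = (g >= g_aver * g_judge) \
        & (np.abs(gy) >= gy_aver * gy_judge) \
        & (theta >= theta_judge) & (theta <= 180.0 - theta_judge)
    return valid.astype(np.uint8), gx, gy
```

The function returns a binary map in which valid points are 1; the later steps operate on this map. A call such as initial_valid_gradient_map(frame, (0, 300, 1280, 420)) uses a purely illustrative ROI.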
Further, in step 9), an interference point is a pixel whose surrounding neighborhood contains fewer valid points than the threshold Ne.
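A sketch of this interference-point removal, assuming a 3x3 neighbourhood that does not count the centre pixel and the example value Ne = 3 from the embodiment:

```python
import cv2
import numpy as np

def remove_interference_points(valid, ne=3):
    """Step 9): keep a valid point only if its surrounding neighbourhood holds at least ne valid points."""
    kernel = np.ones((3, 3), np.float32)
    kernel[1, 1] = 0.0                                        # the centre pixel itself is not counted (assumption)
    neighbour_count = cv2.filter2D(valid.astype(np.float32), -1, kernel)
    return ((valid > 0) & (neighbour_count >= ne)).astype(np.uint8)
```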
Further, step 11) specifically comprises the following steps:
11.1) Pop a seed point to obtain its coordinate information.
11.2) Traverse a neighborhood of a certain size around the seed point; if valid points exist in the neighborhood, push all of them onto the stack and mark them.
11.3) Repeat steps 11.1) and 11.2) until the stack is empty, obtaining a region-growing result map composed of valid points carrying lane information.
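A minimal sketch of this stack-based region growing, assuming an 8-connected 3x3 neighbourhood and seeds given as (row, column) pairs:

```python
import numpy as np

def region_grow(source, seeds):
    """Steps 11.1)-11.3): grow regions of valid points from the initial seed points."""
    grown = np.zeros_like(source)
    stack = list(seeds)
    while stack:                                              # 11.3) repeat until the stack is empty
        r, c = stack.pop()                                    # 11.1) pop a seed and read its coordinates
        for dr in (-1, 0, 1):                                 # 11.2) traverse the 3x3 neighbourhood
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < source.shape[0] and 0 <= cc < source.shape[1] \
                        and source[rr, cc] and not grown[rr, cc]:
                    grown[rr, cc] = 1                         # mark the valid point
                    stack.append((rr, cc))                    # push it for further growth
    return grown
```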
Further, in step 13), the line information in the line set is the coordinate information of the two end points of every line in the set. The length of each line segment and its angle with the positive x direction are calculated from the coordinates, and the initial target straight line linit is then screened using a line-length threshold Lthres and an angle threshold θthres. linit is the longest line in the set that satisfies the angle requirement.
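As an illustration, the screening might look as follows. The segment format (x1, y1, x2, y2), for example as returned by cv2.HoughLinesP, and the default thresholds are assumptions (the embodiment below uses Lthres = 50 pixels and θthres = 20°).

```python
import numpy as np

def screen_initial_target_line(lines, l_thres=50.0, theta_thres=20.0):
    """Step 13): return the longest near-horizontal segment, or None if no line qualifies."""
    best, best_len = None, 0.0
    for x1, y1, x2, y2 in lines:
        length = np.hypot(x2 - x1, y2 - y1)
        # folding the angle with the positive x direction into [0, 90] covers both
        # 0 <= theta_line <= theta_thres and 180 - theta_thres <= theta_line <= 180
        angle = np.degrees(np.arctan2(abs(y2 - y1), abs(x2 - x1)))
        if length >= l_thres and angle <= theta_thres and length > best_len:
            best, best_len = (x1, y1, x2, y2), length
    return best
```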
Further, in step 14), the length and width of the initial scanning area are both greater than the x-coordinate and y-coordinate ranges of the initial target straight line, and the initial scanning area is band-shaped.
Further, in step 15), the initial scanning area is traversed and the valid points at its two ends are found. The length of the final scanning area is determined by the difference of the x coordinates of these end valid points, and this length, together with the slope of the initial target straight line, a set pixel width, and the midpoint coordinate of the initial target straight line, determines the shape, size and position of the final scanning area.
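A sketch of the density test of step 16) over such a slanted scanning band; the band width and the density threshold Dthres are illustrative assumptions, not values fixed by the description.

```python
import numpy as np

def stop_line_density_ok(valid, line, band_width=10, d_thres=0.4):
    """Steps 15)-16): count valid points in a band around the initial target line and compare
    their share of the band's pixels with the density threshold."""
    x1, y1, x2, y2 = line
    if x1 == x2:
        return False                                          # a vertical segment is not a stop-line candidate here
    slope = (y2 - y1) / float(x2 - x1)
    half = band_width // 2
    total = hits = 0
    for x in range(min(x1, x2), max(x1, x2) + 1):
        yc = int(round(y1 + slope * (x - x1)))                # centre of the band at this column
        lo, hi = max(yc - half, 0), min(yc + half + 1, valid.shape[0])
        if 0 <= x < valid.shape[1] and hi > lo:
            column = valid[lo:hi, x]
            total += column.size
            hits += int(np.count_nonzero(column))
    return total > 0 and hits / total >= d_thres
```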
The technical effects of the invention are undoubted:
A. the stop line in the road can be accurately identified in real time, the position information of the stop line in the picture is obtained, and the position of the stop line in the actual road can be obtained in real time by combining a coordinate conversion technology;
B. by selecting the specific region of interest, the processing region is greatly reduced, so that the interference is reduced, and the processing speed is improved;
C. the algorithm is simple, and the real-time performance of the system is greatly improved.
Drawings
FIG. 1 is a flow chart of a detection method;
FIG. 2 is a schematic view of an intersection stop line;
FIG. 3 is a schematic view of a camera image;
FIG. 4 is a diagram of a region growing source;
FIG. 5 is a schematic view of a scanning of a region growing source map;
FIG. 6 is a graph showing the result of ideal region growing;
FIG. 7 is a schematic view of an initial scanning area;
FIG. 8 is a schematic view of a final scan area;
FIG. 9 is a schematic view of stop-line identification I;
FIG. 10 is a schematic view of stop-line identification II.
Detailed Description
The present invention is further illustrated by the following examples, but the scope of the above-described subject matter should not be construed as limited to them. Various substitutions and alterations made according to common technical knowledge and conventional means in the field, without departing from the technical idea of the invention, fall within the scope of the present invention.
Example 1:
Referring to fig. 1, the present embodiment discloses a road stop line detection method based on monocular vision, including the following steps:
1) Convert the image information acquired by the vehicle-mounted monocular camera to grayscale. A schematic diagram of an intersection stop line is shown in fig. 2, and a schematic view of the camera image is shown in fig. 3.
2) Set the ROI (region of interest) according to the distribution characteristics of the road traffic lines in the image, reducing the subsequent computation.
3) Smooth the grayscale image within the ROI using Gaussian filtering.
4) Apply gray-level stretching to the image within the ROI to enhance its contrast.
5) Calculate the gray gradient values of every pixel in the ROI in the x and y directions and the gradient direction angle θ(i,j). The pixel in the i-th column and j-th row of the image matrix is I(i,j), and A is a 3x3 matrix. Gx(i,j) is the gray gradient of pixel I in the x direction, Gy(i,j) is the gray gradient of pixel I in the y direction, and g(i,j) is the gray value of pixel I.
A = | g(i-1,j-1)  g(i,j-1)  g(i+1,j-1) |
    | g(i-1,j)    g(i,j)    g(i+1,j)   |
    | g(i-1,j+1)  g(i,j+1)  g(i+1,j+1) |   (1)
Gx(i,j)=(g(i+1,j-1)+g(i+1,j)+g(i+1,j+1)-g(i-1,j-1)-g(i-1,j)-g(i-1,j+1)) (2)
Gy(i,j)=(g(i-1,j+1)+g(i,j+1)+g(i+1,j+1)-g(i-1,j-1)-g(i,j-1)-g(i+1,j-1)) (3)
θ(i,j) = arctan(Gy(i,j) / Gx(i,j))   (4)
6) Calculate the gradient mean and gray-level mean within the ROI, where Gyaver is the mean gradient in the y direction, gaver is the mean gray level, r is the number of pixel rows in the ROI, and c is the number of pixel columns in the ROI:
Gyaver = (Σ Gy(i,j)) / (r·c), summed over all pixels in the ROI   (5)
gaver = (Σ g(i,j)) / (r·c), summed over all pixels in the ROI   (6)
7) Determine the y-direction gradient judgment coefficient Gyjudge, the gray-level judgment coefficient gjudge, and the gradient angle threshold θjudge according to how the gradient and gray level vary within the ROI; 90° ≥ θjudge ≥ 0°. For example: Gyjudge = 1.2, gjudge = 1.1 and θjudge = 20°.
8) Judge the validity of the pixels and clear the invalid points to obtain the initial valid gradient map. An initial valid point has the following characteristics:
g(i,j) ≥ gaver · gjudge
Gy(i,j) ≥ Gyaver · Gyjudge
(180° - θjudge) ≥ θ(i,j) ≥ θjudge
9) Traverse the initial valid gradient map and judge whether the number of initial valid points in each initial valid point's neighborhood is greater than or equal to the threshold Ne, thereby removing the interference points and obtaining the region-growing source map. An interference point is a pixel whose surrounding neighborhood (a 3x3 neighborhood in this example) contains fewer valid points than the threshold Ne (Ne = 3 in this example). The number of valid points in the neighborhood around each pixel is compared with Ne: if it is greater than or equal to Ne the point is retained, if it is less than Ne the point is discarded, and finally the region-growing source map is obtained, see fig. 4.
10) Scan the region-growing source map from bottom to top according to the distribution characteristics of the road traffic lines to obtain the initial seed point set for region growing. In this embodiment, the scan lines are located at x-direction coordinates c/36 × 27 and c/36 × 9, see fig. 5. When a scan line meets a valid point, that point is taken as a seed for subsequent region growing and pushed into the seed stack; when the number of seeds from one scan line exceeds 2, scanning of that line is stopped (an illustrative sketch of this scanning step is given after the last step of this example).
11) Perform region growing in the region-growing source map from the seed coordinates in the initial seed stack to obtain a region-growing result map composed of valid points carrying lane information. After region growing, a result map containing the stop-line information is obtained; the ideal case is shown in fig. 6.
11.1) Pop a seed point to obtain its coordinate information.
11.2) Traverse the 3x3 neighborhood around the seed point; if valid points exist in it, push all of them onto the stack and mark them.
11.3) Repeat steps 11.1) and 11.2) until the stack is empty.
12) Apply the Hough transform to the region-growing result map to obtain the set of straight lines in it.
13) Analyze and screen the straight line set to obtain an initial target straight line that meets the requirements.
The line information in the line set is the coordinate information of the two end points of every line in the set. The length Lline of each line segment and its angle θline with the positive x direction are calculated from the coordinates, and the initial target straight line linit is then screened using a length threshold Lthres and an angle threshold θthres (in this example, Lthres = 50 pixels and θthres = 20°). linit is the longest line in the set that satisfies the angle requirement (0° ≤ θline ≤ θthres or 180° - θthres ≤ θline ≤ 180°).
14) Determine an initial scanning area from the initial target straight line. Using the coordinates of the two end points of the initial target straight line, an initial scanning area of suitable width and length is determined (in this example, its length equals the length of the ROI and its width is 60 pixels). The valid points in the initial scanning area are scanned, the valid points at its two ends are found, and their coordinates are recorded; the initial scanning area is shown in fig. 7.
15) Obtain the final scanning area from the scanning result. The slope of the initial target straight line linit is calculated from its coordinate information. The final scanning area is then determined by the slope of linit, the midpoint coordinate of the initial scanning line, the set width Wfinal of the final scanning area (its y-direction coordinate range), and a length given by the difference of the x coordinates of the two outermost valid points in the initial scanning area, as shown in fig. 8.
16) Traverse the final scanning area, count the valid points in it, and calculate their percentage of the total number of pixels in the final scanning area. When the percentage is greater than or equal to the density threshold Dthres, the initial target straight line is judged to be a valid straight line. When the percentage is less than Dthres, the initial target straight line is judged to be invalid and no stop line is considered to exist in the road at that moment.
17) Extract the coordinate information of the valid initial target straight line, determine that the stop line is located at its position, and display the stop line in the original image.
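A sketch of the bottom-up seed scanning of step 10). The x positions of the scan lines are written as multiples of c/36 and the stop condition of more than 2 seeds per line follows the text of this embodiment; everything else (the function name, the (row, column) seed format) is an assumption for illustration.

```python
def collect_seed_points(source, seed_limit=2):
    """Step 10): scan the region-growing source map bottom-up along vertical scan lines and
    push the valid points that are met into the seed stack."""
    rows, cols = source.shape
    scan_columns = [cols // 36 * k for k in (9, 27)]          # scan-line x coordinates (c/36*9, c/36*27)
    seeds = []
    for x in scan_columns:
        found = 0
        for y in range(rows - 1, -1, -1):                     # bottom-up scan
            if source[y, x]:
                seeds.append((y, x))
                found += 1
                if found > seed_limit:                        # stop this line once it yields more than 2 seeds
                    break
    return seeds
```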
Referring to fig. 9, 9a is the Gaussian-filtered grayscale image, 9b and 9c are stop-line recognition result images, and 9d is the region-growing result image. Figure 9d shows that the region-growing result map contains the stop-line information while most of the interference has been removed; 9b and 9c show that the recognition result of the algorithm is accurate. Referring to fig. 10, 10a is the recognition result, 10b is the region-growing source map after the interference points have been removed by the neighborhood threshold, 10c is the initial scanning area, and 10d is the final scanning area. The initial scanning area is parallel to the x axis because it does not take the slope of the initial target line into account, whereas the final scanning area does, so it is a slanted band. Since the difference of the x coordinates of the valid points at the two ends of the initial target region equals the length of the ROI, the length of the final scanning area also equals the length of the ROI.
Example 2:
This embodiment discloses a basic road stop line detection method based on monocular vision, comprising the following steps:
1) Convert the image information acquired by the vehicle-mounted monocular camera to grayscale.
2) Set the ROI according to the distribution characteristics of the road traffic lines in the image.
3) Smooth the grayscale image within the ROI using Gaussian filtering.
4) Apply gray-level stretching to the image within the ROI to enhance its contrast.
5) Calculate the gray gradient values of every pixel in the ROI in the x and y directions and the gradient direction angle θ(i,j). The pixel in the i-th column and j-th row of the image matrix is I(i,j), and A is a 3x3 matrix. Gx(i,j) is the gray gradient of pixel I in the x direction, Gy(i,j) is the gray gradient of pixel I in the y direction, and g(i,j) is the gray value of pixel I.
A = | g(i-1,j-1)  g(i,j-1)  g(i+1,j-1) |
    | g(i-1,j)    g(i,j)    g(i+1,j)   |
    | g(i-1,j+1)  g(i,j+1)  g(i+1,j+1) |   (1)
Gx(i,j)=(g(i+1,j-1)+g(i+1,j)+g(i+1,j+1)-g(i-1,j-1)-g(i-1,j)-g(i-1,j+1)) (2)
Gy(i,j)=(g(i-1,j+1)+g(i,j+1)+g(i+1,j+1)-g(i-1,j-1)-g(i,j-1)-g(i+1,j-1)) (3)
θ(i,j) = arctan(Gy(i,j) / Gx(i,j))   (4)
In the formula, θ(i,j) has a value range of [0°, 180°].
6) The gradient mean and the gray level mean within the ROI are calculated.
Gyaver = (Σ Gy(i,j)) / (r·c), summed over all pixels in the ROI   (5)
gaver = (Σ g(i,j)) / (r·c), summed over all pixels in the ROI   (6)
In the formulas, Gyaver is the mean gradient in the y direction, gaver is the mean gray level, r is the number of pixel rows in the ROI, and c is the number of pixel columns in the ROI.
7) Determine the y-direction gradient judgment coefficient Gyjudge, the gray-level judgment coefficient gjudge, and the gradient angle threshold θjudge according to how the gradient and gray level vary within the ROI. Here, 90° ≥ θjudge ≥ 0°.
8) Judge the validity of the pixels and clear the invalid points to obtain the initial valid gradient map. An initial valid point has the following characteristics:
g(i,j) ≥ gaver · gjudge
Gy(i,j) ≥ Gyaver · Gyjudge
(180° - θjudge) ≥ θ(i,j) ≥ θjudge
9) Traverse the initial valid gradient map, judge whether the number of initial valid points in each initial valid point's neighborhood is greater than or equal to the interference point threshold Ne, and remove the interference points to obtain the region-growing source map.
10) Scan the region-growing source map from bottom to top according to the distribution characteristics of the road traffic lines to obtain the initial seed point set for region growing.
11) Perform region growing in the source map from the seed coordinates in the initial seed set to obtain a region-growing result map composed of valid points carrying lane information.
12) Apply the Hough transform to the region-growing result map to obtain the set of straight lines in it.
13) Analyze and screen the straight line set to obtain an initial target straight line that meets the requirements.
14) Determine an initial scanning area from the initial target straight line.
15) Scan the initial scanning area and obtain the final scanning area from the scanning result.
16) Count the valid points in the final scanning area and calculate their percentage of the total number of pixels in the final scanning area. When this percentage is greater than or equal to the density threshold Dthres, the initial target straight line is judged to be a valid straight line. When it is less than Dthres, the initial target straight line is judged to be invalid and no stop line is considered to exist in the road at that moment.
17) Extract the image coordinates of the valid initial target straight line, identify the stop line at those coordinates, and display the stop line position in the original image.
Example 3:
The main steps of this embodiment are the same as those of embodiment 2, except that in step 9) an interference point is a pixel whose surrounding neighborhood contains fewer valid points than the threshold Ne.
Example 4:
The main steps of this embodiment are the same as those of embodiment 2, wherein step 11) specifically includes the following steps:
11.1) Pop a seed point to obtain its coordinate information.
11.2) Traverse a neighborhood of a certain size around the seed point; if valid points exist in the neighborhood, push all of them onto the stack and mark them.
11.3) Repeat steps 11.1) and 11.2) until the stack is empty, obtaining a region-growing result map composed of valid points carrying lane information.
Example 5:
The main steps of this embodiment are the same as those of embodiment 2, except that in step 13) the line information in the line set is the coordinate information of the two end points of every line in the set. The length of each line segment and its angle with the positive x direction are calculated from the coordinates, and the initial target straight line linit is then screened using a line-length threshold Lthres and an angle threshold θthres. linit is the longest line in the set that satisfies the angle requirement.
Example 6:
The main steps of this embodiment are the same as those of embodiment 2, except that in step 14) the length and width of the initial scanning area are both greater than the x-coordinate and y-coordinate ranges of the initial target straight line, and the initial scanning area is band-shaped.
Example 7:
The main steps of this embodiment are the same as those of embodiment 2, except that in step 15) the initial scanning area is traversed and the valid points at its two ends are found; the length of the final scanning area is given by the difference of the x coordinates of these end valid points, and this length, together with the slope of the initial target straight line, a set pixel width, and the midpoint coordinate of the initial target straight line, determines the shape, size and position of the final scanning area.

Claims (6)

1. A road stop line detection method based on monocular vision is characterized by comprising the following steps:
1) carrying out gray processing on image information acquired by the vehicle-mounted monocular camera;
2) setting an ROI according to the road traffic line distribution characteristics in the image;
3) smoothing the gray level image in the ROI by Gaussian filtering;
4) carrying out gray stretching on the image in the ROI to enhance the contrast of the image;
5) calculating the gray gradient values of all pixels in the ROI in the x and y directions and the gradient direction angle θ(i,j); the pixel in the i-th column and j-th row of the image matrix is I(i,j), and A is a 3x3 matrix; Gx(i,j) is the gray gradient of pixel I in the x direction, Gy(i,j) is the gray gradient of pixel I in the y direction, and g(i,j) is the gray value of pixel I;
A = | g(i-1,j-1)  g(i,j-1)  g(i+1,j-1) |
    | g(i-1,j)    g(i,j)    g(i+1,j)   |
    | g(i-1,j+1)  g(i,j+1)  g(i+1,j+1) |   (1)
Gx(i,j)=(g(i+1,j-1)+g(i+1,j)+g(i+1,j+1)-g(i-1,j-1)-g(i-1,j)-g(i-1,j+1)) (2)
Gy(i,j)=(g(i-1,j+1)+g(i,j+1)+g(i+1,j+1)-g(i-1,j-1)-g(i,j-1)-g(i+1,j-1)) (3)
θ(i,j) = arctan(Gy(i,j) / Gx(i,j))   (4)
in the formula, θ(i,j) has a value range of [0°, 180°];
6) Calculating a gradient mean value and a gray level mean value in the ROI;
Gyaver = (Σ Gy(i,j)) / (r·c), summed over all pixels in the ROI   (5)
gaver = (Σ g(i,j)) / (r·c), summed over all pixels in the ROI   (6)
in the formulas, Gyaver is the mean gradient in the y direction, gaver is the mean gray level, r is the number of pixel rows in the ROI, and c is the number of pixel columns in the ROI;
7) determining the y-direction gradient judgment coefficient Gyjudge, the gray-level judgment coefficient gjudge, and the gradient angle threshold θjudge according to how the gradient and gray level vary within the ROI; wherein 90° ≥ θjudge ≥ 0°;
8) Judging the effectiveness of the pixel points, and clearing the invalid points to obtain an initial effective gradient map; wherein, the initial effective point has the following characteristics:
g(i,j) ≥ gaver · gjudge
Gy(i,j) ≥ Gyaver · Gyjudge
(180° - θjudge) ≥ θ(i,j) ≥ θjudge
9) traversing the initial effective gradient map, judging whether the number of initial effective points in each initial effective point's neighborhood is greater than or equal to the interference point threshold Ne, and removing the interference points to obtain a region growing source map;
10) scanning a region growing source diagram from bottom to top according to the distribution characteristics of road traffic lines to obtain a region growing initial seed point set;
11) according to the seed coordinates in the initial seed set, carrying out region growth in a growth source graph to obtain a region growth result graph consisting of effective points containing lane information;
12) carrying out Hough transformation on the region growing result graph to obtain a straight line set in the region growing result graph;
13) analyzing and screening the straight line set to obtain an initial target straight line meeting the requirement;
14) determining an initial scanning area according to the initial target straight line;
15) scanning the initial scanning area, and obtaining a final scanning area according to a scanning result;
16) counting the effective points in the final scanning area and calculating their percentage of the total number of pixels in the final scanning area; when the percentage is greater than or equal to the density threshold Dthres, judging the initial target straight line to be a valid straight line; when the percentage is less than Dthres, judging the initial target straight line to be invalid and determining that no stop line exists in the road at that moment;
17) the image coordinates of the valid initial target straight line are extracted, the stop line is identified at the coordinates, and the stop line position is displayed in the original image.
2. The monocular vision-based road stop line detecting method according to claim 1, wherein: in step 9), an interference point is a pixel whose surrounding neighborhood contains fewer effective points than the threshold Ne.
3. The monocular vision-based road stop line detecting method according to claim 1 or 2, wherein: step 11) comprises the following steps:
11.1) popping the seed points to obtain coordinate information of the seed points;
11.2) traversing a neighborhood with a certain size around the seed point, if valid points exist in the neighborhood, stacking all the valid points in the neighborhood, and marking the valid points;
11.3) repeating the steps 11.1) and 11.2) until the stack is empty, and obtaining a region growing result graph consisting of effective points containing lane information.
4. The monocular vision-based road stop line detecting method according to claim 1, wherein: in step 13), the straight line information in the straight line set is the coordinate information of the two end points of all straight lines in the set; the length of each line segment and its angle with the positive x direction are calculated from the coordinate information, and the initial target straight line linit is then screened according to a line-length threshold Lthres and an angle threshold θthres; linit is the longest line in the set that satisfies the angle requirement.
5. The monocular vision-based road stop line detecting method according to claim 1, wherein: in step 14), the length and width of the initial scanning area are both greater than the x-coordinate and y-coordinate ranges of the initial target straight line, and the shape of the initial scanning area is a band.
6. The monocular vision-based road stop line detecting method according to claim 1, wherein: in step 15), the initial scanning area is traversed and the effective points at its two ends are found; the length of the final scanning area is given by the difference of the x coordinates of these end effective points, and this length, together with the slope of the initial target straight line, a set pixel width, and the midpoint coordinate of the initial target straight line, determines the shape, size and position of the final scanning area.
CN201911137093.1A 2019-11-19 2019-11-19 Road stop line detection method based on monocular vision Active CN111079541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911137093.1A CN111079541B (en) 2019-11-19 2019-11-19 Road stop line detection method based on monocular vision


Publications (2)

Publication Number Publication Date
CN111079541A CN111079541A (en) 2020-04-28
CN111079541B true CN111079541B (en) 2022-03-08

Family

ID=70311069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911137093.1A Active CN111079541B (en) 2019-11-19 2019-11-19 Road stop line detection method based on monocular vision

Country Status (1)

Country Link
CN (1) CN111079541B (en)



Also Published As

Publication number Publication date
CN111079541A (en) 2020-04-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant