CN107977608B - Method for extracting road area of highway video image - Google Patents


Info

Publication number
CN107977608B
CN107977608B (application CN201711155055.XA)
Authority
CN
China
Prior art keywords
calculating
line segments
line segment
road
image
Prior art date
Legal status
Active
Application number
CN201711155055.XA
Other languages
Chinese (zh)
Other versions
CN107977608A (en)
Inventor
杨博
张荣荣
董秋石
Current Assignee
Tudou Data Technology Group Co ltd
Original Assignee
Tudou Data Technology Group Co ltd
Priority date
Filing date
Publication date
Application filed by Tudou Data Technology Group Co ltd filed Critical Tudou Data Technology Group Co ltd
Priority to CN201711155055.XA
Publication of CN107977608A
Application granted
Publication of CN107977608B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content

Abstract

The invention discloses a method for extracting the road region from highway video images, comprising the following steps. All candidate line segments in the image are found by Hough transform, and the detected segments are grouped into classes. Normalized features are then extracted for all segments and an optimal segment group is selected, containing the one or more likely road edges. The road edges of M frames of a given scene are obtained in this way, and an edge distribution histogram of the M frames yields the main and auxiliary intervals of the road distribution; the histogram values are extracted as the intervals' first feature. The lengths of all lines within the chosen interval are computed, the longest being taken as the road edge of the continuous frame images, and the road surface area is then determined. The invention is suited to automatic road detection in highway surveillance camera footage, and offers good real-time performance, wide applicability and strong resistance to interference.

Description

Method for extracting road area of highway video image
Technical Field
The invention belongs to the technical fields of artificial intelligence and image processing, and relates to a method for extracting the road region from highway video images.
Background
In recent years the field of intelligent transportation has drawn growing attention from researchers at home and abroad. On the algorithm side, vehicle identification and tracking, autonomous driving and road condition detection have become research hotspots. For the algorithms applied in these fields, extracting the road region robustly and accurately is vital: it not only reduces the noise introduced by regions of no interest, but also greatly improves the accuracy of subsequent algorithms.
Existing road extraction algorithms that rely on RGB color features or post-graying brightness features are easily affected by illumination and environment. Region-growing image segmentation depends critically on the choice of seed points (the growth starting positions), which hurts robustness. Training a classifier by machine learning to separate roads from other image regions requires large numbers of training samples and prior knowledge. As for application scenarios, some algorithms apply only to special environments such as culverts and tunnels, and are thus limited; road-region extraction methods that depend on hardware such as binocular cameras in turn increase cost.
Disclosure of Invention
The invention aims to overcome the shortcomings of traditional algorithms by providing an accurate and robust algorithm that effectively detects the road surface area of the current lane in a highway video image.
A method for extracting road regions of highway video images comprises the following steps:
step 1, preprocessing the acquired image to extract the image edge contour, determining all line segments present in the image by Hough transform, and meanwhile splitting the line segments into two windows;
step 2, calculating the slope of each windowed line segment, sorting the obtained slopes in ascending order and storing them, and clustering the segments according to the one-dimensional coordinate distribution of the slopes;
step 3, extracting the following normalized features from the clustered line segments: the intra-class distribution statistic; the inter-class member-count ratio feature; the inter-class slope population feature; the intra-class direction distribution feature; and the inter-class length distance feature; the five features are stored in vector form to generate the feature vector;
step 4, assigning weights to the extracted feature vectors and merging them, and taking the group of line segments with the largest merged value as the optimal line segment group;
and step 5, removing the line segments deviating most from the obtained optimal line segment group, then computing the road edge of the current frame image from the slope and length features of the remaining segments.
In a preferred embodiment of the present invention, the method further comprises the following steps:
step 6, adjusting the image-acquisition equipment to obtain a plurality of images; for the first M frames, computing m (m ≤ M) road edge line segments by steps 1 to 5, and computing the road edge distribution histogram of the first M frames from the distribution statistics of those segments;
step 7, calculating a main distribution interval and an auxiliary distribution interval for the road edge distribution histogram, comparing the main distribution interval and the auxiliary distribution interval, and selecting optimal distribution to obtain an accurate road edge line segment;
and 8, respectively extending the calculated road edge line segments to the image boundary and intersecting the image boundary, and defining a final road surface area.
In a preferred embodiment of the present invention, the step 1 further comprises the steps of:
step 11, graying the input image, trimming 1/10 of the image width and height, and then convolving with a Gaussian kernel of fixed size;
step 12, extracting canny edges of the convolution images, and storing all foreground gradient amplitude images;
step 13, carrying out Hough transform on the edge image to calculate line segment parts in all the images;
and step 14, for the line segment image, eliminating the line segments distributed at the central position, and further dividing the remaining line segments into a left window and a right window for processing.
In a preferred embodiment of the present invention, the step 2 further comprises the steps of:
suppose the line segment image contains N line segments;
step 21, for the N segments, record the coordinates of both endpoints of each segment and obtain each segment's slope k[i], i ∈ [1, N];
step 22, arrange the obtained slopes k(i) in ascending order into the vector skV(i), then compute the forward-difference vector diffV(i) and the clustering-threshold vector threV(i) according to:
diffV(i) = skV(i+1) - skV(i);
threV(i) = 0.5 × skV(i);
step 23, dynamically cluster segments with similar slopes according to the values of diffV(i) and threV(i) while traversing skV(i): skV[0] is the first member of class 1; examining the difference and threshold at the next position, skV[1] is clustered into the same class as skV[0] when diffV[1] < threV[1]; conversely, when diffV[1] ≥ threV[1], the first class is closed and skV[1] becomes the first member of class 2, and so on until the vector skV(i) is exhausted.
In a preferred embodiment of the present invention, the step 3 further comprises the steps of:
suppose all line segments are clustered into C classes; c(i) denotes the number of line segments in class i, i ∈ [1, C];
step 31, compute the intra-class distribution statistic: record the right-endpoint abscissa x(i)(j) of each line segment in class i, j ∈ [1, c(i)]; normalize x(i)(j), then compute the mean mean(i) and standard deviation std(i), and invert std(i) to obtain X(i):
X(i) = 1 - std(i);
step 32, compute the inter-class member-count ratio feature: record the number of line segments c(i) in each class and obtain N(i) according to:
N(i) = c(i) / Σ_{i=1..C} c(i);
step 33, compute the inter-class slope population feature: normalize the slopes of all line segments and, per the clustering result, record the c(i) slopes k(i)(j), j ∈ [1, c(i)], of each class, then compute the in-class slope mean:
K(i) = (1/c(i)) Σ_{j=1..c(i)} k(i)(j);
step 34, compute the intra-class direction distribution feature: since the segments are split into left and right window regions, each window has its own expected direction; by that direction, count the number correct(i) of slopes k(i)(j) with the correct sign in each class and compute the feature:
A(i) = correct(i) / c(i);
step 35, compute the inter-class length distance feature: record the start and end coordinates of the N line segments and compute their lengths L[i] by the block distance; after normalization and assignment to the C classes, L[i] can be rewritten as l(i)(j), where i denotes the current class and j ∈ [1, c(i)]; the feature B(i) is computed as:
B(i) = (1/c(i)) Σ_{j=1..c(i)} l(i)(j).
In a preferred embodiment of the present invention, in the step 4:
for the extracted feature vector F (a column vector), assign a weight vector α from prior knowledge according to the influence of the five features above, merge the features according to the formula T = α^T × F, and take the group of line segments with the largest T as the optimal line segment group.
In a preferred embodiment of the present invention, the step 5 further comprises the steps of:
step 51, compute the road edge segment within the optimal group: suppose class i is the optimal group, containing c(i) line segments; record the right-endpoint abscissa x(i)(j) of each segment in the class; with the mean mean(i) and standard deviation std(i) computed in step 31, all segments satisfying
|x(i)(j) - mean(i)| ≤ μ × std(i)
are taken as the suspected road edge segments of the current frame, where μ is a constant, taken as μ = 1.2;
step 52, for the suspected road edge segments, record the length l(i)(j) computed in step 35 and the slope k(i)(j) computed in step 33, assign weights (γ, δ), and compute:
t(i)(j) = l(i)(j) × γ + k(i)(j) × δ;
the segment maximizing t(i)(j) gives the road edge of the current frame;
in a preferred embodiment of the present invention, the step 6 further comprises the steps of:
step 61, compute the line equation obtained in each of the previous M frames, then use it to compute the abscissa x(i) of each segment's intersection with the lower image boundary, i ∈ [1, m], and apply maximum/minimum suppression to these coordinates;
step 62, for x(i), set the histogram abscissa index range to [1, 5], i.e. the number of histogram bins Nbin = 5, and obtain the bin span Wbin according to:
Wbin = (max(x(i)) - min(x(i))) / Nbin;
the road edge distribution histogram of the first M frames is then computed with these parameters.
In a preferred embodiment of the present invention, the step 7 further comprises the steps of:
step 71, each bin of the road edge distribution histogram is regarded as an interval; the interval with the largest value, bin(u) = max(bin(i)), i ∈ [1, Nbin], is taken as the main distribution interval, and an interval whose value reaches 50% of it as the auxiliary distribution interval bin(v);
step 72, for bin(u) and bin(v), take the normalized values binH(u) and binH(v) as the intervals' first feature, and record the slope means binK(u) and binK(v) of all lines within the two intervals as the intervals' second feature; give the two features different weights, merge the feature vectors as in step 52, and select the longest line segment in the higher-valued interval as the road edge.
In a preferred embodiment of the present invention, the step 8 further comprises the steps of:
step 81, calculating the distance between the far ends of the two line segments as l for the road edge, and moving the two edges to the center position by 1/10 × l distance respectively;
and step 82, setting boundary conditions: the algorithm takes the two road edges and four vertices on the image edges, computes the intersections of the edge segments with the four image sides and with each other, and determines the final road surface area from the intersection relations.
Drawings
Fig. 1 is a flow chart of the road video image road region extraction algorithm of the invention.
Fig. 2 is a flow chart of line segment group feature extraction.
FIG. 3 is a flow chart for determining a single frame road edge.
Fig. 4 is a flowchart of determining a continuous frame road edge distribution section.
Fig. 5 is a single frame image to be detected.
Fig. 6 is an image obtained by performing gaussian convolution on fig. 5.
Fig. 7 is an image of canny edge extraction results from fig. 6.
Fig. 8 is a result image of hough transform detection line segment of fig. 7.
Fig. 9(a) is fig. 8 with the central line segments removed.
Fig. 9(b) is the left-windowed result after removing the central segments of fig. 8.
Fig. 9(c) is the right-windowed result after removing the central segments of fig. 8.
Fig. 10(a) is a diagram of obtaining an optimal segment group after left windowing.
Fig. 10(b) is a diagram of obtaining the optimal segment group after right windowing.
Fig. 11(a) shows the current road edge segment extracted by the left windowing.
Fig. 11(b) shows the current road edge segment extracted by the right windowing.
Fig. 12(a) is a continuous frame road edge left window distribution histogram.
Fig. 12(b) is a right windowing distribution histogram of road edges of consecutive frames.
The thick solid line in fig. 13(a) is the calculated left window optimal road edge.
The thick solid line in fig. 13(b) is the calculated right windowing optimal road edge.
Fig. 14 is the road surface area determined after processing.
Detailed Description
The present invention will be described in further detail below with reference to specific embodiments and with reference to the attached drawings.
Fig. 1 is a flowchart of the road video image road region extraction algorithm of the present invention, as shown in the figure, specifically including the following steps:
step 1, preprocess the input image (fig. 5) with a filtering method to extract the image edge contour, determine all candidate line segments by Hough transform, keep the probable road-edge segments according to segment position information, and process the segments with left and right windowing.
The step 1 further comprises the following steps:
step 11, for an input image such as fig. 5: since the border of the frame captured by the surveillance camera may be distorted by the environment, 1/10 of the image width and height is trimmed after graying and the central portion retained; a Gaussian kernel of fixed size 9 is then convolved with the image for blur denoising. Fig. 6 is the denoised result of fig. 5 after border removal and graying, with the stronger road edge portions retained;
step 12, considering the influence of illumination, canny edges are extracted directly from the convolved image; fig. 7 is the binary image produced by canny edge extraction on the result of fig. 6, keeping all edges whose foreground gradient magnitude meets the requirement;
step 13, for the edge image, carrying out hough transform to reserve all straight line segment parts, and fig. 8 is a result image for detecting line segments of the edge image in fig. 7;
step 14, for the line segment image, the segments distributed at the center are removed; in this embodiment the central region is defined as:
ImageCol/2 - offset < x < ImageCol/2 + offset,
where ImageCol is the image width and offset = ImageCol/4 is the center-region offset. Fig. 9(a) is fig. 8 after removing the central segments. The remaining segments are divided into two windows, the left 2/3 of the image being the left window segment region and the right 2/3 the right window segment region; fig. 9(b) and (c) are the results after left and right windowing of the segments in fig. 8.
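Under the definitions above, the centering-and-windowing step can be sketched in Python; the (x1, y1, x2, y2) segment format and the use of the segment midpoint as its tested position are assumptions, since the text does not name the tested coordinate:

```python
# Step 14 sketch: drop segments whose midpoint lies in the central strip
# [W/2 - offset, W/2 + offset] with offset = W/4, then assign survivors to a
# left window (left 2/3 of the image) and a right window (right 2/3).

def window_segments(segments, image_col):
    offset = image_col / 4                      # center-region offset
    lo, hi = image_col / 2 - offset, image_col / 2 + offset
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        mid = (x1 + x2) / 2
        if lo < mid < hi:                       # central segment: discard
            continue
        if mid < image_col * 2 / 3:             # left 2/3 -> left window
            left.append((x1, y1, x2, y2))
        if mid > image_col / 3:                 # right 2/3 -> right window
            right.append((x1, y1, x2, y2))
    return left, right
```

Because the two windows overlap by design, a segment can in principle appear in both.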
And 2, calculating slopes of the segments subjected to windowing processing respectively, storing the segments after ascending order arrangement, and clustering the segments according to one-dimensional coordinate distribution of the slopes.
The step 2 further comprises the following steps:
the line segment images have N line segments.
Step 21, for N line segments, recording coordinates of two ends of each line segment, and further obtaining a slope of each line segment:
k[i]=(y2[i]-y1[i])/(x2[i]-x1[i]),i∈[1,N];
step 22, arrange the slopes k(i) in ascending order into the vector skV(i), then compute the forward-difference vector diffV(i) and the clustering-threshold vector threV(i), where threV(i) is 0.5 times the value at the corresponding position of skV(i):
diffV(i) = skV(i+1) - skV(i);
threV(i) = 0.5 × skV(i);
step 23, dynamically cluster segments with similar slopes according to the values of diffV(i) and threV(i) while traversing skV(i): skV[0] is the first member of class 1; examining the difference and threshold at the next position, skV[1] is clustered into the same class as skV[0] when diffV[1] < threV[1];
conversely, when diffV[1] ≥ threV[1], the first class is closed and skV[1] becomes the first member of class 2, and so on until the vector skV(i) is exhausted.
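The slope sorting and dynamic clustering of steps 21-23 amount to one-dimensional threshold clustering; a minimal sketch, where taking abs() of the slope in the threshold (to keep it positive for negative slopes) is an assumption:

```python
# Steps 21-23 sketch: sort slopes ascending, then start a new class whenever
# the forward difference reaches the threshold 0.5 * |skV[i]|.

def cluster_slopes(slopes):
    skv = sorted(slopes)
    classes, current = [], [skv[0]]
    for i in range(1, len(skv)):
        diff = skv[i] - skv[i - 1]      # forward difference diffV
        thre = 0.5 * abs(skv[i])        # clustering threshold threV
        if diff < thre:
            current.append(skv[i])      # same class as predecessor
        else:
            classes.append(current)     # close class, open a new one
            current = [skv[i]]
    classes.append(current)
    return classes
```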
Step 3, extract the following normalized features from the clustered line segments: the intra-class distribution statistic; the inter-class member-count ratio feature; the inter-class slope population feature; the intra-class direction distribution feature; and the inter-class length distance feature.
Fig. 2 is a flow chart of line segment group feature extraction. The method specifically comprises the following steps:
let all segments co-polymerize to C class, C(i)Representing the number of segments in each class, i ∈ [1, C]。
Step 31, compute the intra-class distribution statistic: record the right-endpoint abscissa x(i)(j) of each line segment in class i, j ∈ [1, c(i)]; normalize x(i)(j), then compute the mean and standard deviation according to:
mean(i) = (1/c(i)) Σ_{j=1..c(i)} x(i)(j);
std(i) = sqrt( (1/c(i)) Σ_{j=1..c(i)} (x(i)(j) - mean(i))² );
invert std(i) to obtain X(i):
X(i) = 1 - std(i);
X(i) describes how tightly the line segments of class i are distributed and serves as the first dimension of the feature vector;
step 32, compute the inter-class member-count ratio feature: record the number of line segments c(i) in each class and obtain N(i) according to:
N(i) = c(i) / Σ_{i=1..C} c(i);
N(i), the second dimension of the feature vector, ensures that a cluster with more members obtains a larger feature value; it describes the member count of each class;
step 33, compute the inter-class slope population feature: normalize the slopes of all line segments and, per the clustering result, record the c(i) slopes k(i)(j), j ∈ [1, c(i)], of each class, then compute the in-class slope mean:
K(i) = (1/c(i)) Σ_{j=1..c(i)} k(i)(j);
K(i) is saved as the third feature dimension; the closer K(i) is to 1, the more representative of the road edge the class is considered;
step 34, compute the intra-class direction distribution feature: since the segments are split into left and right window regions, each window has its own expected direction; by that direction, count the number correct(i) of slopes k(i)(j) with the correct sign in each class and compute the feature:
A(i) = correct(i) / c(i);
A(i) describes the number of correctly oriented segments in the class and is saved as the fourth feature dimension;
step 35, compute the inter-class length distance feature. On the principle that the longer a line segment, the more likely it is to lie near the road edge, record the start coordinates (sX[i], sY[i]) and end coordinates (eX[i], eY[i]) of the N line segments and compute the block-distance length:
L[i] = |eY[i] - sY[i]| + |eX[i] - sX[i]|, i ∈ [1, N];
normalize L[i] and assign it to the C classes by the clustering result, so that L[i] can be rewritten as l(i)(j), where i denotes the current class and j ∈ [1, c(i)]; the feature B(i) is computed as:
B(i) = (1/c(i)) Σ_{j=1..c(i)} l(i)(j);
B(i) describes the segment lengths across the classes and is saved as the fifth feature dimension.
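The five per-class features of steps 31-35 can be sketched in Python for one class; since the patent's formula images are not reproduced here, the within-class max normalization and the mean forms of K(i) and B(i) are assumptions, and the function and argument names are illustrative:

```python
# Toy per-class feature computation (steps 31-35), one class at a time.
# cls_xs: right-endpoint abscissas; cls_slopes: slopes; cls_lens: block-distance
# lengths; n_total: total segment count N; expected_sign: +1 or -1 per window.

def class_features(cls_xs, cls_slopes, cls_lens, n_total, expected_sign):
    c = len(cls_xs)
    # step 31: spread of normalized right endpoints, inverted (X = 1 - std)
    mx = max(cls_xs) or 1
    xs = [x / mx for x in cls_xs]
    mean = sum(xs) / c
    std = (sum((x - mean) ** 2 for x in xs) / c) ** 0.5
    X = 1 - std
    # step 32: member-count ratio N(i) = c(i) / N
    N = c / n_total
    # step 33: mean of normalized absolute slopes
    mk = max(abs(k) for k in cls_slopes) or 1
    K = sum(abs(k) / mk for k in cls_slopes) / c
    # step 34: fraction of slopes with the window's expected sign
    correct = sum(1 for k in cls_slopes if k * expected_sign > 0)
    A = correct / c
    # step 35: mean of normalized lengths
    ml = max(cls_lens) or 1
    B = sum(l / ml for l in cls_lens) / c
    return [X, N, K, A, B]
```

Each returned value lies in [0, 1], so the later weighted merge compares classes on a common scale.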
And 4, giving weight values to the extracted feature vectors, and combining the feature vectors. And taking a certain group of line segments with the maximum median of the merged vectors as an optimal line segment group.
In the embodiment, for the extracted feature vector F (a column vector), the weight vector α = (α_i)^T with Σα_i = 1, i ∈ [1, 5], is assigned from prior knowledge; here α = (0.2, 0.3, 0.1, 0.1, 0.3)^T, and the features are merged according to the formula:
T = α^T × F.
The group of line segments for which T takes its maximum is the optimal line segment group; fig. 10(a) and 10(b) show the optimal groups obtained after left and right windowing, respectively.
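The weighted merge of step 4 reduces to a dot product per class followed by an argmax; a minimal sketch using the embodiment's α = (0.2, 0.3, 0.1, 0.1, 0.3):

```python
# Step 4 sketch: T = alpha^T x F for each class's feature vector F;
# the class with the largest T is the optimal line segment group.

ALPHA = (0.2, 0.3, 0.1, 0.1, 0.3)  # embodiment's weight vector

def best_class(feature_vectors):
    scores = [sum(a * f for a, f in zip(ALPHA, F)) for F in feature_vectors]
    return scores.index(max(scores))
```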
And 5, removing the line segment with larger position deviation from the acquired optimal line segment group, and then calculating the road edge of the current frame image according to the slope and length characteristics in the rest line segments.
FIG. 3 is a flow chart for determining a single frame road edge. The method specifically comprises the following steps:
step 51, compute the road edge segment within the optimal group: suppose class i is the optimal group, containing c(i) line segments; record the right-endpoint abscissa x(i)(j) of each segment in the class; with the mean mean(i) and standard deviation std(i) computed in step 31, all segments satisfying
|x(i)(j) - mean(i)| ≤ μ × std(i)
are taken as the suspected road edge segments of the current frame, where μ is a constant; μ = 1.2 in the embodiment;
step 52, for the suspected road edge segments, record the length l(i)(j) computed in step 35 and the slope k(i)(j) computed in step 33, assign weights (γ, δ), and compute:
t(i)(j) = l(i)(j) × γ + k(i)(j) × δ;
the segment maximizing t(i)(j) gives the road edge of the current frame. In the embodiment γ = 0.5; fig. 11 shows the extracted current-frame road edge segments.
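Steps 51-52 can be sketched together as outlier rejection by μ·std around the class mean followed by a weighted length/slope score; the (x_right, length, slope) tuple format and δ = 0.5 are assumptions not fixed by the text:

```python
# Steps 51-52 sketch: filter the optimal class by |x - mean| <= mu*std,
# then return the segment maximizing t = gamma*length + delta*slope.
# Inputs are assumed already normalized, as in the feature-extraction steps.

def pick_edge(segments, mu=1.2, gamma=0.5, delta=0.5):
    xs = [s[0] for s in segments]
    mean = sum(xs) / len(xs)
    std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    kept = [s for s in segments if abs(s[0] - mean) <= mu * std] or segments
    return max(kept, key=lambda s: gamma * s[1] + delta * s[2])
```

The `or segments` fallback (keep everything if the filter empties the class) is a defensive assumption.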
Step 6: since road-monitoring video is captured in the field, some frames fail to yield accurate edges owing to external interference. For a road video image, the road edge therefore has to be determined from the statistics of the first M frames when the camera starts or after the pan-tilt rotates: m (m ≤ M) road edge segments as in fig. 11 are computed by steps 1 to 5, and a road edge distribution histogram is computed from their distribution statistics. In the embodiment, M = 50.
The step 6 further comprises the following steps:
step 61, write the line equation obtained in each of the previous M frames as:
Ax + By + C = 0;
then compute from it the abscissa x(i) of each segment's intersection with the lower image boundary, i ∈ [1, m]; since the histogram computation is valid only if x(i) lies within a reasonable range, maximum/minimum suppression is applied to these values;
step 62, for x(i), set the histogram abscissa index range to [1, 5], i.e. the number of histogram bins Nbin = 5, and obtain the bin span Wbin according to:
Wbin = (max(x(i)) - min(x(i))) / Nbin;
the road edge distribution histogram of the previous M frames is then computed with these parameters. Fig. 12(a) and 12(b) are the left- and right-window edge distribution histograms of the M frame images.
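The histogram construction of steps 61-62 can be sketched as follows; clamping x = max(X) into the last bin and guarding against a zero span are assumptions the text does not spell out:

```python
# Steps 61-62 sketch: bin the per-frame x-intercepts into Nbin equal-width
# bins of span Wbin = (max(X) - min(X)) / Nbin.

def edge_histogram(xs, nbin=5):
    lo, hi = min(xs), max(xs)
    wbin = (hi - lo) / nbin
    if wbin == 0:               # all intercepts identical: single-bin guard
        wbin = 1
    bins = [0] * nbin
    for x in xs:
        idx = min(int((x - lo) / wbin), nbin - 1)  # clamp x == max into last bin
        bins[idx] += 1
    return bins, wbin
```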
And 7, calculating a main distribution interval and an auxiliary distribution interval for the road edge distribution histogram, comparing the main distribution interval and the auxiliary distribution interval, and selecting the optimal distribution to further obtain an accurate road edge line segment.
Fig. 4 is a flowchart of determining a continuous frame road edge distribution section. The method comprises the following specific steps:
step 71, each bin of the road edge distribution histogram is regarded as an interval; the interval with the largest value, bin(u) = max(bin(i)), i ∈ [1, Nbin], is taken as the main distribution interval, and any interval whose value reaches 50% of it, i.e. bin(v) ≥ 0.5 × bin(u), as an auxiliary distribution interval bin(v). In the embodiment, fig. 12(a) yields main interval bin(1) and auxiliary interval bin(2), and fig. 12(b) yields main interval bin(1) and auxiliary interval bin(5);
step 72, for bin(u) and bin(v), take the normalized values binH(u) and binH(v) as the intervals' first feature, and record the slope means binK(u) and binK(v) of all lines within the two intervals as the intervals' second feature; give the two features different weights (μ = 0.4 in the embodiment), merge the feature vectors as in step 52, and select the longest line segment in the higher-valued interval as the road edge. The thick solid lines in fig. 13(a) and 13(b) are the computed optimal road edges.
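The main/auxiliary interval selection of step 71 is a simple scan over the histogram bins; a minimal sketch (0-based bin indices, unlike the text's 1-based bin(1)..bin(5)):

```python
# Step 71 sketch: the tallest bin is the main interval u; every other bin
# reaching 50% of its height is an auxiliary interval v.

def distribution_intervals(bins):
    u = bins.index(max(bins))
    aux = [v for v, h in enumerate(bins) if v != u and h >= 0.5 * bins[u]]
    return u, aux
```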
And 8, respectively extending the calculated road edge line segments to the image boundary and intersecting the image boundary, and defining a final road surface area.
Fig. 14 shows the road surface area of the current road detected in the scene shown in fig. 5, and the specific steps are as follows:
step 81, for the road edges, the distance l between the far ends of the two segments is computed, and each edge is moved toward the center by 1/10 × l;
step 82, set the boundary conditions. The algorithm takes the two road edges and the four vertices of the image border. In the embodiment, targets in the top 1/3 of the image are blurred by the road conditions, and vehicles and other objects appearing in that area cannot meet the detection requirement, so the following borders are used: the lower, left and right borders remain unchanged, while the upper border is taken at 1/3 of the original image height. Compute the intersections of the edge segments with these borders, then the intersections between the segments, and determine the final road surface area from the intersection relations.
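A minimal sketch of step 82 under the simplifying assumption that each edge is clipped only against the upper (height/3) and lower image rows; the segment-to-segment intersection handling of the full algorithm is omitted, and all names are illustrative:

```python
def extend_to_rows(p1, p2, y_top, y_bottom):
    """Extend the line through p1 and p2 to the horizontal image rows
    y_top and y_bottom (road edges are never horizontal, so y1 != y2)."""
    (x1, y1), (x2, y2) = p1, p2
    m = (x2 - x1) / (y2 - y1)          # change of x per unit of y
    return ((x1 + m * (y_top - y1), y_top),
            (x1 + m * (y_bottom - y1), y_bottom))

def road_polygon(left_edge, right_edge, height):
    """Clip both road edges against the upper row at height/3 and the
    bottom row, as step 82 prescribes, returning [tl, tr, br, bl]."""
    y_top, y_bottom = height / 3.0, float(height)
    lt, lb = extend_to_rows(*left_edge, y_top, y_bottom)
    rt, rb = extend_to_rows(*right_edge, y_top, y_bottom)
    return [lt, rt, rb, lb]
```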

Claims (9)

1. A method for extracting a road region from highway video images, characterized by comprising the following steps:
step 1, preprocessing the acquired image to extract its edge contours, determining all line segments present in the image by Hough transform, and dividing the line segments into two windows;
step 2, calculating the slope of each windowed line segment, sorting the obtained slopes in ascending order and storing them, and clustering the line segments according to the one-dimensional distribution of the slopes;
step 3, extracting normalized features from all clustered line segments: an intra-class distribution statistical feature; an inter-class count-ratio feature; an inter-class slope overall feature; an intra-class direction distribution feature; an inter-class length-distance feature; storing the five features in vector form to generate a feature vector;
step 4, assigning weights to the extracted feature vectors, combining them, and taking the group of line segments with the largest value in the combined vector as the optimal line segment group;
step 5, removing line segments that deviate strongly from the obtained optimal line segment group, then computing the road edge of the current frame image from the slope and length features of the remaining line segments;
step 6, adjusting the image-acquisition equipment to obtain a plurality of images; for the first M frames, computing m road edge line segments according to steps 1 to 5, where m ≤ M, and computing the road edge distribution histogram of the first m frames from the distribution statistics of the line segments;
step 7, calculating a main distribution interval and an auxiliary distribution interval for the road edge distribution histogram, comparing the main distribution interval and the auxiliary distribution interval, and selecting optimal distribution to obtain an accurate road edge line segment;
step 8, extending each computed road edge line segment to the image boundary, intersecting it with the boundary, and defining the final road surface area.
2. The method of claim 1, wherein step 1 further comprises the steps of:
step 11, graying the input image, removing 1/10 of the image width and height, then convolving with a fixed-size Gaussian kernel;
step 12, extracting Canny edges of the convolved image and storing the gradient magnitude map of all foreground;
step 13, carrying out Hough transform on the edge image to calculate line segment parts in all the images;
and step 14, for the line segment image, eliminating segments distributed at the central position, then dividing the remaining segments into left and right windows for processing.
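Step 14 can be sketched as follows; the width of the excluded central band is not specified in the patent, so `center_frac` is an assumed parameter, as are the function and variable names:

```python
def window_segments(segments, width, center_frac=0.2):
    """Split Hough segments into left/right windows, discarding those whose
    midpoint lies in a central band of the image (step 14 of claim 2).

    segments:    iterable of ((x1, y1), (x2, y2)) endpoint pairs
    width:       image width in pixels
    center_frac: assumed fraction of the width forming the excluded band
    """
    half_band = center_frac * width / 2.0
    left, right = [], []
    for (x1, y1), (x2, y2) in segments:
        mx = (x1 + x2) / 2.0
        if abs(mx - width / 2.0) <= half_band:
            continue  # segment sits in the central strip: discard it
        (left if mx < width / 2.0 else right).append(((x1, y1), (x2, y2)))
    return left, right
```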
3. The method of claim 1, wherein the step 2 further comprises the steps of:
the line segment images have N line segments;
step 21, for the N line segments, recording the coordinates of both ends of each segment, then obtaining the slope k(i) of each segment, i ∈ [1, N];
step 22, arranging the slopes k(i) in ascending order into a vector skV(i), computing the forward-difference vector diffV(i) and the clustering-threshold vector threV(i) according to the following formulas:
Figure FDA0003149560540000021
Figure FDA0003149560540000031
step 23, dynamically clustering line segments of similar slope according to the values of the diffV(i) and threV(i) vectors: traverse skV(i), with skV[0] as the first member of the 1st class; then examine the difference and threshold vectors at the following position skV[1]; when diffV[1] < threV[1], cluster skV[1] into the same class as skV[0]; conversely, when diffV[1] ≥ threV[1], close the first class, set skV[1] as the first member of the 2nd class, and so on until the vector skV(i) is exhausted.
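A runnable sketch of the dynamic clustering of steps 21–23. The threshold vector threV appears only as an image placeholder in the source, so a threshold proportional to the local slope magnitude is assumed here; the parameter name `rel_thresh` is illustrative:

```python
def cluster_by_slope(slopes, rel_thresh=0.2):
    """One-dimensional dynamic clustering of segment slopes (steps 21-23).

    Sort the slopes (skV), walk the forward differences (diffV), and start
    a new class whenever the difference reaches the threshold (threV,
    assumed here to be rel_thresh * |current slope|).
    Returns a list of clusters, each a list of slopes.
    """
    skv = sorted(slopes)
    clusters = [[skv[0]]]
    for prev, cur in zip(skv, skv[1:]):
        diff = cur - prev                        # diffV[i]
        thre = rel_thresh * max(abs(cur), 1e-6)  # threV[i] (assumed form)
        if diff < thre:
            clusters[-1].append(cur)   # same class as its predecessor
        else:
            clusters.append([cur])     # close the class, open a new one
    return clusters
```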
4. The method of claim 1, wherein the step 3 further comprises the steps of:
let all line segments be clustered into C classes, with c(i) denoting the number of segments in each class, i ∈ [1, C];
Step 31, calculating the intra-class distribution statistical feature: record the right-endpoint abscissa of each line segment in the i-th class
Figure FDA0003149560540000032
normalize it
Figure FDA0003149560540000033
then compute the mean mean(i) and standard deviation std(i), and invert std(i) to obtain X(i):
X(i) = 1 − std(i)
Step 32, calculating the inter-class count-ratio feature: record the number of line segments c(i) in each class and compute N(i) from c(i) according to the following formula:
Figure FDA0003149560540000041
Step 33, calculating the inter-class slope overall feature: normalize the slopes of all line segments and, according to the clustering result, record the slopes k(i)(j), j ∈ [1, c(i)], of the c(i) line segments in each class, then compute the intra-class mean slope:
Figure FDA0003149560540000042
Step 34, calculating the intra-class direction distribution feature: the line segments are divided into left and right window areas, each sub-window having its own predetermined direction; judge and record, according to the predetermined direction, the number correct(i) of the c(i) slopes k(i)(j) in each class with the correct sign, and compute the feature A(i):
Figure FDA0003149560540000043
Step 35, calculating the inter-class length-distance feature: record the start and end coordinates of the N line segments and compute their lengths L[i] by the block distance; after normalization and classification into the C classes, L[i] can be re-expressed as l(i)(j), where i denotes the class and j ∈ [1, c(i)]; the feature B(i) is computed as follows:
Figure FDA0003149560540000051
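A minimal Python sketch of two of the claim-4 features. The formulas behind the image placeholders are not recoverable, so X = 1 − std follows the stated relation, while the count-ratio form N(i) = c(i)/Σc is an assumption, as are the function names:

```python
from statistics import pstdev

def intra_class_feature(right_xs, width):
    """Step-31 intra-class distribution feature: normalize the right-endpoint
    abscissas by the image width, then invert the spread, X = 1 - std, so a
    tightly clustered class scores high."""
    norm = [x / width for x in right_xs]
    return 1.0 - pstdev(norm)

def count_ratio_feature(counts, i):
    """Step-32 inter-class count ratio (assumed form: share of class i
    among all clustered segments)."""
    return counts[i] / sum(counts)
```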
5. The method according to claim 1, wherein in step 4:
for the extracted feature column vector F, assign a weight vector α from prior knowledge according to the influence of the above five features, combine the features according to the formula T = α^T × F, and take the group of line segments with the largest value in the combined vector as the optimal line segment group.
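The weighted combination T = α^T × F can be sketched as follows; the weight values in the test are illustrative, not the patent's:

```python
def combine_features(F, alpha):
    """Combine per-class feature columns with prior weights (T = alpha^T x F)
    and return the index of the class with the largest combined score.

    F:     list of per-class feature columns (each a list of five values)
    alpha: weight vector over the five features
    """
    scores = [sum(a * f for a, f in zip(alpha, col)) for col in F]
    return scores.index(max(scores))
```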
6. The method of claim 4, wherein the step 5 further comprises the steps of:
step 51, computing the road edge line segment from the optimal line segment group: assume the i-th class is the optimal group, containing line segments
Figure FDA0003149560540000052
record the right-endpoint abscissa of each line segment in the class
Figure FDA0003149560540000053
which, together with the mean mean(i) and standard deviation std(i) computed in step 31, satisfies the following formula:
Figure FDA0003149560540000054
take all such line segments as the suspected road edge segments of the current frame, where μ is a constant, taken as μ = 1.2;
step 52, for the suspected road edge segments, record the slope k(i)(j) computed in step 33 and the length l(i)(j) computed in step 35, assign a weight pair (γ, δ), and compute t(i)(j) according to the following equation:
t(i)(j) = l(i)(j) × γ + k(i)(j) × δ;
the line segment for which t(i)(j) attains its maximum gives the road edge of the current frame.
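Step 52 reduces to a weighted argmax over (length, slope) pairs; a sketch, with (γ, δ) as illustrative defaults since the patent leaves them as a given pair:

```python
def pick_edge(scored_segments, gamma=0.7, delta=0.3):
    """Score each suspected edge segment by t = l*gamma + k*delta (step 52)
    and return the index of the maximum.

    scored_segments: list of (length, slope) pairs, both normalized
    gamma, delta:    illustrative weights for length and slope
    """
    t = [l * gamma + k * delta for l, k in scored_segments]
    return t.index(max(t))
```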
7. The method of claim 2, wherein the step 6 further comprises the steps of:
step 61, computing the line equations obtained from the previous M frames, then using them to compute the abscissas X(i), i ∈ [1, M], of the intersections of the M line segments with the lower image boundary, and suppressing the maximum and minimum values of these coordinates;
step 62, for X(i), setting the histogram abscissa range [1, 5], i.e. the number of histogram bins Nbin = 5, then obtaining the bin span Wbin, computed according to the following formula:
Figure FDA0003149560540000061
and calculating the road edge distribution histogram of the previous M frames according to the parameters.
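Steps 61–62 amount to binning the lower-boundary intersection abscissas; a sketch, assuming the bin span Wbin = (max − min) / Nbin since the source formula is an image placeholder:

```python
def edge_histogram(xs, nbin=5):
    """Bin the lower-boundary intersection abscissas of the per-frame edge
    lines into nbin bins (steps 61-62, with Nbin = 5 in the claim).

    xs: intersection abscissas after min/max suppression
    """
    lo, hi = min(xs), max(xs)
    wbin = (hi - lo) / nbin                 # assumed form of Wbin
    hist = [0] * nbin
    for x in xs:
        # clamp the right-most value into the last bin
        idx = min(int((x - lo) / wbin), nbin - 1) if wbin else 0
        hist[idx] += 1
    return hist
```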
8. The method of claim 6, wherein the step 7 further comprises the steps of:
step 71, treating each bin of the road edge distribution histogram as an interval, taking the interval with the largest value, bin(u) = max(bin(i)), i ∈ [1, Nbin], as the main distribution interval, and the intervals whose value reaches 50% of the main interval as auxiliary distribution intervals bin(v);
step 72, for bin(u) and bin(v), normalizing the values binH(u) and binH(v) as the first-dimension feature of each interval, further recording the mean slopes binK(u) and binK(v) of all straight lines in the two intervals as the second-dimension feature, and assigning different weights to them
Figure FDA0003149560540000071
combining the feature vectors according to step 52, then selecting the longest line segment in the higher-scoring interval as the road edge.
9. The method of claim 2, wherein the step 8 further comprises the steps of:
step 81, for the road edges, computing the distance l between the far ends of the two line segments and moving each edge toward the center by a distance of 1/10 × l;
and step 82, setting boundary conditions: taking the two road edges and the four vertices of the image border, computing the intersections of the segments with the four image sides, then the intersections between the segments, and determining the final road surface area from the intersection relations.
CN201711155055.XA 2017-11-20 2017-11-20 Method for extracting road area of highway video image Active CN107977608B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711155055.XA CN107977608B (en) 2017-11-20 2017-11-20 Method for extracting road area of highway video image


Publications (2)

Publication Number Publication Date
CN107977608A CN107977608A (en) 2018-05-01
CN107977608B true CN107977608B (en) 2021-09-03

Family

ID=62010340


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104908A (en) * 2019-12-20 2020-05-05 北京三快在线科技有限公司 Road edge determination method and device
CN115546747B (en) * 2022-08-29 2023-09-19 珠海视熙科技有限公司 Road edge detection method and device, image pickup equipment and storage medium
CN116580032B (en) * 2023-07-14 2023-09-26 青岛西海岸城市建设集团有限公司 Quality monitoring method for road construction

Citations (8)

Publication number Priority date Publication date Assignee Title
US7260813B2 (en) * 2004-10-25 2007-08-21 Synopsys, Inc. Method and apparatus for photomask image registration
CN102629326A (en) * 2012-03-19 2012-08-08 天津工业大学 Lane line detection method based on monocular vision
CN103383733A (en) * 2013-05-16 2013-11-06 浙江智尔信息技术有限公司 Lane video detection method based on half-machine study
CN103489189A (en) * 2013-09-24 2014-01-01 浙江工商大学 Lane detecting and partitioning method based on traffic intersection videos
CN104657710A (en) * 2015-02-06 2015-05-27 哈尔滨工业大学深圳研究生院 Method for carrying out road detection by utilizing vehicle-borne single-frame image
CN105389561A (en) * 2015-11-13 2016-03-09 深圳华中科技大学研究院 Method for detecting bus lane based on video
CN105893949A (en) * 2016-03-29 2016-08-24 西南交通大学 Lane line detection method under complex road condition scene
CN105912977A (en) * 2016-03-31 2016-08-31 电子科技大学 Lane line detection method based on point clustering

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP4988786B2 (en) * 2009-04-09 2012-08-01 株式会社日本自動車部品総合研究所 Boundary line recognition device


Non-Patent Citations (2)

Title
Real-time line detection through an improved Hough transform voting scheme; Leandro A.F. Fernandes et al.; Pattern Recognition; Elsevier; 2008-01-31; Vol. 41 (No. 1); 299-314 *
Fast detection algorithm of highway lane lines based on improved Hough transform; Zhao Ying et al.; Journal of China Agricultural University; 2006-06-30 (No. 3); 104-108 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 504, Block E, HUanpu science and Technology Industrial Park, 211 tianguba Road, high tech Zone, Xi'an City, Shaanxi Province, 710000

Applicant after: Tudou Data Technology Group Co.,Ltd.

Address before: Room 504, Block E, HUanpu science and Technology Industrial Park, 211 tianguba Road, high tech Zone, Xi'an City, Shaanxi Province, 710075

Applicant before: SHAANXI TUDOU DATA TECHNOLOGY Co.,Ltd.

GR01 Patent grant