CN110674732A - Asphalt pavement rut three-dimensional size automatic detection and positioning method integrating multivariate data - Google Patents


Info

Publication number
CN110674732A
CN110674732A (application CN201910897749.3A)
Authority
CN
China
Prior art keywords
lane
rut
line
points
edge
Prior art date
Legal status
Granted
Application number
CN201910897749.3A
Other languages
Chinese (zh)
Other versions
CN110674732B (en)
Inventor
罗文婷
李林
Current Assignee
Fujian Ruidao Engineering Technology Consulting Co Ltd
Fujian Agriculture and Forestry University
Original Assignee
Fujian Ruidao Engineering Technology Consulting Co Ltd
Fujian Agriculture and Forestry University
Priority date
Filing date
Publication date
Application filed by Fujian Ruidao Engineering Technology Consulting Co Ltd, Fujian Agriculture and Forestry University filed Critical Fujian Ruidao Engineering Technology Consulting Co Ltd

Classifications

    • G06V20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G01B11/02: Measuring arrangements characterised by the use of optical techniques, for measuring length, width or thickness
    • G01B11/14: Optical measurement of distance or clearance between spaced objects or spaced apertures
    • G01B11/22: Optical measurement of depth
    • G01B11/26: Optical measurement of angles or tapers; testing the alignment of axes
    • G01B11/28: Optical measurement of areas
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T7/13: Edge detection
    • G06T7/155: Segmentation; edge detection involving morphological operators
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T2207/20024: Filtering details
    • G06T2207/30256: Lane; road marking


Abstract

The invention relates to a method, fusing multivariate data, for automatically detecting and locating the three-dimensional dimensions of ruts on an asphalt pavement. For automatic detection of the rut's three-dimensional size, the cross-section line data are first preprocessed by noise elimination and gradient correction; a method is then proposed for locating the rut feature points by establishing an undeformed pavement axis; finally, the rut width, maximum depth, rut-wall slope and depression area are measured from the determined rut valley-bottom and edge points. For automatic rut positioning, lane edge lines are first recognized in a two-dimensional road image and the lane center line is located from them; the positions of the left and right rut valley-bottom points within the lane are then measured with the lane center line as the reference; finally, the offset of the centers of the left and right ruts relative to the lane center is measured. The invention provides richer information for road rut detection.

Description

Asphalt pavement rut three-dimensional size automatic detection and positioning method integrating multivariate data
Technical Field
The invention relates to the field of automatic detection and positioning of rut distress on asphalt pavements, and in particular to a method, fusing multivariate data, for automatically detecting and locating the three-dimensional dimensions of asphalt pavement ruts.
Background
Automatic rut measurement has been widely adopted at home and abroad; road-surface deformation information is typically captured by laser displacement sensors mounted at the front or rear of a survey vehicle, from which rut characterization parameters are extracted. The parameter-calculation methods differ between data acquisition devices (point laser displacement sensors versus line-scan laser displacement sensors).
Point-laser rut measurement obtains the vertical height of the pavement at a few fixed positions through several point laser displacement sensors, and describes pavement deformation from these samples. A limited number of point lasers, however, cannot capture the deformation of the whole pavement; generally only a rut depth can be measured, and it deviates somewhat from the true maximum rut depth. Point-laser characterization methods include the virtual rut method, the simulated straightedge method and the envelope method. The virtual rut method defines rut depth from the average vertical height of the rut area, but lateral wander of the survey vehicle relative to the rut introduces error. The simulated straightedge method searches for rut end points with a virtual straightedge of fixed length and measures the depth beneath it; a fixed-length straightedge cannot cover the end points of all ruts, which also causes error. The envelope method searches for rut end points with a traversal algorithm, but deformation near the lane center line affects the accuracy of the depth measurement.
Line-scan lasers have a much higher point-cloud sampling frequency and yield nearly continuous pavement cross-section profiles. The high-density acquisition, however, also captures pavement micro-texture, which disturbs rut measurement to some extent; the influence of surface texture and cracks can be removed with a filtering algorithm. Line-scan lasers can extract richer rut characterization information than point lasers, yet most research focuses on rut morphology and little on extracting the rut track. This is mainly because, although a vehicle-mounted laser displacement sensor can describe the deformed shape of a rut, lateral wander of the survey vehicle prevents the rut track from being located accurately. The invention therefore combines synchronously collected laser cross-section data with pavement image data, extracting rut morphological features and locating lane edge-line coordinates to obtain rut track information.
Disclosure of Invention
In view of this, the present invention provides a method, fusing multivariate data, for automatically detecting and locating the three-dimensional dimensions of asphalt pavement ruts, which provides richer information for pavement rut detection and thereby improves the reliability of pavement operation.
The invention is realized by adopting the following scheme: a method for automatically detecting and positioning three-dimensional dimensions of a track on an asphalt pavement by fusing multivariate data comprises the following steps:
step S1: preprocessing the acquired original road surface transverse cutting line data;
step S2: carrying out gradient correction on the preprocessed cross sectional line data;
step S3: automatically extracting edge points and valley bottom points of left and right ruts of the road surface by adopting a traversal algorithm;
step S4: measuring rut dimensions, comprising: width, maximum depth, inner wall slope, depressed area, and left and right rut spacing;
step S5: recognizing lane edge lines through a two-dimensional road image to determine the position of a lane center line;
step S6: and positioning the left and right tracks according to the positions of the lane edge lines and the lane center points, and measuring the offset of the left and right tracks relative to the lane center line.
Further, the preprocessing of the original road surface cross-sectional line data in step S1 specifically includes the following steps:
step S11: extracting a splicing seam area section of the data center part of the original pavement transverse section line through a formula (1), and calculating the height difference between adjacent points of the splicing seam area section through a formula (2);
[Formula (1), which extracts the splicing-seam region S, appears as an image in the original and is not reproduced here.]
M = y_s - y_{s-1}  (2)
in the formula, S represents the splicing-seam region at the center of the cross-section line data; f(x_s, y_s) represents the road surface cross-section line data; n represents the total number of pixel points in a single road surface cross-section line; M represents the height difference between adjacent points in the splicing-seam region, in mm;
step S12: searching and positioning jumping points of the splicing seam area through a formula (3); determining the transverse section line displacement for eliminating the splicing seam through a formula (4);
[Formula (3), which locates the jump points J_k, appears as an image in the original and is not reproduced here.]
m_k = y_j - y_{j-1}  (4)
in the formula, J_k(x_j, y_j) represents a jump point of the splicing-seam region; l indexes the cross-section data collected by the left camera, l = 1, 2, …, n/2; r indexes the cross-section data collected by the right camera, r = (n/2)+1, (n/2)+2, …, n; m_k represents the cross-section line displacement that eliminates the splicing seam, in mm; k represents the number of jump points in the splicing-seam region;
step S13: shift all points after the jump point up or down by the displacement m_k, thereby eliminating the splicing seam;
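The seam-correction logic of steps S11 to S13 can be sketched as follows. This is an illustrative reconstruction, not the patent's code; the 2 mm jump threshold and the 10-point search band around the midpoint are assumed values.

```python
def remove_stitch_seam(heights, jump_threshold=2.0):
    """heights: cross-section profile heights (mm), left/right camera halves
    joined at the midpoint. Returns a copy with the seam step removed."""
    n = len(heights)
    mid = n // 2
    corrected = list(heights)
    # Steps S11/S12: search a narrow band around the midpoint for an abnormal
    # height jump between adjacent points (formula (2): M = y_s - y_{s-1}).
    for s in range(mid - 5, mid + 6):
        m = corrected[s] - corrected[s - 1]
        if abs(m) > jump_threshold:
            # Step S13: shift every point from the jump onward by -m so the
            # two halves line up (formula (4): m_k = y_j - y_{j-1}).
            for j in range(s, n):
                corrected[j] -= m
            break
    return corrected

profile = [0.0] * 50 + [3.0] * 50   # synthetic seam: right half 3 mm too high
fixed = remove_stitch_seam(profile)
```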
step S14: model the profile by equation (5) and calculate the distance between x_i and x_j using equation (6);
y_i = βx_i + ε_i  (5)
d_ij = |x_i - x_j|  (6)
wherein β represents a parameter of the cross-section line x; ε represents independently distributed random error; d_ij denotes the distance between x_i and x_j, where i = 1, 2, …, n and j = 1, 2, …, n; x_i and x_j are the abscissas of two points on the cross-section line;
step S15: determining the size of a moving window for cross-section line smoothing processing through a formula (7), calculating weight values of all points in the moving window by using a formula (8), wherein the weights meet the requirements in a formula (9), and determining filtering parameters through formulas (10) and (11);
[Formulas (7) to (11), which define the moving-window width, the weight function w(x) and the filter parameters, appear as images in the original and are not reproduced here.]
wherein r represents the width of the moving window; h_i denotes the r-th value of d_ij; w(x) represents the weight coefficient; k represents the number of cross-section line points within the moving window; the remaining symbol (shown as an image in the original) represents the filter parameters.
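Steps S14 and S15 describe a locally weighted moving-window filter. Since formulas (7) to (11) are only partially legible, the sketch below assumes the classic tricube kernel used by LOWESS-type smoothers; the window half-width r is an illustrative choice.

```python
def smooth_profile(xs, ys, r=5):
    """Weighted moving-window filter: for each point, average its neighbours
    inside a window of half-width r, using tricube distance weights w(x)
    normalized to sum to 1 (cf. formula (9))."""
    n = len(xs)
    out = []
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        d = [abs(xs[j] - xs[i]) for j in range(lo, hi)]   # formula (6)
        h = max(d) or 1.0                                  # window scale h_i
        w = [(1 - (dj / h) ** 3) ** 3 for dj in d]         # tricube weights
        total = sum(w)
        out.append(sum(wj * ys[j] for wj, j in zip(w, range(lo, hi))) / total)
    return out
```

On a flat profile the filter is an identity, which is a quick sanity check that the weights are properly normalized.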
Further, the step S2 of performing gradient correction on the preprocessed crosscut line data specifically includes the following steps:
step S21: based on the characteristic that the lane edges are rolled over by tires less frequently and therefore have a low probability of deformation, take the 10 cm segments at both ends of the road surface cross-section line data and fit them to establish the undeformed horizontal axis of the road surface;
step S22: extracting an inclination angle acquired by an inertial navigation system as original data of road surface transverse gradient measurement; calculating the inclination angle of the detection vehicle relative to the road surface by combining the road surface transverse section line data measured by the three-dimensional line scanning laser system and the formula (12); finally, acquiring the real transverse gradient of the road surface by using a formula (13);
γ = arctan((h_L - h_R) / L)  (12)
S_c = tan(θ) + tan(γ)  (13)
in the formula, γ represents the angle between the survey vehicle and the road surface, in degrees; θ represents the inclination angle acquired by the inertial navigation system, in degrees; L represents the length of the road surface cross-section line data, in mm; h_L represents the average height of the left-end 10 cm segment of the cross-section line, in mm; h_R represents the average height of the right-end 10 cm segment, in mm; S_c represents the transverse slope of the road surface, in m/m;
step S23: translate the cross-section line data together with the undeformed pavement axis until the left end point coincides with the origin, then rotate the translated cross-section line about the origin until the slope of the undeformed axis matches the transverse slope of the road surface; the slope-corrected cross-section line data are obtained by equations (14) to (16);
x' = x·cos α + y·sin α  (14)
y' = -x·sin α + y·cos α  (15)
α = γ - arctan(S_c)  (16)
wherein x' represents the abscissa of the cross sectional line data after slope correction, and the unit is mm; y' represents the ordinate of the cross-sectional line data after the slope correction, and the unit is mm; α represents a rotation angle in degrees.
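A minimal sketch of the slope correction in step S23, with a helper that combines formulas (12), (13) and (16); the arctan form of formula (12) is an assumption, since that equation is not legible in the source.

```python
import math

def rotation_angle(theta_deg, h_left, h_right, length):
    """Formulas (12), (13), (16): vehicle-road angle gamma from the end-segment
    average heights (assumed form), transverse slope Sc = tan(theta) + tan(gamma),
    rotation angle alpha = gamma - arctan(Sc). All angles in degrees."""
    gamma = math.degrees(math.atan((h_left - h_right) / length))
    sc = math.tan(math.radians(theta_deg)) + math.tan(math.radians(gamma))
    return gamma - math.degrees(math.atan(sc))

def rotate_profile(points, alpha_deg):
    """Formulas (14)-(15): rotate (x, y) points about the origin by alpha."""
    a = math.radians(alpha_deg)
    return [(x * math.cos(a) + y * math.sin(a),
             -x * math.sin(a) + y * math.cos(a)) for x, y in points]
```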
Further, the step S3 specifically includes the following steps:
step S31: determine the rut valley-bottom points: first mark the midpoint of the cross-section line, then traverse every point of the cross-section line data from the midpoint towards the left and the right, and finally take the point on each side with the maximum vertical distance to the undeformed axis as the valley-bottom point of the left and the right rut, respectively;
step S32: determine the rut edge points: mark all intersection points of the cross-section line with the undeformed axis, search outward from the left and right valley-bottom points towards the two ends, and define the intersection point nearest to each valley-bottom point as a rut edge point; for a pavement with a central depression, the inner rut edge point is defined as the point between the two valley-bottom points that is closest to the undeformed axis.
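The traversal of steps S31 and S32 can be sketched as below, taking the undeformed axis as y = 0 after the corrections of steps S1 and S2; the function and its index conventions are illustrative assumptions, not the patent's code.

```python
def locate_rut_features(ys):
    """ys: corrected profile heights with the undeformed axis at y = 0.
    Returns (left valley index, right valley index, edge-point indices)."""
    n = len(ys)
    mid = n // 2
    # Step S31: deepest point on each side of the midpoint.
    left_v = min(range(0, mid), key=lambda i: ys[i])
    right_v = min(range(mid, n), key=lambda i: ys[i])

    # Step S32: walk outward from a valley to the nearest axis crossing.
    def nearest_zero_crossing(start, step):
        i = start
        while 0 < i < n - 1:
            if ys[i] <= 0.0 <= ys[i + step] or ys[i + step] <= 0.0 <= ys[i]:
                return i
            i += step
        return i

    edges = {
        "left_outer": nearest_zero_crossing(left_v, -1),
        "left_inner": nearest_zero_crossing(left_v, +1),
        "right_inner": nearest_zero_crossing(right_v, -1),
        "right_outer": nearest_zero_crossing(right_v, +1),
    }
    return left_v, right_v, edges
```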
Further, the specific content of step S4 is:
measuring the three-dimensional size of the rut comprises calculating the rut width according to the formula (17); calculating the maximum depth of the rut according to the formula (18); calculating the rut depression area according to formula (19); calculating the distance between the left track and the right track according to a formula (20); the slope of the inner wall of the track is the slope value of a fitted straight line of a cross-section line segment between the edge point of the track and the valley bottom point;
W = |x_O - x_I|  (17)
D = |y_V|  (18)
A = Σ_{j = x_I}^{x_O} |y'_j|  (19)
S = |x_V2 - x_V1|  (20)
in the formula, y'_j represents the height of the j-th point of the cross-section line data, in mm; W represents the rut width, in mm; x_O represents the abscissa of the outer rut edge point, in mm; x_I represents the abscissa of the inner rut edge point, in mm; D represents the maximum rut depth, in mm; x_V represents the abscissa of the rut valley-bottom point, in mm; x_V1 represents the abscissa of the inner rut valley-bottom point, in mm; x_V2 represents the abscissa of the outer rut valley-bottom point, in mm; y_V represents the height of the rut valley-bottom point on the cross-section line data, in mm; A represents the rut depression area, in mm²; S represents the spacing between the left and right ruts, in mm.
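Assuming a corrected profile sampled at a uniform spacing dx with the undeformed axis at y = 0, formulas (17) to (20) reduce to the following sketch; the discrete-area approximation in rut_metrics is an assumption.

```python
def rut_metrics(ys, inner, outer, valley, dx=1.0):
    """Formulas (17)-(19) for one rut, on profile heights ys (mm):
    width from the edge-point indices, depth from the valley height,
    depression area as a discrete sum of negative heights times dx."""
    width = abs(outer - inner) * dx                        # formula (17)
    depth = -ys[valley]                                    # formula (18)
    lo, hi = sorted((inner, outer))
    area = sum(-y * dx for y in ys[lo:hi + 1] if y < 0)    # formula (19)
    return width, depth, area

def rut_spacing(left_valley, right_valley, dx=1.0):
    """Formula (20): spacing between the two valley-bottom points."""
    return abs(right_valley - left_valley) * dx
```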
Further, the step S5 of determining the lane center line position by recognizing the lane edge line through the two-dimensional road image, that is, the specific content of the lane edge line automatic recognition positioning algorithm, includes the following steps:
step S51: carrying out binarization processing on the collected 2D road surface image: firstly, filtering an image through wiener filtering; then, carrying out binarization processing on the image by utilizing the Otsu method, and determining a binarization threshold value of the image according to formulas (21) to (23);
w_0 = Σ_{i=0}^{T} P_i  (21)
w_1 = Σ_{i=T+1}^{L-1} P_i = 1 - w_0  (22)
T* = argmax_{0 ≤ T ≤ L-1} [w_0 · w_1 · (μ_0 - μ_1)²]  (23)
where μ_0 and μ_1 denote the mean gray values of the background and foreground regions;
in the formula, T represents the image binarization threshold, ranging from 0 to L-1; L represents the maximum gray level in a single image; P_i represents the percentage of pixels with gray value i in the whole image, %; w_0 represents the percentage of background area in the image, %; w_1 represents the percentage of foreground area in the image, %;
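A self-contained sketch of the Otsu threshold selection used in step S51; the between-class-variance formulation below is the standard one and may differ in detail from formulas (21) to (23).

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level T maximising the between-class variance
    w0 * w1 * (mu0 - mu1)^2 over the histogram of `pixels`."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    prob = [h / n for h in hist]                      # P_i of formula (21)
    mu_total = sum(i * pi for i, pi in enumerate(prob))
    best_t, best_var = 0, -1.0
    w0 = 0.0       # cumulative background weight
    mu0_sum = 0.0  # cumulative background gray mass
    for t in range(levels):
        w0 += prob[t]
        mu0_sum += t * prob[t]
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0, mu1 = mu0_sum / w0, (mu_total - mu0_sum) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2              # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```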
step S52: lane edge-line recognition: apply dilation-erosion (morphological) operations to the binary image obtained in step S51 to eliminate the influence of paint loss and internal defects of the edge lines; then extract the lane edge lines with the Canny edge detection algorithm;
if the edge lines of the lanes at the two ends are captured, executing step S53 to determine the center line of the lane; if the edge line of the lane at one end is captured, executing step S54 to determine the center line of the lane; if the edge lines of the two end lanes are not captured, executing step S55 to determine the center line of the lane;
the lane center line is a connection line of the center points of the cross section of the lane, the position of the lane center line is defined through the lane edge lines on the two sides, namely the connection line of the center points which are equidistant from the lane edge lines on the two sides is the lane center line;
step S53: determining the lane center line under the condition that the edge lines of the lanes at the two ends are captured: firstly, extracting the position information of the center lines of the edge lines of the left lane and the right lane, measuring the lane width, and then taking the connecting line of the middle points of the two center lines as the center line of the lane;
step S54: determine the lane center line when the edge line of only one side is captured: first extract the position of the captured edge line's center line; then, using that edge line as the reference and the known lane width, take the line connecting the points offset by 1/2 of the lane width from the edge-line center as the lane center line;
step S55: determining the lane center line under the condition that the edge lines of the lanes at the two ends are not captured: firstly, extracting the position information of the lane central line at the tail end of the image in the front image and the rear image; the lane center line position of the image without capturing the lane edge line is then defined according to the position of the front and rear image lane center lines.
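The center-line rules of steps S53 and S54 amount to simple coordinate arithmetic; the function names and the pixel-coordinate convention below are illustrative, and the lane width used in the one-sided case is assumed to be carried over from frames where both edges were visible.

```python
def centre_from_both_edges(left_x, right_x):
    """Step S53: midpoint between the two edge-line centres (image x, px)."""
    return (left_x + right_x) / 2.0

def centre_from_one_edge(edge_x, lane_width, side="left"):
    """Step S54: offset half a lane width from the single visible edge line."""
    return edge_x + lane_width / 2.0 if side == "left" else edge_x - lane_width / 2.0
```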
Further, the specific content of step S6 is: determine the positions of the left and right ruts within the lane from the positions of their valley-bottom points and the lane center line using formula (24); the rut offset is the distance between the midpoint of the left and right valley-bottom points and the lane center line, calculated by formula (25),
[Formulas (24) and (25), which give the valley-point position P_vi relative to the lane center line and the rut offset DE_ruts, appear as images in the original and are not reproduced here.]
in the formula, P_vi represents the position of a rut valley-bottom point relative to the lane center line, in mm; x_LC represents the abscissa of the lane center line in the image, in mm; S_c represents the transverse slope of the road surface, in m/m; DE_ruts represents the transverse offset of the ruts, in mm; x_vi represents the coordinate of the rut valley-bottom point on the cross-section line data, in mm.
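Ignoring the transverse-slope term of formula (24) (i.e. assuming S_c = 0, since the exact formula is not legible in the source), step S6's position and offset measures can be sketched as:

```python
def valley_positions(valleys_x, lane_centre_x):
    """Formula (24) with Sc = 0 assumed: signed position of each rut
    valley-bottom point relative to the lane center line (mm)."""
    return [xv - lane_centre_x for xv in valleys_x]

def rut_offset(left_valley_x, right_valley_x, lane_centre_x):
    """Formula (25): offset of the midpoint of the left and right
    valley-bottom points from the lane center line (mm)."""
    return (left_valley_x + right_valley_x) / 2.0 - lane_centre_x
```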
Compared with the prior art, the invention has the following beneficial effects:
the method is suitable for detecting the track diseases on the pavement, provides more track information compared with the traditional detection method, and simultaneously eliminates the influences of pavement upheaval, push wave and center subsidence and the detection of the lane deviation on the track disease detection. The invention can be applied to the maintenance work of road pavement, and provides richer information for detecting the ruts on the pavement, thereby improving the reliability of the running of the pavement.
Drawings
FIG. 1 is a cross-sectional road data preprocessing process diagram according to an embodiment of the present invention; wherein, fig. 1(a) is an original cross-sectional line data diagram, fig. 1(b) is a cross-sectional line data diagram for correcting a splicing seam, fig. 1(c) is a cross-sectional line data diagram for removing high-frequency noise, fig. 1(d) is a cross-sectional line data diagram for establishing a non-deformation horizontal axis, and fig. 1(e) is a rotated cross-sectional line data diagram.
FIG. 2 is a view of locating rut feature points on a heaved (bulged) road surface according to an embodiment of the present invention; fig. 2(a) is a process diagram of midpoint labeling, fig. 2(b) is a process diagram of valley point searching, fig. 2(c) is a process diagram of intersection point determination, and fig. 2(d) is a process diagram of edge point location.
FIG. 3 is a diagram illustrating the positioning of rut feature points on a centrally recessed pavement in accordance with an embodiment of the present invention; fig. 3(a) is a process diagram of midpoint labeling, fig. 3(b) is a process diagram of valley point searching, fig. 3(c) is a process diagram of intersection point determination, and fig. 3(d) is a process diagram of edge point location.
Fig. 4 is a rut dimension measuring diagram of a road surface according to an embodiment of the invention; fig. 4(a) is a rut dimension measuring diagram of a heaved (bulged) road surface, and fig. 4(b) is a rut dimension measuring diagram of a centrally depressed road surface.
FIG. 5 is a diagram illustrating automatic lane edge line recognition according to an embodiment of the present invention; wherein, FIG. 5(a) is the collected original road surface picture, FIG. 5(b) is the road surface picture after binarization, FIG. 5(c) is the road surface picture of the edge detection lane edge line,
FIG. 5(d) is a road map locating the center line of the lane edge line.
FIG. 6 is a lane center line extraction plot for the case where both side lane edge lines are captured in accordance with an embodiment of the present invention; wherein the content of the first and second substances,
fig. 6(a) is a process diagram of lane center line extraction in which the edge lines of both the lanes are vertically photographed, fig. 6(b) is a process diagram of lane center line extraction in which the edge lines of both the lanes are obliquely photographed, and fig. 6(c) is a process diagram of lane center line extraction in which the edge lines of one of the lanes are partially photographed.
FIG. 7 is a lane center line extraction plot when the edge lines for one side of the lane are captured according to an example of the present invention; fig. 7(a) is a diagram showing a lane center line extraction process in which one-side lane edge lines are vertically photographed, fig. 7(b) is a diagram showing a lane center line extraction process in which one-side lane edge lines are obliquely photographed, and fig. 7(c) is a diagram showing a lane center line extraction process in which one-side lane edge lines are partially photographed.
Fig. 8 is a lane center line extraction diagram when neither of the two lane edge lines is captured according to the example of the present invention.
Fig. 9 is a road rut positioning diagram of an embodiment of the invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiment provides an automatic detection and positioning method for three-dimensional dimensions of an asphalt pavement rut by fusing multivariate data, which comprises the following steps:
step S1: preprocessing the acquired original road surface transverse cutting line data;
step S2: carrying out gradient correction on the preprocessed cross sectional line data;
step S3: automatically extracting edge points and valley bottom points of left and right ruts of the road surface by adopting a traversal algorithm;
step S4: measuring rut dimensions, comprising: width, maximum depth, inner wall slope, depressed area, and left and right rut spacing;
step S5: recognizing lane edge lines through a two-dimensional road image to determine the position of a lane center line;
step S6: and positioning the left and right tracks according to the positions of the lane edge lines and the lane center points, and measuring the offset of the left and right tracks relative to the lane center line.
In this embodiment, the preprocessing of the original road surface cross-sectional line data in step S1 specifically includes the following steps:
step S11: extracting a splicing seam area section of the data center part of the original pavement transverse section line through a formula (1), and calculating the height difference between adjacent points of the splicing seam area section through a formula (2);
(Equation (1) is rendered as an image in the original and is not reproduced here.)
M = y_s − y_{s−1}   (2)
where S denotes the splice seam region at the center of the cross-section line data; f(x_s, y_s) denotes the road surface cross-section line data; n denotes the total number of pixel points in a single road surface cross-section line; M denotes the height difference between adjacent points in the splice seam region, in mm;
step S12: searching for and locating the jump points of the splice seam region through formula (3); referring to fig. 1(a), determining the cross-section line displacement for eliminating the splice seam through formula (4);
(Equation (3) is rendered as an image in the original and is not reproduced here.)
m_k = y_j − y_{j−1}   (4)
where J_k(x_j, y_j) denotes a jump point of the splice seam region; l denotes the cross-section line data collected by the left camera, l = 1, 2, …, n/2; r denotes the cross-section line data collected by the right camera, r = (n/2)+1, (n/2)+2, …, n; m_k denotes the cross-section line displacement for eliminating the splice seam, in mm; k denotes the number of jump points in the splice seam region;
step S13: shifting all points after the jump point on the cross-section line data up or down by the displacement m, thereby eliminating the splice seam; the cross-section line data after seam elimination are shown in fig. 1(b);
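Steps S11 to S13 can be sketched as a short routine. This is a minimal illustration, not the patent's exact method: it assumes a 1-D height profile whose left and right halves come from the two cameras, and the seam half-width and jump threshold are arbitrary illustrative values (the patent's equations (1)–(4) define the exact region and displacement).

```python
import numpy as np

def remove_splice_seam(profile, seam_halfwidth=10, jump_threshold=2.0):
    """Remove the height jump at the seam between left- and right-camera data.

    profile: 1-D array of cross-section heights (mm); the left half comes from
    the left camera and the right half from the right camera. Parameter names
    and thresholds are illustrative, not taken from the patent.
    """
    y = np.asarray(profile, dtype=float).copy()
    n = len(y)
    mid = n // 2
    lo, hi = mid - seam_halfwidth, mid + seam_halfwidth
    # Height differences between adjacent points inside the seam region (cf. eq. 2)
    diffs = np.diff(y[lo:hi])
    # Locate the jump point: largest adjacent height difference (cf. eq. 3)
    k = int(np.argmax(np.abs(diffs)))
    m = diffs[k]                      # displacement to eliminate (cf. eq. 4)
    if abs(m) >= jump_threshold:
        # Shift every point after the jump up or down by m (step S13)
        y[lo + k + 1:] -= m
    return y
```

A profile with a 5 mm step between its halves comes back flat, while a seamless profile is left untouched because the jump falls below the threshold.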
step S14: fitting the model of formula (5), which is rendered only as an image in the original, and calculating the distance between x_i and x_j through formula (6);
d_ij = |x_i − x_j|   (6)
where β denotes a parameter of the cross-section line x; ε denotes independently distributed random error; d_ij denotes the distance between x_i and x_j, with i = 1, 2, …, n and j = 1, 2, …, n; x_i and x_j are the abscissas of two points on the cross-section line;
step S15: determining the size of a moving window for cross-section line smoothing processing through a formula (7), calculating weight values of all points in the moving window by using a formula (8), wherein the weights meet the requirements in a formula (9), and determining filtering parameters through formulas (10) and (11); the effect of data smoothing is achieved, see fig. 1 (c).
(Equations (7)–(11) are rendered as images in the original and are not reproduced here.)
where r denotes the width of the moving window; h_i denotes the r-th value of d_ij; w(x) denotes the weight coefficient; k denotes the number of cross-section line points in the moving window; the remaining symbol, rendered as an image in the original, denotes the filter parameter.
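Step S15's moving-window weighted smoothing resembles LOESS-style local filtering. The sketch below assumes the tricube weight w(u) = (1 − |u|³)³ commonly used for such smoothers; this is an assumption, since the patent's weight function (equations (7)–(11)) appears only as an image.

```python
import numpy as np

def loess_smooth(y, window=11):
    """Moving-window weighted smoothing, one common reading of step S15.

    y: 1-D array of cross-section heights. window: moving-window width in
    points (illustrative default). The tricube weight is assumed, not taken
    from the patent.
    """
    y = np.asarray(y, dtype=float)
    half = window // 2
    out = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        idx = np.arange(lo, hi)
        u = (idx - i) / (half + 1)          # normalized distance within the window
        w = (1 - np.abs(u) ** 3) ** 3       # tricube weights (assumed form)
        out[i] = np.sum(w * y[idx]) / np.sum(w)   # weighted local mean
    return out
```

Because each output value is a weighted mean of its neighbors, a constant profile passes through unchanged and high-frequency noise is attenuated.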
In this embodiment, the step S2 of performing gradient correction on the preprocessed crosscut line data specifically includes the following steps:
step S21: since the lane edges are rolled over by tires fewer times and thus have a low probability of deformation, taking 10 cm segments at both ends of the road surface cross-section line data and fitting them to establish the undeformed horizontal axis of the road surface; see fig. 1(d).
Step S22: extracting the inclination angle (roll) acquired by the inertial navigation system as the raw data for road surface cross slope measurement; this raw data is affected by the tilt of the detection vehicle relative to the road surface and therefore contains some error. The tilt angle of the detection vehicle relative to the road surface is calculated by combining the cross-section line data measured by the three-dimensional line-scan laser system with formula (12); finally, the true road surface cross slope is obtained through formula (13);
(Equation (12) is rendered as an image in the original and is not reproduced here.)
S_c = tan(θ) + tan(γ)   (13)
where γ denotes the angle between the detection vehicle and the road surface, in degrees; θ denotes the inclination angle acquired by the inertial navigation system, in degrees; L denotes the length of the road surface cross-section line data, in mm; h_L denotes the average height of the left-end 10 cm segment of cross-section line data, in mm; h_R denotes the average height of the right-end 10 cm segment of cross-section line data, in mm; S_c denotes the road surface cross slope, in m/m;
step S23: translating the cross-section line data (together with the road surface undeformed axis) until its left end point coincides with the origin, and rotating the translated cross-section line about the origin until the slope of the undeformed axis matches the road surface cross slope; see fig. 1(e). Slope-corrected cross-section line data are obtained through equations (14) to (16);
x′ = x·cos α + y·sin α   (14)
y′ = −x·sin α + y·cos α   (15)
α = γ − arctan(S_c)   (16)
wherein x' represents the abscissa of the cross sectional line data after slope correction, and the unit is mm; y' represents the ordinate of the cross-sectional line data after the slope correction, and the unit is mm; α represents a rotation angle in degrees.
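The translation-and-rotation of step S23 is a planar rotation about the origin; equations (14)–(16) can be sketched as below. Parameter names are illustrative, and the points are assumed to be already translated so the left end of the undeformed axis sits at the origin.

```python
import math

def slope_correct(points, gamma_deg, cross_slope):
    """Apply the rotation of equations (14)-(16).

    points: list of (x, y) coordinates in mm, already translated so the left
    end point of the undeformed axis coincides with the origin.
    gamma_deg: vehicle-to-road angle gamma, in degrees.
    cross_slope: road surface cross slope S_c, in m/m.
    """
    alpha = math.radians(gamma_deg) - math.atan(cross_slope)   # eq. (16)
    c, s = math.cos(alpha), math.sin(alpha)
    # Standard planar rotation about the origin (eqs 14-15)
    return [(x * c + y * s, -x * s + y * c) for x, y in points]
```

With γ = 0 and S_c = 0 the rotation angle is zero and the profile is unchanged; a 90° angle rotates the point (1, 0) onto (0, −1).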
In this embodiment, the step S3 specifically includes the following steps:
step S31: determination of the rut valley bottom points (see fig. 2): first mark the midpoint of the cross-section line, then traverse every point on the cross-section line data from the midpoint to the left and to the right, and finally take, on each side, the point with the maximum vertical distance to the undeformed axis as the valley bottom point of the left or right rut;
the rut valley bottom points for rut pavements with a central depression are determined in the same way (see fig. 3): mark the midpoint of the cross-section line, traverse every point from the midpoint to the left and to the right, and take, on each side, the point with the maximum vertical distance to the undeformed axis as the valley bottom point of the left or right rut;
step S32: determination of the rut edge points (see fig. 2): mark all intersection points of the cross-section line with the undeformed axis, traverse outward from the left and right valley bottom points toward both ends, and define the intersection points closest to each valley bottom point as the rut edge points.
The rut edge points for rut pavements with a central depression are determined as follows (see fig. 3): mark all intersection points of the cross-section line with the undeformed axis; traverse outward from the left and right valley bottom points toward the outer ends and define the intersection point closest to each valley bottom point as that rut's outer edge point; then search between the two valley bottom points, where the point closest to the undeformed axis is the common inner edge point of the left and right ruts.
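The traversal searches of steps S31 and S32 can be illustrated as follows. This is a simplified sketch, not the patent's exact algorithm: it assumes a slope-corrected profile with the undeformed axis at height zero, takes the deepest point in each half as the valley bottom, and takes the nearest axis crossings around each valley as its edge points.

```python
import numpy as np

def find_rut_points(y):
    """Traversal search for rut valley bottoms and edge points.

    y: heights (mm) relative to the undeformed axis (axis at height 0).
    Returns ((left_valley, (left_edge, right_edge)),
             (right_valley, (left_edge, right_edge))) as indices.
    """
    y = np.asarray(y, dtype=float)
    mid = len(y) // 2
    v_left = int(np.argmin(y[:mid]))          # deepest point in the left half
    v_right = mid + int(np.argmin(y[mid:]))   # deepest point in the right half
    # Indices where the profile crosses the undeformed axis
    crossings = np.where(np.diff(np.sign(y)) != 0)[0]

    def nearest_edges(v):
        left = [c for c in crossings if c < v]
        right = [c for c in crossings if c >= v]
        return (left[-1] if left else 0, right[0] if right else len(y) - 1)

    return (v_left, nearest_edges(v_left)), (v_right, nearest_edges(v_right))
```

On a profile with two rectangular dips, the function returns the first deepest index of each dip and the crossings bracketing it.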
In this embodiment, the specific content of step S4 is:
measuring the three-dimensional size of the rut comprises calculating the rut width according to the formula (17); calculating the maximum depth of the rut according to the formula (18); calculating the rut depression area according to formula (19); calculating the distance between the left track and the right track according to a formula (20); the slope of the inner wall of the track is the slope value of a fitted straight line of a cross-section line segment between the edge point of the track and the valley bottom point; the definition of the dimensions of each rut is shown in FIG. 4.
(Equations (17)–(20) are rendered as images in the original and are not reproduced here.)
where y′_j denotes the height of the j-th point on the cross-section line data, in mm; W denotes the rut width, in mm; x_O denotes the abscissa of the rut outer edge point, in mm; x_I denotes the abscissa of the rut inner edge point, in mm; D denotes the maximum rut depth, in mm; x_V denotes the abscissa of the rut valley bottom point, in mm; x_V1 denotes the abscissa of the inner rut valley bottom point, in mm; x_V2 denotes the abscissa of the outer rut valley bottom point, in mm; y_V denotes the height of the rut valley bottom point on the cross-section line data, in mm; A denotes the rut depressed area, in mm²; S denotes the spacing between the left and right ruts, in mm.
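On that basis, the rut dimensions of step S4 can be sketched for a single rut. Since equations (17)–(20) are available only as images, this sketch uses a simple rectangle-rule sum for the depressed area and a two-point approximation for the inner-wall slope instead of a fitted line; it assumes a slope-corrected profile with the undeformed axis at height zero.

```python
import numpy as np

def rut_dimensions(y, inner, outer, valley, dx=1.0):
    """Measure one rut from slope-corrected heights y (mm, axis at 0).

    inner/outer: indices of the inner and outer edge points; valley: index of
    the valley bottom (must differ from the edge indices); dx: point spacing
    in mm (1 mm in the described system).
    """
    lo, hi = sorted((inner, outer))
    width = (hi - lo) * dx                                  # rut width
    depth = -y[valley]                                      # max depth below axis
    seg = np.asarray(y[lo:hi + 1], dtype=float)
    area = float(np.sum(np.clip(-seg, 0.0, None)) * dx)     # rectangle-rule depressed area
    slope_in = (y[valley] - y[lo]) / ((valley - lo) * dx)   # two-point inner-wall slope
    return width, depth, area, slope_in
```

For a symmetric V-shaped rut 20 mm wide and 10 mm deep, the width, depth, area (triangle, 100 mm²) and wall slope (−1) come out as expected.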
In this embodiment, step S5, in which the lane center line position is determined by recognizing lane edge lines in the two-dimensional road image (i.e., the automatic lane edge line recognition and positioning algorithm), includes the following steps:
step S51: carrying out binarization processing on the collected 2D road surface image: firstly, filtering an image through wiener filtering; then, the image is binarized by using the Otsu method, and the threshold value for image binarization is determined according to equations (21) to (23), as shown in FIG. 5.
(Equations (21)–(23) are rendered as images in the original and are not reproduced here.)
where T denotes the image binarization threshold, ranging from 0 to (L−1); L denotes the maximum gray value in a single image; P_i denotes the percentage of pixels with gray value i in the whole image, in %; w_0 denotes the percentage of background area in the image, in %; w_1 denotes the percentage of foreground area in the image, in %;
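The Otsu thresholding of step S51 selects the threshold maximizing the between-class variance w_0·w_1·(μ_0 − μ_1)². Below is a standard textbook implementation, not the patent's exact equations (21)–(23), which appear only as images in the source.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's between-class-variance maximization.

    gray: 2-D array of 8-bit gray values. Returns the threshold T in [0, 254];
    pixels with values above T are foreground.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                 # P_i: share of pixels at gray level i
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0 = p[:t + 1].sum()              # background weight
        w1 = 1.0 - w0                     # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, 256) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a bimodal image (gray levels 10 and 200), the chosen threshold lands between the two modes and cleanly separates foreground from background.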
step S52: lane edge line recognition: applying dilation and erosion operations to the binarized image obtained in step S51 to eliminate the influence of worn paint and internal defects in the edge lines; then extracting the lane edge lines with the Canny edge detection algorithm;
if the edge lines of the lanes at the two ends are captured, executing step S53 to determine the center line of the lane; if the edge line of the lane at one end is captured, executing step S54 to determine the center line of the lane; if the edge lines of the two end lanes are not captured, executing step S55 to determine the center line of the lane;
the lane center line is the line connecting the center points of the lane cross sections; its position is defined by the lane edge lines on both sides, i.e., the line of points equidistant from the two lane edge lines is the lane center line;
step S53: determining the lane center line under the condition that the edge lines of the lanes at the two ends are captured: firstly, the position information of the center lines of the edge lines of the left lane and the right lane is extracted, the lane width is measured, and then the connecting line of the middle points of the two center lines is taken as the center line of the lane, as shown in fig. 6.
Step S54: determining the lane center line when only the edge line of the lane at one end is captured: first, the position information of the center line of the captured lane edge line is extracted; then, according to the lane width and using that edge line as the reference, the line of points offset from its center line by half the lane width is taken as the lane center line, as shown in fig. 7.
Step S55: determining the lane center line under the condition that the edge lines of the lanes at the two ends are not captured: firstly, extracting the position information of the lane central line at the tail end of the image in the front image and the rear image; the lane center line position of the image without capturing the lane edge line is then defined according to the position of the front and rear image lane center lines, see fig. 8.
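The three-case center line logic of steps S53–S55 reduces to a small decision function. Parameter names below are illustrative (edge-line center positions, the known lane width, and a center position inherited from neighboring images); they are not taken from the patent.

```python
def lane_center(left_x, right_x, lane_width, prev_center=None):
    """Determine the lane center line x-position (steps S53-S55).

    left_x / right_x: center x of the detected left / right edge line, or None
    when that edge line was not captured; lane_width: known lane width in the
    same units; prev_center: center position carried over from the adjacent
    images, used when neither edge line is visible.
    """
    if left_x is not None and right_x is not None:
        return (left_x + right_x) / 2.0          # S53: midpoint of both edges
    if left_x is not None:
        return left_x + lane_width / 2.0         # S54: offset half a lane width
    if right_x is not None:
        return right_x - lane_width / 2.0
    return prev_center                           # S55: inherit from neighbors
```

All three branches agree on a 3500 mm lane with edge lines at 0 and 3500: the center is 1750 whether both, one, or (via the inherited value) neither edge line is visible.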
In this embodiment, the specific content of step S6 is: determining the position information of the left and right ruts in the lane by using a formula (24) according to the positions of valley bottom points of the left and right ruts and the center line of the lane; the rut offset is the distance between the midpoint of the bottom point of the left and right rut valley and the center line of the lane, and is calculated by a formula (25), and the specific definition is shown in fig. 9.
(Equations (24) and (25) are rendered as images in the original and are not reproduced here.)
where P_vi denotes the position of a rut valley bottom point relative to the lane center line, in mm; x_LC denotes the abscissa of the lane center line in the image, in mm; S_c denotes the road surface cross slope, in m/m; DE_ruts denotes the lateral rut offset, in mm; x_vi denotes the coordinate of a rut valley bottom point on the cross-section line data, in mm.
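A minimal reading of the rut offset of step S6 (equation (25) appears only as an image): the signed distance from the midpoint of the two valley bottoms to the lane center line, with all positions assumed to be expressed in the same lane-aligned coordinate.

```python
def rut_offset(xv_left, xv_right, x_lane_center):
    """Lateral rut offset: distance from the midpoint of the left and right
    rut valley bottom points to the lane center line. Positive values mean
    the rut pair sits to the right of the lane center. Sign convention and
    coordinate alignment are illustrative assumptions."""
    rut_center = (xv_left + xv_right) / 2.0
    return rut_center - x_lane_center
```

For valley bottoms at 1000 and 2000 mm and a lane center at 1400 mm, the rut pair is offset 100 mm to the right.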
Preferably, this embodiment designs a splice seam elimination algorithm for cross-section line data, eliminating the center seam of cross-section line data collected and stitched by the two camera groups; it then fuses inertial navigation data and cross-section line data to calculate the road surface cross slope, establishes the road surface undeformed axis, and rotates the cross-section line data together with the undeformed axis until the axis matches the road surface cross slope. Based on the established undeformed axis, different methods for determining rut edge points and valley bottom points are provided for pavements with deformation characteristics such as upheaval and central depression. Finally, combining the determined rut edge points and valley bottom points, the rut width, maximum depth, rut wall slope, depressed area, and left-right rut valley bottom spacing are measured.
Preferably, in the embodiment, the collected two-dimensional image of the original pavement is subjected to filtering and denoising processing, then the image is binarized, and the edge information of the lane line is extracted by using a Canny edge recognition algorithm; then, according to the extracted lane edge line position information, determining the position information of the lane center line on the picture; determining the position of the rut valley bottom point relative to the center of the lane by combining the rut valley bottom point and the position information of the lane center line on the transverse sectional line and the image; and finally measuring the offset of the centers of the left and right tracks relative to the center of the lane.
Preferably, the embodiment of the present invention is as follows:
(1) device parameters and working principle thereof
In this embodiment, a road detection vehicle integrating a three-dimensional line-scan laser system and an inertial navigation system is used to collect road surface data. The integrated system can simultaneously acquire fully matched two-dimensional pavement image data, three-dimensional pavement image data (consisting of pavement cross-section line data at 1 mm intervals), and inertial navigation data. The precision of the two-dimensional pavement image data is 1 mm; the three-dimensional pavement image data has a precision of 1 mm in the transverse and longitudinal directions and 0.3 mm in the vertical direction; the acquisition interval of the inertial navigation data is 10 cm. The three-dimensional line-scan laser system comprises two industrial cameras and two line-scan lasers; the image collected by each camera covers a 2 m × 2 m area of road surface, and after the images from the left and right cameras are stitched, a single image covers a 4 m transverse width and a 2 m longitudinal length of road surface. The system's acquisition speed can reach 100 km/h, and the acquired data are unaffected by illumination.
(2) Original pavement transverse cutting line data preprocessing
The original cross-section line data contain splice seams and high-frequency noise, which affect rut measurement and must first be preprocessed. First, the splice seam region segment at the center of the cross-section line data is extracted, and the height difference between adjacent points in that segment is calculated; then the jump points of the splice seam region are searched for and located, and the cross-section line displacement for eliminating the splice seam is determined; all points after the jump point on the cross-section line data are shifted up or down by the displacement m, eliminating the splice seam; finally, the seam-free cross-section line data are smoothed;
(3) road surface cross section data slope correction
The slope of the cross-sectional line data acquired by the line scanning laser system reflects the inclination angle between the detection vehicle and the road surface and does not represent the actual cross slope of the road surface. Therefore, it is necessary to perform gradient correction on the crosscut line data so as to be consistent with the actual transverse gradient of the road surface. Firstly, based on the characteristics that the number of times of rolling by tires is small and the deformation probability is small at the edge of a lane, taking 10cm sections at two ends of the cross section line data of the road surface, and fitting to establish a non-deformation horizontal axis of the road surface; and then measuring the actual transverse gradient of the road surface by combining inertial navigation data and transverse section line data: extracting an inclination angle (Roll) acquired by an inertial navigation system as original data of road surface transverse gradient measurement, calculating an inclination angle of the detection vehicle relative to the road surface through road surface transverse section line data measured by line scanning laser, and acquiring a real transverse gradient of the road surface; and translating the transverse section line data (including the road surface non-deformation shaft) until the left end point coincides with the original point, and rotating the transverse section line after translation around the original point until the gradient of the road surface non-deformation shaft is consistent with the road surface transverse gradient.
(4) Rut edge point and valley point search
Determining the rut edge points and valley bottom points is a prerequisite for rut measurement. In this embodiment, separate search algorithms are designed for the two rut pavement cases, upheaval and central depression. For rut pavements with upheaval, the rut valley bottom points are determined as follows: first mark the midpoint of the cross-section line, then traverse every point on the cross-section line data from the midpoint to the left and to the right, and finally take, on each side, the point with the maximum vertical distance to the undeformed axis as the valley bottom point of the left or right rut. The rut edge points for pavements with upheaval are determined as follows: mark all intersection points of the cross-section line with the undeformed axis, traverse outward from the left and right valley bottom points toward both ends, and define the intersection points closest to each valley bottom point as the rut edge points. For rut pavements with a central depression, the valley bottom points are determined in the same way: mark the midpoint of the cross-section line, traverse every point from the midpoint to the left and to the right, and take, on each side, the point with the maximum vertical distance to the undeformed axis as the valley bottom point of the left or right rut.
For a rut pavement with a central depression, the rut edge points are determined as follows: mark all intersection points of the cross-section line with the undeformed axis; traverse outward from the left and right valley bottom points toward the outer ends and define the intersection point closest to each valley bottom point as that rut's outer edge point; then search between the two valley bottom points, where the point closest to the undeformed axis is the common inner edge point of the left and right ruts.
(5) Automatic detection of three-dimensional size of rut
In this embodiment, on the basis of the determined rut edge points and valley bottom points, three-dimensional rut size measurement is performed on the preprocessed and slope-corrected cross-section line data. The main measured rut dimensions include: left and right rut width, left and right rut maximum depth, left and right rut cross-section depressed area, left and right rut inner and outer wall slope, and left-right rut valley bottom spacing. Separate algorithms are designed for three-dimensional rut size detection in the two rut pavement cases, upheaval and central depression.
(6) Lane edge line identification and lane center line positioning
Recognizing and locating the lane edge lines in the image eliminates the influence of the lateral drift of the detection vehicle during driving and allows the ruts to be positioned. First, the image is filtered with a Wiener filter, and the two-dimensional pavement image is binarized with the Otsu method; lane edge lines are then recognized on the binarized image, the influence of worn paint and internal defects in the edge lines is eliminated through dilation and erosion operations, and the lane line edges are extracted with the Canny edge detection algorithm. This embodiment determines the lane center line position in three cases, according to the number of lane edge lines captured in a single image. When both edge lines are captured: first extract the position information of the center lines of the left and right lane edge lines and measure the lane width, then take the line connecting the midpoints of the two center lines as the lane center line. When only one edge line is captured: first extract the position information of the center line of the captured edge line, then, using that edge line as the reference and according to the lane width, take the line of points offset from its center line by half the lane width as the lane center line.
Determining the lane center line under the condition that the edge lines of the lanes at the two ends are not captured: firstly, extracting the position information of the lane central line at the tail end of the image in the front image and the rear image; the lane center line position of the image without capturing the lane edge line is then defined according to the position of the front and rear image lane center lines.
(7) Left and right rut positioning and rut offset measurement
The position information of the left and right ruts within the lane (the transverse position of each rut valley bottom point relative to the lane center line) is determined from the abscissas of the left and right rut valley bottom points and of the lane center line on the image and on the cross-section line. For rut offset measurement, the midpoint of the left and right rut valley bottom points is first defined as the center position of the left and right ruts; the distance between this rut center and the lane center line is then measured and defined as the rut offset.
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (7)

1. A method for automatically detecting and positioning the three-dimensional dimensions of asphalt pavement ruts by fusing multivariate data, characterized by comprising the following steps:
step S1: preprocessing the acquired original road surface transverse cutting line data;
step S2: carrying out gradient correction on the preprocessed cross sectional line data;
step S3: automatically extracting edge points and valley bottom points of left and right ruts of the road surface by adopting a traversal algorithm;
step S4: measuring rut dimensions, comprising: width, maximum depth, inner wall slope, depressed area, and left and right rut spacing;
step S5: recognizing lane edge lines through a two-dimensional road image to determine the position of a lane center line;
step S6: and positioning the left and right tracks according to the positions of the lane edge lines and the lane center points, and measuring the offset of the left and right tracks relative to the lane center line.
2. The method for automatically detecting and positioning the three-dimensional dimensions of the tracks on the asphalt pavement by fusing the multivariate data as recited in claim 1, wherein the method comprises the following steps: the preprocessing of the original road surface cross-sectional line data described in step S1 specifically includes the steps of:
step S11: extracting a splicing seam area section of the data center part of the original pavement transverse section line through a formula (1), and calculating the height difference between adjacent points of the splicing seam area section through a formula (2);
M = y_s − y_{s−1}   (2)
where S denotes the splice seam region at the center of the cross-section line data; f(x_s, y_s) denotes the road surface cross-section line data; n denotes the total number of pixel points in a single road surface cross-section line; M denotes the height difference between adjacent points in the splice seam region, in mm;
step S12: searching and positioning jumping points of the splicing seam area through a formula (3); determining the transverse section line displacement for eliminating the splicing seam through a formula (4);
(Equation (3) is rendered as an image in the original and is not reproduced here.)
m_k = y_j − y_{j−1}   (4)
where J_k(x_j, y_j) denotes a jump point of the splice seam region; l denotes the cross-section line data collected by the left camera, l = 1, 2, …, n/2; r denotes the cross-section line data collected by the right camera, r = (n/2)+1, (n/2)+2, …, n; m_k denotes the cross-section line displacement for eliminating the splice seam, in mm; k denotes the number of jump points in the splice seam region;
step S13: shifting all points after the jump point on the cross-section line data up or down by the displacement m so as to eliminate the splice seam;
step S14: fitting the model of formula (5), and calculating the distance between x_i and x_j through formula (6);
(Equation (5) is rendered as an image in the original and is not reproduced here.)
d_ij = |x_i − x_j|   (6)
where β denotes a parameter of the cross-section line x; ε denotes independently distributed random error; d_ij denotes the distance between x_i and x_j, with i = 1, 2, …, n and j = 1, 2, …, n; x_i and x_j are the abscissas of two points on the cross-section line;
step S15: determining the size of a moving window for cross-section line smoothing processing through a formula (7), calculating weight values of all points in the moving window by using a formula (8), wherein the weights meet the requirements in a formula (9), and determining filtering parameters through formulas (10) and (11);
(Equations (7)–(11) are rendered as images in the original and are not reproduced here.)
where r denotes the width of the moving window; h_i denotes the r-th value of d_ij; w(x) denotes the weight coefficient; k denotes the number of cross-section line points in the moving window; the remaining symbol, rendered as an image in the original, denotes the filter parameter.
3. The method for automatically detecting and positioning the three-dimensional dimensions of the tracks on the asphalt pavement by fusing the multivariate data as recited in claim 1, wherein the method comprises the following steps: the step S2 of performing gradient correction on the preprocessed crosscut line data specifically includes the following steps:
step S21: based on the characteristics that the number of times of rolling by tires on the edge of a lane is small and the deformation probability is small, taking 10cm sections at two ends of the cross section line data of the road surface, and fitting to establish a non-deformation horizontal axis of the road surface; step S22: extracting an inclination angle acquired by an inertial navigation system as original data of road surface transverse gradient measurement; calculating the inclination angle of the detection vehicle relative to the road surface by combining the road surface transverse section line data measured by the three-dimensional line scanning laser system and the formula (12); finally, acquiring the real transverse gradient of the road surface by using a formula (13);
S_c = tan(θ) + tan(γ)   (13)
where γ denotes the angle between the detection vehicle and the road surface, in degrees; θ denotes the inclination angle acquired by the inertial navigation system, in degrees; L denotes the length of the road surface cross-section line data, in mm; h_L denotes the average height of the left-end 10 cm segment of cross-section line data, in mm; h_R denotes the average height of the right-end 10 cm segment of cross-section line data, in mm; S_c denotes the road surface cross slope, in m/m;
step S23: translating the cross-section line data (together with the road surface undeformed axis) until its left end point coincides with the origin, and rotating the translated cross-section line about the origin until the slope of the undeformed axis matches the road surface cross slope; slope-corrected cross-section line data are obtained through equations (14) to (16);
x′=x·cosα+y·sinα (14)
y′ = −x·sin α + y·cos α   (15)
α=γ-arctan(Sc) (16)
wherein x' represents the abscissa of the cross sectional line data after slope correction, and the unit is mm; y' represents the ordinate of the cross-sectional line data after the slope correction, and the unit is mm; α represents a rotation angle in degrees.
4. The method for automatically detecting and positioning the three-dimensional size of asphalt pavement ruts by fusing multivariate data as recited in claim 1, wherein step S3 specifically comprises the following steps:
step S31: determining the rut valley bottom points: first mark the midpoint of the cross-section line; then traverse every point of the cross-section line data from the midpoint toward the left and toward the right; finally take, on each side, the point with the maximum vertical distance to the undeformed axis as the valley bottom point of the left rut and of the right rut, respectively;
step S32: determining the rut edge points: mark all intersection points of the cross-section line with the undeformed axis; traverse from the left and right valley bottom points toward both ends; and define, at each end, the intersection point closest to the valley bottom point as a rut edge point; for a pavement with a central depression, the inner rut edge point is defined as the point between the two valley bottom points that is closest to the undeformed axis.
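A minimal sketch of steps S31 to S32, assuming a slope-corrected profile sampled at uniform spacing, with the undeformed axis at y = 0 and rut depressions below it:

```python
def rut_feature_points(profile):
    """Sketch of steps S31-S32: find the left/right rut valley bottoms (the
    deepest point on each half of the profile), then walk outward from each
    valley to the nearest point back on the undeformed axis (y >= 0), which
    serves as a rut edge point. Returns (left_valley, right_valley, edges)."""
    mid = len(profile) // 2
    left_v = min(range(0, mid), key=lambda j: profile[j])
    right_v = min(range(mid, len(profile)), key=lambda j: profile[j])

    def edge(from_idx, step):
        # Traverse from the valley until the profile returns to the axis.
        j = from_idx
        while 0 <= j + step < len(profile) and profile[j] < 0:
            j += step
        return j

    edges = (edge(left_v, -1), edge(left_v, +1),
             edge(right_v, -1), edge(right_v, +1))
    return left_v, right_v, edges
```

On a symmetric two-rut "W" profile this picks the two depression minima and their four surrounding axis crossings.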
5. The method for automatically detecting and positioning the three-dimensional size of asphalt pavement ruts by fusing multivariate data as recited in claim 1, wherein the specific content of step S4 is:
measuring the three-dimensional size of a rut comprises: calculating the rut width by formula (17); calculating the maximum rut depth by formula (18); calculating the rut depression area by formula (19); and calculating the distance between the left and right ruts by formula (20); the slope of the rut inner wall is the slope of a straight line fitted to the cross-section line segment between the rut edge point and the valley bottom point;
W=|xO−xI| (17)
D=|yV| (18)
A=Σ|y′j|·Δx, the sum taken over the points j between xI and xO, with Δx the lateral point spacing (19)
S=|xV2−xV1| (20)
in the formula, y′j represents the height value of the jth point of the cross-section line data, in mm; W represents the rut width, in mm; xO represents the abscissa of the outer rut edge point, in mm; xI represents the abscissa of the inner rut edge point, in mm; D represents the maximum rut depth, in mm; xV represents the abscissa of the rut valley bottom point, in mm; xV1 represents the abscissa of the inner rut valley bottom point, in mm; xV2 represents the abscissa of the outer rut valley bottom point, in mm; yV represents the height value of the rut valley bottom point on the cross-section line data, in mm; A represents the rut depression area, in mm²; S represents the spacing between the left and right ruts, in mm.
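Under the same assumptions (undeformed axis at y = 0, uniform point spacing), the four measurements of step S4 can be sketched as below; the exact forms of formulas (17) to (19) are inferred from the symbol list, not reproduced from the original equation images.

```python
def rut_dimensions(profile, spacing_mm, valley, inner_edge, outer_edge, other_valley):
    """Sketch of step S4 for one rut of a slope-corrected profile.

    profile: heights in mm (undeformed axis at y = 0); spacing_mm: lateral
    point spacing; valley / inner_edge / outer_edge / other_valley: sample
    indices of the feature points found in step S3.
    Returns (width W, max depth D, depression area A, rut spacing S).
    """
    lo, hi = sorted((inner_edge, outer_edge))
    width = (hi - lo) * spacing_mm                      # (17) W = |xO - xI|
    depth = abs(profile[valley])                        # (18) D = |yV|
    # (19) depression area: sum of depths below the axis between the edges.
    area = sum(abs(min(profile[j], 0.0)) for j in range(lo, hi + 1)) * spacing_mm
    rut_spacing = abs(other_valley - valley) * spacing_mm  # (20) S = |xV2 - xV1|
    return width, depth, area, rut_spacing
```

Applied to the left rut of a synthetic two-rut profile, it returns its width, depth, area, and the distance to the other valley.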
6. The method for automatically detecting and positioning the three-dimensional size of asphalt pavement ruts by fusing multivariate data as recited in claim 1, wherein in step S5 the position of the lane center line is determined by recognizing the lane edge lines in the two-dimensional road image; the automatic lane edge line recognition and positioning algorithm specifically comprises the following steps:
step S51: binarizing the collected 2D pavement image: first filter the image with a Wiener filter; then binarize the image with the Otsu method, determining the binarization threshold according to formulas (21) to (23);
w0=ΣPi for i=0 to T−1 (21)
w1=ΣPi for i=T to L−1 (22)
T*=argmax[w0·w1·(μ0−μ1)²] over 0≤T≤L−1, where μ0 and μ1 are the mean gray values of the background and foreground classes (23)
in the formula, T represents the image binarization threshold, ranging from 0 to (L−1); L represents the maximum gray value of a single image; Pi represents the percentage of pixels with gray value i in the whole image, %; w0 represents the percentage of the background region in the image, %; w1 represents the percentage of the foreground region in the image, %;
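The Otsu threshold selection of step S51 can be sketched from a gray-level histogram; this is the standard between-class-variance formulation, used here because the original equation images for (21) to (23) are not legible in this text.

```python
def otsu_threshold(hist):
    """Sketch of the Otsu step in S51: choose the threshold T maximizing the
    between-class variance w0*w1*(mu0-mu1)^2, where Pi is the share of pixels
    at gray level i and w0/w1 are the background/foreground weights."""
    total = float(sum(hist))
    p = [h / total for h in hist]                       # Pi, as fractions
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(p[:t])                                 # formula (21)
        w1 = 1.0 - w0                                   # formula (22)
        if w0 == 0.0 or w1 == 0.0:
            continue                                    # one class empty: skip
        mu0 = sum(i * p[i] for i in range(t)) / w0      # background mean
        mu1 = sum(i * p[i] for i in range(t, len(hist))) / w1  # foreground mean
        var = w0 * w1 * (mu0 - mu1) ** 2                # formula (23)
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

For a cleanly bimodal histogram the returned threshold falls between the two modes.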
step S52: lane edge line recognition: apply morphological dilation and erosion to the binary image obtained in step S51 to eliminate the influence of worn edge-line paint and internal defects; then extract the lane edge lines with the Canny edge detection algorithm;
if the lane edge lines at both ends are captured, execute step S53 to determine the lane center line; if the lane edge line at only one end is captured, execute step S54 to determine the lane center line; if neither lane edge line is captured, execute step S55 to determine the lane center line;
the lane center line is the line connecting the midpoints of the lane cross-sections; its position is defined by the lane edge lines on both sides, i.e., the line of points equidistant from the two lane edge lines is the lane center line;
step S53: determining the lane center line when the lane edge lines at both ends are captured: first extract the position of the center line of each of the left and right lane edge lines and measure the lane width; then take the line connecting the midpoints between the two center lines as the lane center line;
step S54: determining the lane center line when the lane edge line at only one end is captured: first extract the position of the center line of the captured lane edge line; then, taking that edge line as the reference, take the line of points at a distance of 1/2 the lane width from its center line as the lane center line;
step S55: determining the lane center line when neither lane edge line is captured: first extract the lane center line positions at the adjacent ends of the preceding and following images; then define the lane center line position of the image without captured edge lines according to the positions of the lane center lines of the preceding and following images.
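The three branches S53 to S55 can be sketched as one dispatch function; the step S55 fallback is simplified here to the mean of the adjacent images' center positions, which is one assumed realization of "defined according to the position of the front and rear image lane center lines".

```python
def lane_center(left_edge_x=None, right_edge_x=None, lane_width=None,
                prev_center=None, next_center=None):
    """Sketch of steps S53-S55: locate the lane center line (a transverse
    position in the image, mm) from whichever lane edge lines were captured.

    left_edge_x / right_edge_x: center-line positions of the captured edge
    lines; lane_width: measured lane width, needed when only one edge is
    captured; prev_center / next_center: centers of the adjacent images.
    """
    if left_edge_x is not None and right_edge_x is not None:
        return (left_edge_x + right_edge_x) / 2.0       # S53: midpoint of both edges
    if left_edge_x is not None:
        return left_edge_x + lane_width / 2.0           # S54: offset from left edge
    if right_edge_x is not None:
        return right_edge_x - lane_width / 2.0          # S54: offset from right edge
    return (prev_center + next_center) / 2.0            # S55: from adjacent images
```

Each branch degrades gracefully as fewer edge lines are detected in the image.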
7. The method for automatically detecting and positioning the three-dimensional size of asphalt pavement ruts by fusing multivariate data as recited in claim 6, wherein the specific content of step S6 is: determining the position of the left and right ruts within the lane from the positions of the left and right rut valley bottom points and the lane center line using formula (24); the rut offset is the distance between the midpoint of the left and right valley bottom points and the lane center line, calculated by formula (25),
Pvi=(xvi−xLC)·cos(arctan(Sc)) (24)
DEruts=(Pv1+Pv2)/2 (25)
in the formula, Pvi represents the position of the rut valley bottom point relative to the lane center line, in mm; xLC represents the transverse image coordinate of the lane center line, in mm; Sc represents the pavement cross slope, in m/m; DEruts represents the transverse rut offset, in mm; xvi represents the coordinate of the rut valley bottom point in the cross-section line data, in mm.
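The equation image for formulas (24) and (25) is not legible in this text, so the sketch below assumes plausible forms consistent with the symbol list: the valley position is the signed distance to the lane center line projected with the cross slope Sc, and the offset is that distance taken at the midpoint of the two valleys.

```python
import math

def rut_position(x_valley, x_center, slope):
    """Assumed form of formula (24): valley bottom position relative to the
    lane center line, projected with the cross slope Sc (m/m)."""
    return (x_valley - x_center) * math.cos(math.atan(slope))

def rut_offset(x_left_valley, x_right_valley, x_center, slope):
    """Assumed form of formula (25): distance from the midpoint of the two
    valley bottom points to the lane center line."""
    mid = (x_left_valley + x_right_valley) / 2.0
    return rut_position(mid, x_center, slope)
```

On a level pavement (Sc = 0) the projection factor is 1 and both functions reduce to plain coordinate differences.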
CN201910897749.3A 2019-09-21 2019-09-21 Asphalt pavement rut three-dimensional size automatic detection and positioning method integrating multivariate data Active CN110674732B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910897749.3A CN110674732B (en) 2019-09-21 2019-09-21 Asphalt pavement rut three-dimensional size automatic detection and positioning method integrating multivariate data

Publications (2)

Publication Number Publication Date
CN110674732A true CN110674732A (en) 2020-01-10
CN110674732B CN110674732B (en) 2022-06-07

Family

ID=69078645


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101089548A (en) * 2007-07-06 2007-12-19 哈尔滨工业大学 3D information detection device and method for pavement treadway
WO2013180273A1 (en) * 2012-06-01 2013-12-05 株式会社日本自動車部品総合研究所 Device and method for detecting traffic lane boundary
CN104239628A (en) * 2014-09-10 2014-12-24 长安大学 Simulation analysis method for rut depth error caused by transverse offset of detection vehicle
CN108664715A (en) * 2018-04-26 2018-10-16 长安大学 A kind of surface gathered water track triple assessment and traffic safety analysis method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李中轶等: "基于路面3D激光点云数据的车辙自动测量与横向定位", 《传感技术学报》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112361977A (en) * 2020-11-10 2021-02-12 成都新西旺自动化科技有限公司 Linear distance measuring method based on weight distribution
CN112361977B (en) * 2020-11-10 2021-05-28 成都新西旺自动化科技有限公司 Linear distance measuring method based on weight distribution
CN114896767A (en) * 2022-04-21 2022-08-12 中冶(贵州)建设投资发展有限公司 Asphalt pavement track depth prediction method based on refined axle load effect
CN114896767B (en) * 2022-04-21 2024-04-19 中冶(贵州)建设投资发展有限公司 Asphalt pavement rut depth prediction method based on refined axle load effect
CN116091714A (en) * 2022-11-18 2023-05-09 东南大学 Automatic generation method of three-dimensional shape of rut of asphalt pavement
CN116091714B (en) * 2022-11-18 2023-10-31 东南大学 Automatic generation method of three-dimensional shape of rut of asphalt pavement



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Luo Wenting
Inventor after: Liu Lexuan
Inventor after: Li Lin

Inventor before: Luo Wenting
Inventor before: Li Lin

GR01 Patent grant