CN109919856B - Asphalt pavement structure depth detection method based on binocular vision - Google Patents
Asphalt pavement structure depth detection method based on binocular vision
- Publication number
- CN109919856B CN109919856B CN201910053244.9A CN201910053244A CN109919856B CN 109919856 B CN109919856 B CN 109919856B CN 201910053244 A CN201910053244 A CN 201910053244A CN 109919856 B CN109919856 B CN 109919856B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
Abstract
The invention provides a binocular vision-based asphalt pavement structure depth detection method, which comprises the following steps: 100. acquiring the internal and external parameters of a left camera and a right camera; 200. acquiring a left color image and a right color image of the asphalt pavement with the left and right cameras respectively; 300. converting the left color image and the right color image into a left gray image and a right gray image; 400. performing distortion correction on the left and right gray images to obtain a first corrected left gray image and a first corrected right gray image; 500. performing stereo correction on the first corrected left and right gray images to obtain a second corrected left gray image and a second corrected right gray image; 600. performing stereo matching on the second corrected left and right gray images; 700. eliminating stereo matching error values; 800. correcting the shooting angle error; 900. calculating the structure depth of the asphalt pavement. The invention is fast, efficient, resistant to interference and economical, and yields more accurate detection results.
Description
Technical Field
The invention relates to the detection of asphalt pavement structure depth in road engineering construction, and in particular to a binocular vision-based asphalt pavement structure depth detection method.
Background
The skid resistance of an asphalt pavement has an obvious influence on driving safety, and structure depth is an important index for evaluating it. The structure depth of an asphalt pavement is the average depth of the open voids in the uneven pavement surface, and reflects the roughness of the pavement. If the structure depth is too small, skid resistance drops, vehicles may slip, braking distances increase, and driving safety is seriously affected.
At present, asphalt pavement structure depth is detected mainly by the sand patch method, the laser texture depth meter method and digital image methods. The sand patch method is simple in principle and convenient to carry out, but extremely time-consuming; the laser texture depth meter method is accurate, but requires special and expensive equipment; digital image methods are fast and efficient, but easily disturbed by external illumination and by the pavement's own color. The existing detection methods therefore suffer from long detection times, high cost or susceptibility to interference.
It is therefore necessary to study a structure depth detection method that is fast, efficient, resistant to interference and economical.
Disclosure of Invention
To overcome these problems, the invention provides a fast, efficient, interference-resistant and economical asphalt pavement structure depth detection method.
The technical scheme of the invention is as follows: the binocular vision-based asphalt pavement structure depth detection method comprises the following steps:
100. acquiring internal parameters and external parameters of a left camera and a right camera;
200. respectively acquiring a left color image and a right color image of the asphalt pavement by using a left camera and a right camera;
300. processing the left color image and the right color image into a left gray image and a right gray image respectively;
400. respectively carrying out distortion correction on the left gray image and the right gray image according to internal parameters of a left camera and a right camera to obtain a first corrected left gray image and a first corrected right gray image;
500. respectively carrying out stereo correction on the first corrected left gray image and the first corrected right gray image according to the internal and external parameters of the left and right cameras, to obtain a second corrected left gray image and a second corrected right gray image;
600. performing stereo matching on the second corrected left gray image and the second corrected right gray image: identifying corresponding pixel points in the two images, calculating the parallax value d, calculating from d the height of each pixel point above the camera plane in the camera coordinate system, generating a model matrix M containing the pixel coordinates of each pixel point and the corresponding height value, and recovering the three-dimensional model of the road surface;
700. setting a threshold on the difference quotient of the height values of adjacent pixel points in the model matrix to locate stereo matching error values, and correcting them with a median filtering window to eliminate the errors;
800. fitting a plane to the model matrix M and subtracting the fitted plane from M, thereby correcting the shooting angle error caused by the camera optical axis not being perpendicular to the road surface during image acquisition;
900. and calculating the construction depth of the asphalt pavement.
As a modification of the present invention, in step 300, the method further includes the following steps: 301. converting the left color image and the right color image, via their red (R), green (G) and blue (B) channels and the following formula, into a left single-channel gray image and a right single-channel gray image;
f(x,y)=R(x,y)×0.299+G(x,y)×0.587+B(x,y)×0.114;
wherein f (x, y) is the gray value of the pixel point, and R (x, y), G (x, y) and B (x, y) are the values of the red, green and blue channels of the pixel point respectively.
As a modification of the present invention, in step 300, the method further includes the following steps:
302. and denoising the left single-channel gray image and the right single-channel gray image by using median filtering to obtain a left gray image and a right gray image.
As a modification of the present invention, step 400 further includes the following steps:
401. determining the camera distortion coefficients k1, k2 from the internal parameters according to the radial distortion model
u' = u + (u − u0)(k1·r² + k2·r⁴), v' = v + (v − v0)(k1·r² + k2·r⁴), with r² = x² + y²;
wherein k1, k2 are the distortion coefficients of the camera, (u, v) are the undistorted pixel coordinates, (x, y) are the undistorted continuous image coordinates, (u0, v0) are the pixel coordinates of the camera principal point, and (u', v') are the distorted pixel coordinates;
402. using the obtained distortion coefficients k1, k2, performing distortion correction on the left gray image and the right gray image by inverting this model, mapping each distorted pixel (u', v') back to its undistorted position (u, v).
As a modification of the present invention, step 500 further includes the following steps:
501. determining the relative position relationship between the left camera and the right camera;
502. decomposing the relative rotation matrix, using the Rodrigues transform, into composite rotation matrices r_l and r_r for the left and right images respectively;
503. calculating the respective rotation matrices R_lt and R_rt of the left and right images, then rotating the left image by R_lt and the right image by R_rt so that the epipolar lines of the two images become horizontal and the epipoles move to infinity, completing the stereo correction.
As an improvement of the present invention, step 600 further includes the following steps:
601. traversing each pixel point of the images with the semi-global block matching algorithm (SGBM), identifying the same pixel point on the second corrected left gray image and the second corrected right gray image, and calculating its parallax value d = x_l − x_r;
wherein x_l and x_r are the pixel abscissas of the same pixel point on the second corrected left and right gray images respectively, and z_c is the scale factor;
602. calculating the height value z of each pixel point from the camera plane as z = f·T_x / d;
wherein T_x is the component of the relative translation vector T along the horizontal axis, i.e. the horizontal distance between the left and right cameras, in mm; f is the camera focal length in mm; and d is the parallax value in mm;
603. combining the height values of all pixel points into the model matrix M and recovering the three-dimensional model of the road surface, where M contains the pixel coordinates of each pixel point on the image and the corresponding height values.
As a modification of the present invention, step 700 further includes the following steps:
701. calculating the first-order difference quotient of the height values of adjacent pixel points according to
k = (z_{i+1} − z_i) / (x_{i+1} − x_i),
and determining the positions of mismatched pixel points;
wherein k is the first-order difference quotient, x_i and x_{i+1} are the pixel abscissas of the i-th and (i+1)-th pixel points, and z_i and z_{i+1} are their height values in mm;
702. defining a point whose first-order difference quotient is larger than 1 as a mismatched point; using a 7 × 7 filtering window centred on the mismatched point, sorting the height values of all pixel points in the window from small to large, computing their median, replacing the erroneous value with this median, and outputting the replaced height value.
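A minimal one-dimensional sketch of this two-step procedure might look as follows in Python; the function name, the 1-D simplification of the 7 × 7 window, and the choice of flagging the end point of each jump are illustrative assumptions, not the patent's implementation:

```python
from statistics import median

def remove_matching_errors(xs, zs, threshold=1.0, window=7):
    """Flag stereo-matching errors where the first-order difference
    quotient k = (z[i+1] - z[i]) / (x[i+1] - x[i]) exceeds the threshold,
    then replace each flagged height with the median of a window centred
    on it (a 1-D sketch of the patent's 7x7 two-dimensional filter)."""
    flagged = set()
    for i in range(len(zs) - 1):
        k = (zs[i + 1] - zs[i]) / (xs[i + 1] - xs[i])
        if abs(k) > threshold:
            flagged.add(i + 1)   # treat the end point of the jump as the error
    half = window // 2
    out = list(zs)
    for i in flagged:
        lo, hi = max(0, i - half), min(len(zs), i + half + 1)
        out[i] = median(zs[lo:hi])  # window median replaces the bad value
    return out
```

A spike of 5.0 mm in an otherwise ~0.1–0.2 mm profile is detected on both of its steep flanks and pulled back to the local median.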
As a modification of the present invention, step 800 further includes the following steps:
801. fitting a plane to the pixel height values of the model matrix M, and calculating the parameters a1, a2, a3 of the fitted plane z = a1·x + a2·y + a3 by least squares;
wherein x_i, y_i are the pixel coordinates of the i-th pixel point, z_i is its height value, and n is the total number of pixel points in the matrix.
As a modification of the present invention, in step 800, the following steps are further included:
802. calculating the corrected height value of each pixel point according to
h_i = z_i − a1·x_i − a2·y_i − a3,
completing the correction of the shooting angle error;
wherein z_i and h_i are the height values of the i-th pixel point before and after correction of the shooting angle, in mm.
As an improvement of the present invention, in step 900 the structure depth H_p of the asphalt pavement, in mm, is calculated as
H_p = (1 / (m·n)) · Σ (h_max − h_i);
wherein h_max is the maximum pixel height value in mm, h_i is the height value of the i-th pixel point in mm, and m and n are the numbers of rows and columns of the model matrix M.
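Under the variables listed (h_max, h_i and the m × n model matrix), the structure depth reduces to the mean drop of the corrected heights below the highest point. A short Python sketch, assuming that reading of the formula:

```python
def texture_depth(heights_mm):
    """Mean texture depth H_p: the average drop of each corrected
    pixel height below the highest point of the model matrix
    (all values in mm); heights_mm is an m x n nested list."""
    flat = [h for row in heights_mm for h in row]
    h_max = max(flat)
    return sum(h_max - h for h in flat) / len(flat)
```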
According to the invention, a left camera and a right camera acquire a left color image and a right color image of the asphalt pavement; gray processing, distortion correction and stereo correction are carried out in turn; the three-dimensional model of the road surface is recovered; stereo matching errors are eliminated; the shooting angle error of the cameras is corrected; and finally the structure depth of the asphalt pavement is calculated. The detection is little affected by illumination or by the pavement's own color, and while measuring the structure depth the method also recovers a three-dimensional model of the road surface, presenting the technical condition of the pavement intuitively for inspectors. It overcomes the need for the special, expensive equipment of the laser texture depth meter method, since detection can be completed with a pair of ordinary camera lenses, and it avoids the low speed and strong subjectivity of the traditional manual and electric sand patch methods. The method is therefore fast, efficient, resistant to interference, economical, and more accurate.
Drawings
FIG. 1 is a block diagram of the process of the present invention.
FIG. 2 is a checkerboard for calibration of the present invention.
Fig. 3 is a schematic plan view of the left and right cameras of the present invention.
Fig. 4 is a left gray image and a right gray image after gray processing in the present invention.
Fig. 5 is a second corrected left grayscale image and a second corrected right grayscale image after distortion correction and stereo correction are completed in the present invention.
Fig. 6 is a road surface model diagram of the detected area recovered after the stereo matching is completed in the present invention.
Fig. 7 is a road surface model diagram of the measured area after eliminating the value of the stereo matching error in the present invention.
Fig. 8 is a road surface model diagram of the measured area after correcting the camera shooting angle error in the invention.
Detailed Description
In the description of the present invention, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the device or assembly referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; the two components can be directly connected or indirectly connected through an intermediate medium, and the two components can be communicated with each other. The specific meaning of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Referring to fig. 1, fig. 1 shows a flow chart of a binocular vision-based asphalt pavement structure depth detection method, which includes the following steps:
100. acquiring internal parameters and external parameters of a left camera and a right camera;
200. respectively acquiring a left color image and a right color image of the asphalt pavement by using a left camera and a right camera;
300. processing the left color image and the right color image into a left gray image and a right gray image respectively (see fig. 4);
400. respectively carrying out distortion correction on the left gray image and the right gray image according to internal parameters of a left camera and a right camera to obtain a first corrected left gray image and a first corrected right gray image;
500. respectively performing stereo correction on the first corrected left gray image and the first corrected right gray image according to internal parameters and external parameters of the left camera and the right camera to obtain a second corrected left gray image and a second corrected right gray image (please refer to fig. 5);
600. performing stereo matching on the second correction left gray image and the second correction right gray image (please refer to fig. 6), identifying corresponding pixel points on the second correction left gray image and the second correction right gray image, calculating a parallax value d of the pixel points, calculating a height value of each pixel point on the image from a camera plane under a camera coordinate system according to the parallax value d, generating a model matrix M containing pixel coordinates of each pixel point on the image and height value information corresponding to the pixel coordinates, and recovering a three-dimensional model of the road surface;
700. setting a threshold value for a difference quotient value of the height values of two adjacent pixel points in the model matrix, determining the position of a stereo matching error point, and correcting the stereo matching error value by using a median filtering window to eliminate the stereo matching error value (see fig. 7);
800. performing plane fitting on the model matrix M, subtracting the fitting plane from the model matrix M, and correcting a shooting angle error caused by the fact that the optical axis of the camera is not perpendicular to the road surface when an image is acquired (see fig. 8);
900. and calculating the construction depth of the asphalt pavement.
In step 100 of the method, two cameras of identical specification, with imaging planes parallel, coplanar and row-aligned, and mounted a fixed distance apart, are selected as the binocular camera pair. A world coordinate system is established with the measuring platform as origin, the binocular cameras are calibrated by the Zhang Zhengyou calibration method, and the internal and external parameters of the two cameras are solved. Note that the internal parameters of a camera depend only on its own specification and are fixed when the camera leaves the factory, while the external parameters depend only on the relative position of the left and right cameras. The internal parameters comprise the camera focal length f, the scale factor z_c and the principal point position (u0, v0), and reflect the internal structure of the camera. The external parameters comprise the rotation matrix R and the translation vector T of the camera relative to the checkerboard, and reflect the relative position relationship between the cameras.
Further, the cameras are calibrated by the Zhang Zhengyou method: the cameras photograph a group of checkerboard images (see FIG. 2), the checkerboard corner points are identified on the digital images by image recognition, the correspondence between corner points in the digital image and corner points in the real world is established, and the internal and external parameters of the cameras are solved. The method comprises the following steps:
101. let a point have coordinates P(X, Y, Z) in the world coordinate system and pixel coordinates p(u, v) in the image; the conversion from world coordinates to pixel coordinates is
z_c · [u, v, 1]^T = K · [R | T] · [X, Y, Z, 1]^T, with a_x = z_c·f and a_y = z_c·f;
wherein K is the internal parameter matrix of the camera, (u0, v0) are the pixel coordinates of the principal point, a_x and a_y are the focal length parameters, R is the 3 × 3 rotation matrix of the camera relative to the checkerboard, T is the 3 × 1 translation vector of the camera relative to the checkerboard, z_c is the scale factor, and f is the camera focal length.
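The world-to-pixel conversion of step 101 can be illustrated with a small pure-Python projection; the intrinsic values used below (focal length parameter 800, principal point (320, 240)) are made-up numbers for the example, not calibration results from the patent:

```python
def project_point(K, R, T, P):
    """Project a world point P = (X, Y, Z) to pixel coordinates (u, v)
    via  z_c * [u, v, 1]^T = K [R | T] [X, Y, Z, 1]^T."""
    # camera coordinates: Pc = R @ P + T
    Pc = [sum(R[i][j] * P[j] for j in range(3)) + T[i] for i in range(3)]
    # homogeneous image coordinates, then divide by the depth z_c = uvw[2]
    uvw = [sum(K[i][j] * Pc[j] for j in range(3)) for i in range(3)]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

K = [[800, 0, 320], [0, 800, 240], [0, 0, 1]]   # assumed intrinsics
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]           # camera aligned with world
u, v = project_point(K, I, [0, 0, 0], [0.1, -0.05, 2.0])
```

With these numbers the point 2 m in front of the camera projects to (360, 220), i.e. offset from the principal point in proportion to X/Z and Y/Z.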
102. assuming the checkerboard plane coincides with the Z = 0 plane of the world coordinate system, setting Z = 0 converts the above equation into
z_c · [u, v, 1]^T = K · [r1 r2 T] · [X, Y, 1]^T, i.e. H = K · [r1 r2 T];
wherein H is the 3 × 3 homography matrix and r1, r2 are the first and second columns of the camera rotation matrix R.
The homography matrix contains all the internal and external parameters of the camera. Writing H as three column vectors [h1 h2 h3], the homography matrix is solved using the constraint conditions of the coordinate conversion.
103. shooting a group of checkerboard images by using a binocular camera, identifying angular points of the checkerboard by using an image identification technology, calculating a homography matrix by using angular points P (u, v) under a pixel coordinate system and angular points P (X, Y, Z) under a world coordinate system as known values, and solving all internal parameters and external parameters of the camera.
In step 200 of the method, as shown in FIG. 3, the left and right cameras 1 are of identical specification, their imaging planes are parallel, coplanar and row-aligned, and they are mounted a fixed distance apart, forming a set of binocular cameras. The binocular cameras are installed vertically above the asphalt pavement 2 at a certain height; a computer controls the left and right cameras 1 to photograph simultaneously, the structure depth of the asphalt pavement 2 is detected from the two digital images obtained, and the region where the shooting areas of the two cameras 1 overlap is the detected area 3. The binocular cameras must have fixed focal lengths; lenses with an auto-zoom function must not be used. The left and right cameras 1 are installed at a predetermined height above the asphalt pavement 2, with their optical axes perpendicular to the pavement 2 and their imaging areas overlapping (see FIG. 3).
In the step 300 of the method, the method further includes the following steps:
301. converting the left color image and the right color image, via their red (R), green (G) and blue (B) channels and the following formula, into a left single-channel gray image and a right single-channel gray image;
f(x,y)=R(x,y)×0.299+G(x,y)×0.587+B(x,y)×0.114;
wherein f (x, y) is the gray value of the pixel point, and R (x, y), G (x, y) and B (x, y) are the values of the red, green and blue channels of the pixel point respectively.
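Step 301's weighted conversion (the ITU-R BT.601 luma weights) can be sketched in Python as follows; the nested-list image representation is an assumption for illustration:

```python
def to_gray(rgb_image):
    """Convert an RGB image (nested lists of (R, G, B) tuples) to a
    single-channel grayscale image using the weights of step 301:
    f = 0.299*R + 0.587*G + 0.114*B."""
    return [[r * 0.299 + g * 0.587 + b * 0.114 for (r, g, b) in row]
            for row in rgb_image]
```

White maps to 255, black to 0, and a pure-red pixel to 255 × 0.299 = 76.245, reflecting the eye's lower sensitivity to red than to green.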
302. And denoising the left single-channel gray image and the right single-channel gray image by using median filtering to obtain a left gray image and a right gray image (please refer to fig. 4).
Median filtering denoising slides a 3 × 3 square two-dimensional window over the image, placing the gray value to be processed at the centre of the window. All gray values in the window are sorted from small to large and their median is computed. If the gray value being processed equals the maximum or the minimum of the window, it is judged abnormal, replaced by the median, and the replaced value is output; otherwise it is judged normal and the original gray value is output.
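A direct Python sketch of this conditional 3 × 3 median rule (replace only when the centre value equals the window maximum or minimum; border pixels are left unchanged here, an assumption the description does not spell out):

```python
from statistics import median

def median_denoise(img):
    """Conditional 3x3 median filter: a pixel is replaced by the window
    median only when its value equals the window maximum or minimum
    (i.e. it is judged abnormal); img is a nested list of gray values."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = [img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            v = img[i][j]
            if v == max(win) or v == min(win):
                out[i][j] = median(win)   # abnormal value: take window median
    return out
```

Compared with an unconditional median filter, this variant removes isolated impulse noise while leaving genuine local texture (values strictly between the window extremes) untouched.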
In step 400 of the method, the method further includes the following steps:
401. determining the camera distortion coefficients k1, k2 from the internal parameters according to the radial distortion model
u' = u + (u − u0)(k1·r² + k2·r⁴), v' = v + (v − v0)(k1·r² + k2·r⁴), with r² = x² + y²;
wherein k1, k2 are the distortion coefficients of the camera, (u, v) are the undistorted pixel coordinates, (x, y) are the undistorted continuous image coordinates, (u0, v0) are the pixel coordinates of the principal point, and (u', v') are the distorted pixel coordinates. Distortion correction removes the barrel or pincushion distortion that may appear in the image, on the basis of the camera's internal parameters.
402. using the obtained distortion coefficients k1, k2, distortion correction is performed on the left gray image and the right gray image by inverting this model, mapping each distorted pixel back to its undistorted position.
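The patent's correction formulas are given as figures; assuming the standard two-term radial model implied by the variables (k1, k2 and the principal point (u0, v0)), the inverse mapping can be approximated by fixed-point iteration. This is a sketch under that assumption, not the patent's exact procedure:

```python
def undistort_point(u_d, v_d, u0, v0, k1, k2):
    """Map a distorted pixel (u_d, v_d) back toward its undistorted
    position, assuming the radial model
        u_d - u0 = (u - u0) * (1 + k1*r^2 + k2*r^4),
    and solving for (u, v) by fixed-point iteration."""
    x, y = u_d - u0, v_d - v0        # initial guess: the distorted offsets
    for _ in range(20):              # iterate toward the undistorted offset
        r2 = x * x + y * y
        scale = 1 + k1 * r2 + k2 * r2 * r2
        x, y = (u_d - u0) / scale, (v_d - v0) / scale
    return u0 + x, v0 + y
```

For small coefficients the iteration converges in a handful of steps; e.g. a point distorted from 100 to 100.1 pixels off-axis (k1 = 1e-7, k2 = 0) is recovered to within a thousandth of a pixel.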
in the step 500 of the method, stereo correction refers to correcting the relative positions of the left camera and the right camera, the left camera and the right camera have errors in the installation process, and two imaging planes cannot be completely parallel and coplanar and aligned, so that two images need to be stereo corrected. And correcting the image by using the camera external parameter obtained by calibration and using a Bouguet algorithm. Also comprises the following steps:
501. determining the relative position relationship between the left camera and the right camera, wherein the formula is as follows:
R=R r R l T;
T=T r -RT l ;
wherein R is a 3 × 3 relative rotation matrix between the left and right cameras, T is a 3 × 1 relative translation vector between the left and right cameras, R l 、R f A 3 × 3 rotation matrix, T, of the left and right cameras, respectively, with respect to the checkerboard l 、T r Are respectively a left part and a right partA 3 x 1 translation vector of the table camera relative to the checkerboard;
502. decomposing the relative rotation matrix into a composite rotation matrix r for each of the left and right images using a Rodrigues transform l 、r r ;
503. Calculating respective rotation matrix R of the left image and the right image lt 、R rt The left image is based on the rotation matrix R lt Rotate, the right image according to the rotation matrix R rt And rotating to enable polar lines of the two images to be horizontal and poles to be at infinity, and finishing the three-dimensional correction. The formula is as follows:
R lt =R rect r l ;
R rt =R rect r r ;
R rect =[e 1 e 2 e 3 ];
e 3 =e 1 ×e 2 ;
in the formula, R_lt and R_rt are the 3 × 3 rotation matrices of the left and right images respectively, R is the 3 × 3 relative rotation matrix between the left and right cameras, T is the 3 × 1 relative translation vector between the left and right cameras, r_l and r_r are the composite rotation matrices of the left and right images respectively, and T_x and T_y are the components of the relative translation vector T in the directions of the horizontal and vertical axes, respectively. Please refer to fig. 5 for the corrected image.
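The construction of R_rect from the basis vectors e_1, e_2, e_3 can be sketched as follows, using the usual Bouguet convention in which the three vectors form the rows of the rectifying rotation (splitting R into the half-rotations r_l and r_r via the Rodrigues transform is typically delegated to a routine such as OpenCV's `stereoRectify`):

```python
import numpy as np

def bouguet_rect_rotation(T):
    """Rectifying rotation R_rect = [e1; e2; e3] from the relative translation T.

    e1 points along the baseline, e2 is orthogonal to e1 within the image
    plane, and e3 = e1 x e2 completes the right-handed frame. Assumes a
    nonzero baseline (T_x, T_y not both zero).
    """
    Tx, Ty, _ = np.ravel(T)
    e1 = np.ravel(T) / np.linalg.norm(T)
    e2 = np.array([-Ty, Tx, 0.0]) / np.hypot(Tx, Ty)
    e3 = np.cross(e1, e2)
    return np.vstack([e1, e2, e3])
```

For a purely horizontal baseline the result is the identity, and the matrix is orthonormal for any valid T.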
In the step 600 of the method, the method further includes the following steps:
601. after traversing each pixel point on the image by using a semi-global matching algorithm (SGBM), identifying the same pixel point on the second corrected left gray image and the second corrected right gray image, and calculating the parallax value d of the pixel point, in mm;
wherein x_l and x_r are the pixel abscissas of the same pixel point on the second corrected left and right gray images, respectively, and z_c is a scale factor;
602. calculating the height value z of each pixel point from the camera plane by using the following formula:
z = f T_x / d;
wherein T_x is the component of the relative translation vector T in the direction of the horizontal axis, in mm, and represents the horizontal distance between the left and right cameras; f is the focal length of the cameras, in mm; d is the parallax value, in mm. It can be seen that the larger the parallax value d, the closer the pixel point is to the camera, and the smaller the parallax value d, the farther the pixel point is from the camera.
603. Combining the height values of all the pixel points into a model matrix M, and recovering the three-dimensional model of the road surface (see fig. 6). The model matrix M contains the pixel coordinates of each pixel point on the image and the height value corresponding to each pixel coordinate.
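In practice the disparity map of step 601 would come from an SGBM implementation such as OpenCV's `cv2.StereoSGBM_create`; step 602 then converts disparity to height elementwise. A minimal numpy sketch, assuming the triangulation relation z = f·T_x/d stated above, with hypothetical values f = 8 mm and T_x = 120 mm:

```python
import numpy as np

def disparity_to_height(d, f, Tx):
    """Step 602: height z of each pixel from the camera plane.

    z = f * Tx / d, so a larger disparity d means the point is closer
    to the cameras. Zero or negative disparities (failed matches) are
    masked out as NaN rather than dividing by zero.
    """
    d = np.asarray(d, dtype=float)
    z = np.full_like(d, np.nan)
    valid = d > 0
    z[valid] = f * Tx / d[valid]
    return z  # this array of heights is the model matrix M

# Example with assumed values: f = 8 mm focal length, Tx = 120 mm baseline.
heights = disparity_to_height(np.array([[2.0, 4.0, 0.0]]), 8.0, 120.0)
```

Doubling the disparity halves the height, matching the inverse relation the text describes.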
In the step 700 of the method, the method further includes the following steps:
701. calculating the first-order difference quotient of the height values of adjacent pixel points according to the following formula, and determining the positions of the stereo-matching-error pixel points;
k = (z_{i+1} - z_i) / (x_{i+1} - x_i);
wherein k is the first-order difference quotient of the pixel points, x_i and x_{i+1} are the pixel abscissas of the i-th and (i+1)-th pixel points, and z_i and z_{i+1} are the height values of the i-th and (i+1)-th pixel points, in mm.
702. Defining a point whose first-order difference quotient is larger than 1 as a mismatched point. Using a 7 × 7 filtering window with the mismatched point at its center, the height values of all pixel points in the window are sorted from small to large, the median of these height values is calculated, the mismatched value is replaced by this median, and the replaced pixel point height values are output, yielding a road surface model map with the stereo-matching error values eliminated, as shown in fig. 7.
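Steps 701–702 can be sketched on the model matrix M as follows, assuming unit pixel spacing so that the difference quotient along a row reduces to the height difference between horizontal neighbours:

```python
import numpy as np

def remove_matching_errors(M, threshold=1.0, win=7):
    """Steps 701-702: replace mismatched heights by a local median.

    A pixel whose first-order difference quotient along a row,
    k = (z[i+1] - z[i]) / (x[i+1] - x[i]), exceeds the threshold in
    absolute value is treated as a stereo-matching error and replaced
    by the median of the win x win window centred on it.
    """
    M = np.asarray(M, dtype=float)
    out = M.copy()
    k = np.abs(np.diff(M, axis=1))       # x-step between neighbours is 1
    bad = np.zeros(M.shape, dtype=bool)
    bad[:, 1:] = k > threshold
    r = win // 2
    for i, j in zip(*np.nonzero(bad)):
        window = M[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
        out[i, j] = np.median(window)    # median of the 7x7 neighbourhood
    return out
```

An isolated spike in an otherwise flat patch is flattened back to the surrounding height, which is the behaviour the text describes.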
In the step 800 of the method, the method further includes the following steps:
801. performing plane fitting on the pixel point height values of the model matrix M, and calculating the parameters a_1, a_2, a_3 of the fitting plane according to the following formula;
wherein x_i and y_i are the pixel coordinates of the i-th pixel point, z_i is the height value of the i-th pixel point, in mm, and n is the total number of pixel points in the matrix.
802. Calculating the corrected height value of each pixel point by using the following formula to complete the correction of the shooting angle error;
h_i = z_i - a_1 x_i - a_2 y_i - a_3;
wherein z_i and h_i are the height values of the i-th pixel point before and after correcting the shooting angle, respectively, in mm; the corrected road surface model map is shown in fig. 8.
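Steps 801–802 together amount to fitting a plane z = a_1 x + a_2 y + a_3 and subtracting it; a compact numpy sketch of this fit-and-subtract, assuming an ordinary least-squares fit (the formula image for the parameters is not reproduced in the text):

```python
import numpy as np

def correct_shooting_angle(M):
    """Steps 801-802: least-squares plane fit and angle correction.

    Fits z = a1*x + a2*y + a3 to the heights in M, where (x, y) are the
    pixel coordinates, then returns h = z - a1*x - a2*y - a3 so that
    the mean road plane becomes horizontal.
    """
    M = np.asarray(M, dtype=float)
    rows, cols = M.shape
    y, x = np.mgrid[0:rows, 0:cols]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(M.size)])
    (a1, a2, a3), *_ = np.linalg.lstsq(A, M.ravel(), rcond=None)
    return M - (a1 * x + a2 * y + a3)
```

Applied to a model matrix that is itself an exact plane, the corrected heights come out as zero everywhere, confirming that a pure camera tilt is removed entirely.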
In the above step 900 of the method, the construction depth H_p of the asphalt pavement, in mm, is calculated according to the following formula;
wherein h_max is the maximum pixel point height value, in mm, h_i is the height value of the i-th pixel point, in mm, and m and n are the numbers of rows and columns of the model matrix M.
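The formula of step 900 is reproduced only as an image in the original; given the quantities it names (h_max, h_i, and the m × n size of M), one consistent reading, sketched here as an assumption rather than the patent's literal formula, is the mean depth of all points below the highest point:

```python
import numpy as np

def construction_depth(H):
    """Step 900 (assumed form): mean depth below the highest point.

    Assumes H_p = (1 / (m * n)) * sum over all pixels of (h_max - h_i),
    with H the m x n corrected height matrix in mm. The function name
    and this reading are illustrative, not quoted from the patent.
    """
    H = np.asarray(H, dtype=float)
    return float(np.mean(H.max() - H))
```

Under this reading, a perfectly smooth surface gives H_p = 0 and deeper texture gives a larger value, which matches the physical meaning of construction (texture) depth.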
In order to verify the effectiveness of the invention, the structural depth of the asphalt pavement was detected with the invention, the collected image information of 30 measuring points was analyzed and calculated, and the calculation results were compared with the detection results of the manual sand patch method; the results are shown in Table 1.
As can be seen from Table 1, the maximum relative error of the test results of 30 measuring points is-8.45%, the average relative error is 3.04%, and the correlation coefficient is 0.933.
According to the invention, a left camera and a right camera respectively acquire a left color image and a right color image of the asphalt pavement; gray processing, distortion correction, and stereo correction are performed in sequence; the three-dimensional model of the pavement is restored; stereo-matching error values are eliminated; the shooting angle error of the cameras is corrected; and finally the construction depth of the asphalt pavement is calculated. The detection is little affected by illumination and the pavement's own color. Besides measuring the structural depth of the asphalt pavement, the method recovers a three-dimensional model of the pavement, reflecting the technical condition of the asphalt pavement more intuitively for the reference of detection personnel. It overcomes the defect that the laser texture depth meter method requires special, expensive equipment, since detection can be completed with a pair of ordinary camera lenses, and it overcomes the defects that the traditional manual sand patch method and the electric sand patch method are slow and strongly influenced by human subjectivity. The method therefore has the advantages of high speed and efficiency, resistance to interference, economical price, and more accurate detection results.
It should be noted that the detailed explanation of the above embodiments is only intended to explain the present invention so that it can be better understood, and these descriptions should not be construed as limiting the present invention for any reason. In particular, features described in different embodiments can be combined with each other to constitute other embodiments, and except where explicitly stated to the contrary, a feature should be understood as being applicable to any embodiment and not limited only to the described one.
Claims (10)
1. A binocular vision-based asphalt pavement structure depth detection method is characterized by comprising the following steps:
100. acquiring internal parameters and external parameters of a left camera and a right camera;
200. respectively acquiring a left color image and a right color image of the asphalt pavement by using a left camera and a right camera;
300. processing the left color image and the right color image into a left gray image and a right gray image respectively;
400. respectively carrying out distortion correction on the left gray image and the right gray image according to internal parameters of the left camera and the right camera to obtain a first corrected left gray image and a first corrected right gray image;
500. respectively carrying out three-dimensional correction on the first corrected left gray image and the first corrected right gray image according to internal parameters and external parameters of a left camera and a right camera to obtain a second corrected left gray image and a second corrected right gray image;
600. performing stereo matching on the second correction left gray image and the second correction right gray image, identifying corresponding pixel points on the second correction left gray image and the second correction right gray image, calculating a parallax value d, calculating a height value of each pixel point on the image from a camera plane under a camera coordinate system according to the parallax value d, generating a model matrix M containing pixel coordinates of each pixel point on the image and height value information corresponding to the pixel coordinates, and recovering a three-dimensional model of the road surface;
700. setting a threshold value for a difference quotient value of the height values between two adjacent pixel points in the model matrix, determining the position of a stereo matching error value, and correcting the stereo matching error value by using a median filtering window to eliminate the stereo matching error value;
800. carrying out plane fitting on the model matrix M, subtracting the model matrix M from a fitting plane, and correcting a shooting angle error caused by incomplete perpendicularity of an optical axis of the camera and a road surface when an image is acquired;
900. and calculating the construction depth of the asphalt pavement.
2. The binocular vision based asphalt pavement structure depth detection method according to claim 1, wherein in the step 300, the method further comprises the steps of:
301. converting the left color image and the right color image into a left single-channel gray image and a right single-channel gray image by respectively calculating them from the three channels red (R), green (G) and blue (B) according to the following formula;
f(x,y)=R(x,y)×0.299+G(x,y)×0.587+B(x,y)×0.114;
wherein, f (x, y) is the gray value of the pixel point, and R (x, y), G (x, y) and B (x, y) are the values of the red, green and blue channels of the pixel point respectively.
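The luminance formula above can be sketched in a few lines (img is assumed to be an H × W × 3 array with channels ordered R, G, B):

```python
import numpy as np

def to_gray(img):
    """Claim 2 luminance formula: f = 0.299*R + 0.587*G + 0.114*B.

    img: H x W x 3 array with channels ordered (R, G, B).
    Returns the H x W single-channel gray image.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return img @ weights  # weighted sum over the last (channel) axis
```

Since the weights sum to 1, a pure white pixel (255, 255, 255) maps to gray value 255, as expected.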
3. The binocular vision-based asphalt pavement structure depth detection method according to claim 2, wherein in the step 300, the method further comprises the steps of:
302. and denoising the left single-channel gray image and the right single-channel gray image by using median filtering to obtain a left gray image and a right gray image.
4. The binocular vision-based asphalt pavement structure depth detection method according to claim 1, wherein in the step 400, the method further comprises the steps of:
401. determining the distortion coefficients k_1, k_2 from the internal parameters of the camera according to the following formula:
wherein k_1 and k_2 are the distortion coefficients of the camera, (u, v) are the undistorted pixel coordinates, (x, y) are the undistorted continuous pixel coordinates, (u_0, v_0) are the pixel coordinates of the principal point of the camera, and (u_d, v_d) are the distorted pixel coordinates;
402. using the resulting camera distortion coefficients k_1 and k_2, performing distortion correction on the left gray image and the right gray image respectively according to the following formula:
5. the binocular vision-based asphalt pavement structure depth detection method according to claim 1, further comprising, in the step 500, the steps of:
501. determining the relative position relationship between the left camera and the right camera:
502. decomposing the relative rotation matrix into composite rotation matrices r_l and r_r for the left and right images, respectively, using the Rodrigues transform;
503. calculating the respective rotation matrices R_lt and R_rt of the left and right images; the left image is rotated according to the rotation matrix R_lt and the right image according to the rotation matrix R_rt, so that the epipolar lines of the two images become horizontal and the epipoles move to infinity, completing the stereo correction.
6. The binocular vision-based asphalt pavement structure depth detection method according to claim 1, further comprising, in the step 600, the steps of:
601. after traversing each pixel point on the image by using a semi-global matching algorithm (SGBM), identifying the same pixel point on the second corrected left gray image and the second corrected right gray image, and calculating the parallax value d of the pixel point;
wherein x_l and x_r are the pixel abscissas of the same pixel point on the second corrected left and right gray images, respectively, and z_c is a scale factor;
602. calculating the height value z of each pixel point from the camera plane by using the following formula:
z = f T_x / d;
wherein T_x is the component of the relative translation vector T in the direction of the horizontal axis, in mm, and represents the horizontal distance between the left and right cameras; f is the focal length of the cameras, in mm; d is the parallax value, in mm;
603. and combining the height values of all the pixels into a model matrix M, and recovering the three-dimensional model of the road surface, wherein the model matrix M is a matrix containing the pixel coordinates of each pixel on the image and the corresponding pixel point height value.
7. The binocular vision-based asphalt pavement structure depth detection method according to claim 1, wherein in the step 700, the method further comprises the steps of:
701. calculating the first-order difference quotient of the height values of adjacent pixel points according to the following formula, and determining the positions of the stereo-matching-error pixel points;
k = (z_{i+1} - z_i) / (x_{i+1} - x_i);
wherein k is the first-order difference quotient of the pixel points, x_i and x_{i+1} are the pixel abscissas of the i-th and (i+1)-th pixel points, and z_i and z_{i+1} are the height values of the i-th and (i+1)-th pixel points, in mm;
702. defining a point whose first-order difference quotient is larger than 1 as a mismatched point, using a 7 × 7 filtering window with the mismatched point at its center, sorting the height values of all the points in the window from small to large, calculating the median of the height values in the window, replacing the mismatched value with this median, and outputting the replaced pixel point height values.
8. The binocular vision-based asphalt pavement structure depth detection method according to claim 1, further comprising, in the step 800, the steps of:
801. performing plane fitting on the pixel point height values of the model matrix M, and calculating the parameters a_1, a_2, a_3 of the fitting plane according to the following formula;
wherein x_i and y_i are the pixel coordinates of the i-th pixel point, z_i is the height value of the i-th pixel point, and n is the total number of pixel points in the matrix.
9. The binocular vision based asphalt pavement structure depth detection method according to claim 8, further comprising, in the step 800, the steps of:
802. calculating the corrected height value of each pixel point by using the following formula to complete the correction of the shooting angle error;
h_i = z_i - a_1 x_i - a_2 y_i - a_3;
wherein z_i and h_i are the height values of the i-th pixel point before and after correcting the shooting angle, respectively, in mm.
10. The binocular vision-based asphalt pavement structure depth detection method according to claim 1, wherein in the step 900, the construction depth H_p of the asphalt pavement, in mm, is calculated according to the following formula;
wherein h_max is the maximum pixel point height value, in mm, h_i is the height value of the i-th pixel point, in mm, and m and n are the numbers of rows and columns of the model matrix M.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910053244.9A CN109919856B (en) | 2019-01-21 | 2019-01-21 | Asphalt pavement structure depth detection method based on binocular vision |
CN202310309976.6A CN116342674A (en) | 2019-01-21 | 2019-01-21 | Method for calculating asphalt pavement construction depth by three-dimensional model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910053244.9A CN109919856B (en) | 2019-01-21 | 2019-01-21 | Asphalt pavement structure depth detection method based on binocular vision |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310309976.6A Division CN116342674A (en) | 2019-01-21 | 2019-01-21 | Method for calculating asphalt pavement construction depth by three-dimensional model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109919856A CN109919856A (en) | 2019-06-21 |
CN109919856B true CN109919856B (en) | 2023-02-28 |
Family
ID=66960505
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910053244.9A Active CN109919856B (en) | 2019-01-21 | 2019-01-21 | Asphalt pavement structure depth detection method based on binocular vision |
CN202310309976.6A Pending CN116342674A (en) | 2019-01-21 | 2019-01-21 | Method for calculating asphalt pavement construction depth by three-dimensional model |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310309976.6A Pending CN116342674A (en) | 2019-01-21 | 2019-01-21 | Method for calculating asphalt pavement construction depth by three-dimensional model |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN109919856B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111091063B (en) * | 2019-11-20 | 2023-12-29 | 北京迈格威科技有限公司 | Living body detection method, device and system |
CN111553878A (en) * | 2020-03-23 | 2020-08-18 | 四川公路工程咨询监理有限公司 | Method for detecting paving uniformity of asphalt pavement mixture based on binocular vision |
CN111862234B (en) * | 2020-07-22 | 2023-10-20 | 中国科学院上海微系统与信息技术研究所 | Binocular camera self-calibration method and system |
CN112819820B (en) * | 2021-02-26 | 2023-06-16 | 大连海事大学 | Road asphalt repairing and detecting method based on machine vision |
CN117649454B (en) * | 2024-01-29 | 2024-05-31 | 北京友友天宇系统技术有限公司 | Binocular camera external parameter automatic correction method and device, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102635056A (en) * | 2012-04-01 | 2012-08-15 | 长安大学 | Measuring method for construction depth of asphalt road surface |
CN104775349A (en) * | 2015-02-15 | 2015-07-15 | 云南省交通规划设计研究院 | Tester and measuring method for structural depth of large-porosity drainage asphalt pavement |
CN105205822A (en) * | 2015-09-21 | 2015-12-30 | 重庆交通大学 | Real-time detecting method for asphalt compact pavement segregation degree |
CN105225482A (en) * | 2015-09-02 | 2016-01-06 | 上海大学 | Based on vehicle detecting system and the method for binocular stereo vision |
CN106845424A (en) * | 2017-01-24 | 2017-06-13 | 南京大学 | Road surface remnant object detection method based on depth convolutional network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8070628B2 (en) * | 2007-09-18 | 2011-12-06 | Callaway Golf Company | Golf GPS device |
2019
- 2019-01-21 CN CN201910053244.9A patent/CN109919856B/en active Active
- 2019-01-21 CN CN202310309976.6A patent/CN116342674A/en active Pending
Non-Patent Citations (2)
Title |
---|
"Inflight helicopter blade track measurement using computer vision";Akhtar Hanif等;《2014 IEEE REGION 10 SYMPOSIUM》;20141231;56-61 * |
"基于数字图像技术的沥青混凝土构造深度检测研究";何力;《北方交通》;20180630;78-81 * |
Also Published As
Publication number | Publication date |
---|---|
CN116342674A (en) | 2023-06-27 |
CN109919856A (en) | 2019-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109919856B (en) | Asphalt pavement structure depth detection method based on binocular vision | |
CN110285793B (en) | Intelligent vehicle track measuring method based on binocular stereo vision system | |
CN109443245B (en) | Multi-line structured light vision measurement method based on homography matrix | |
CN102376089B (en) | Target correction method and system | |
CN106978774B (en) | A kind of road surface pit slot automatic testing method | |
CN108613628B (en) | Overhead transmission line sag measurement method based on binocular vision | |
CN108876749A (en) | A kind of lens distortion calibration method of robust | |
CN107179322A (en) | A kind of bridge bottom crack detection method based on binocular vision | |
CN105839505B (en) | The detection method and detection means of a kind of road surface breakage information of three-dimensional visualization | |
CN112902874B (en) | Image acquisition device and method, image processing method and device and image processing system | |
CN110044301B (en) | Three-dimensional point cloud computing method based on monocular and binocular mixed measurement | |
CN109191560B (en) | Monocular polarization three-dimensional reconstruction method based on scattering information correction | |
CN106709955B (en) | Space coordinate system calibration system and method based on binocular stereo vision | |
CN106225676B (en) | Method for three-dimensional measurement, apparatus and system | |
CN103234475B (en) | Sub-pixel surface morphology detecting method based on laser triangular measuring method | |
CN105989593A (en) | Method and device for measuring speed of specific vehicle in video record | |
CN111121643B (en) | Road width measuring method and system | |
CN102831601A (en) | Three-dimensional matching method based on union similarity measure and self-adaptive support weighting | |
WO2011125937A1 (en) | Calibration data selection device, method of selection, selection program, and three dimensional position measuring device | |
CN110966956A (en) | Binocular vision-based three-dimensional detection device and method | |
CN107490342A (en) | A kind of cell phone appearance detection method based on single binocular vision | |
CN111091076A (en) | Tunnel limit data measuring method based on stereoscopic vision | |
CN115330684A (en) | Underwater structure apparent defect detection method based on binocular vision and line structured light | |
CN103234483B (en) | A kind of detection method of parallelism of camera chip and device | |
CN205711654U (en) | A kind of detection device of the road surface breakage information of three-dimensional visualization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||