CN109919856A - Bituminous pavement construction depth detection method based on binocular vision - Google Patents


Info

Publication number
CN109919856A
CN109919856A (application number CN201910053244.9A)
Authority
CN
China
Prior art keywords
gray level
pixel
correction
level image
image
Prior art date
Legal status
Granted
Application number
CN201910053244.9A
Other languages
Chinese (zh)
Other versions
CN109919856B (en)
Inventor
宋永朝
何力
梁乃兴
杨良浩
祝涛
卢笑
马晨威
Current Assignee
Chongqing Jiaotong University
Original Assignee
Chongqing Jiaotong University
Priority date
Filing date
Publication date
Application filed by Chongqing Jiaotong University filed Critical Chongqing Jiaotong University
Priority to CN201910053244.9A priority Critical patent/CN109919856B/en
Priority to CN202310309976.6A priority patent/CN116342674A/en
Publication of CN109919856A publication Critical patent/CN109919856A/en
Application granted granted Critical
Publication of CN109919856B publication Critical patent/CN109919856B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06T5/70
    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering

Abstract

The present invention provides a binocular-vision-based method for detecting the construction depth of an asphalt pavement, comprising the following steps: 100. obtain the intrinsic and extrinsic parameters of the left and right cameras; 200. acquire a left color image and a right color image of the asphalt pavement with the left and right cameras respectively; 300. convert the left and right color images into a left gray-level image and a right gray-level image; 400. apply distortion correction to the left and right gray-level images to obtain a first corrected left gray-level image and a first corrected right gray-level image; 500. apply stereo rectification to the first corrected left and right gray-level images to obtain a second corrected left gray-level image and a second corrected right gray-level image; 600. perform stereo matching on the second corrected left and right gray-level images; 700. eliminate the mismatched values from the stereo matching; 800. correct the shooting-angle error; 900. calculate the construction depth of the asphalt pavement. The invention is fast and efficient, resistant to interference, economical, and yields more accurate detection results.

Description

Bituminous pavement construction depth detection method based on binocular vision
Technical field
The present invention relates to techniques for detecting the construction depth of asphalt pavements in road engineering, and in particular to a binocular-vision-based method for detecting the construction depth of an asphalt pavement.
Background art
The skid resistance of an asphalt pavement has a significant influence on traffic safety, and construction depth is an important indicator for evaluating the skid resistance of asphalt pavements. The construction depth of an asphalt pavement is the mean depth of the open voids in the rough pavement surface and reflects the roughness of the surface. Too small a construction depth reduces the skid resistance of the pavement: it not only causes vehicles to skid but also increases their braking distance, seriously affecting traffic safety.
At present, the construction depth of asphalt pavements is detected mainly by three methods: the sand patch method, the laser texture meter method and the digital image method. The sand patch method is simple in principle and convenient to carry out, but extremely time-consuming; the laser texture meter method offers high accuracy but requires special, expensive equipment; the digital image method is fast and efficient but is easily disturbed by ambient light and by the color of the pavement itself. Clearly, the existing methods for detecting the construction depth of asphalt pavements suffer from long measurement times, high cost or susceptibility to interference.
It is therefore necessary to develop a construction depth detection method for asphalt pavements that is fast, efficient, resistant to interference and economical.
Summary of the invention
To overcome the shortcomings of the prior art, the present invention provides a binocular-vision-based method for detecting the construction depth of an asphalt pavement that is fast, efficient, resistant to interference and economical.
The technical solution of the invention is a binocular-vision-based method for detecting the construction depth of an asphalt pavement, comprising the following steps:
100. obtain the intrinsic and extrinsic parameters of the left and right cameras;
200. acquire a left color image and a right color image of the asphalt pavement with the left and right cameras respectively;
300. convert the left color image and the right color image into a left gray-level image and a right gray-level image;
400. using the intrinsic parameters of the left and right cameras, apply distortion correction to the left and right gray-level images to obtain a first corrected left gray-level image and a first corrected right gray-level image;
500. using the intrinsic and extrinsic parameters of the left and right cameras, apply stereo rectification to the first corrected left and right gray-level images to obtain a second corrected left gray-level image and a second corrected right gray-level image;
600. perform stereo matching between the second corrected left gray-level image and the second corrected right gray-level image, identify corresponding pixels in the two images, calculate the disparity value d, calculate from d the distance of each image pixel from the camera plane in the camera coordinate system, and build a model matrix M containing the pixel coordinates of every image pixel and its corresponding height value, thereby recovering a three-dimensional model of the pavement surface;
700. set a threshold on the difference quotient of the height values of adjacent pixels in the model matrix to locate the stereo-matching errors, correct the erroneous values with a median filtering window, and eliminate the mismatched values;
800. fit a plane to the model matrix M and subtract the fitted plane from M, thereby correcting the shooting-angle error caused by the camera optical axis not being exactly perpendicular to the road surface during image acquisition;
900. calculate the construction depth of the asphalt pavement.
As an improvement of the invention, step 300 further comprises the following step:
301. convert the left color image and the right color image into a left single-channel gray-level image and a right single-channel gray-level image by combining their red (R), green (G) and blue (B) channels according to the following formula:
f(x, y) = R(x, y) × 0.299 + G(x, y) × 0.587 + B(x, y) × 0.114;
where f(x, y) is the gray value of a pixel and R(x, y), G(x, y), B(x, y) are the values of its red, green and blue channels respectively.
As a further improvement of the invention, step 300 also comprises the following step:
302. denoise the left and right single-channel gray-level images with a median filter to obtain the left gray-level image and the right gray-level image.
As an improvement of the invention, step 400 further comprises the following steps:
401. determine the distortion coefficients k1, k2 from the camera intrinsic parameters according to the following formula:
where k1, k2 are the distortion coefficients of the camera, u, v are the undistorted pixel coordinates, x, y are the undistorted continuous pixel coordinates, u0, v0 are the pixel coordinates of the camera principal point, and the remaining quantities in the formula are the pixel coordinates after distortion;
402. using the obtained distortion coefficients k1, k2, apply distortion correction to the left gray-level image and the right gray-level image respectively according to the following formula:
As an improvement of the invention, step 500 further comprises the following steps:
501. determine the relative positional relationship between the left and right cameras;
502. decompose the relative rotation matrix, via the Rodrigues transform, into composite rotation matrices rl and rr for the left image and the right image respectively;
503. calculate the rotation matrices Rlt and Rrt of the left and right images, rotate the left image by Rlt and the right image by Rrt so that the epipolar lines of the two images become horizontal and the epipoles move to infinity, completing the stereo rectification.
As an improvement of the invention, step 600 further comprises the following steps:
601. traverse every pixel of the image with the semi-global block matching (SGBM) algorithm to identify the same pixel in the second corrected left gray-level image and the second corrected right gray-level image, and calculate its disparity value d;
where xl, xr are the horizontal pixel coordinates of the same point in the second corrected left and right gray-level images respectively and zc is the scale factor;
602. calculate the height value z of each pixel, i.e. its distance from the camera plane, according to the following formula;
where Tx is the component of the relative translation vector T along the X axis, in mm, representing the horizontal distance between the left and right cameras, f is the camera focal length in mm, and d is the disparity value in mm;
603. assemble the height values of all pixels into a model matrix M, which contains the pixel coordinates of every image pixel and the corresponding height value, thereby recovering the three-dimensional model of the pavement surface.
As an improvement of the invention, step 700 further comprises the following steps:
701. calculate the first-order difference quotient of the height values of adjacent pixels according to the following formula to locate mismatched pixels;
k = (zi+1 − zi) / (xi+1 − xi);
where k is the first-order difference quotient of the pixel, xi, xi+1 are the horizontal pixel coordinates of the i-th and (i+1)-th pixels, and zi, zi+1 are the height values of the i-th and (i+1)-th pixels, in mm;
702. define every point whose first-order difference quotient exceeds 1 as a mismatched point; place each mismatched point at the center of a 7 × 7 filter window, sort the height values of all pixels in the window in ascending order, compute their median, replace the mismatched value with the median, and output the replaced height value.
As an improvement of the invention, step 800 further comprises the following step:
801. fit a plane to the pixel height values of the model matrix M and calculate the parameters a1, a2, a3 of the fitted plane according to the following formula:
where xi, yi are the pixel coordinates of the i-th pixel, zi is its height value, and n is the total number of pixels in the matrix.
As a further improvement of the invention, step 800 also comprises the following step:
802. calculate the corrected height value of each pixel with the following formula, completing the shooting-angle error correction;
hi = zi − a1·xi − a2·yi − a3;
where zi and hi are the height values of the i-th pixel before and after the shooting-angle correction respectively, in mm.
As an improvement of the invention, in step 900 the construction depth Hp of the asphalt pavement, in mm, is calculated according to the following formula;
where hmax is the maximum pixel height value in mm, hi is the height value of the i-th pixel in mm, and m and n are the numbers of rows and columns of the model matrix M.
Because the invention uses a pair of left and right cameras to acquire a left color image and a right color image of the asphalt pavement and then successively performs gray-level processing, distortion correction, stereo rectification, recovery of the three-dimensional pavement model, elimination of mismatched stereo values and correction of the camera shooting-angle error before finally calculating the construction depth, the detection is only slightly affected by illumination and by the color of the pavement itself. The method not only measures the construction depth but also recovers a three-dimensional model of the surface, which reflects the technical condition of the pavement more intuitively and gives inspectors a useful reference. It overcomes the drawback of the laser texture meter method, which requires special and expensive equipment, because the detection can be completed with an ordinary pair of camera lenses, and it overcomes the drawbacks of the manual and electric sand patch methods, which are slow and strongly affected by the operator. The method is therefore fast and efficient, resistant to interference, economical and more accurate.
Detailed description of the invention
Fig. 1 is a schematic flow diagram of the invention.
Fig. 2 shows the checkerboard used for calibration in the invention.
Fig. 3 is a schematic plan view of the left and right cameras of the invention in operation.
Fig. 4 shows the left and right gray-level images of the invention after gray-level processing.
Fig. 5 shows the second corrected left and right gray-level images of the invention after distortion correction and stereo rectification.
Fig. 6 is a model of the measured pavement region in the invention that still contains stereo-matching errors.
Fig. 7 is the model of the measured pavement region after the mismatched values have been eliminated.
Fig. 8 is the model of the measured pavement region after the camera shooting-angle error has been corrected.
Specific embodiment
In the description of the invention, it should be understood that terms indicating orientation or positional relationships, such as "center", "upper", "lower", "front", "rear", "left" and "right", are based on the orientations or positional relationships shown in the drawings. They are used only to simplify the description of the invention and do not indicate or imply that the devices or components referred to must have a particular orientation or be constructed and operated in a particular orientation; they should therefore not be construed as limiting the invention. In addition, the terms "first" and "second" are used only for description and should not be understood as indicating or implying relative importance.
In the description of the invention, it should also be noted that, unless otherwise expressly specified and limited, the terms "mounted", "connected" and "coupled" are to be understood broadly: a connection may be fixed, detachable or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediary, or internal between two components. Those of ordinary skill in the art can understand the specific meaning of these terms in the invention according to the specific circumstances.
Referring to Fig. 1, which shows the flow of a binocular-vision-based method for detecting the construction depth of an asphalt pavement, the method comprises the following steps:
100. obtain the intrinsic and extrinsic parameters of the left and right cameras;
200. acquire a left color image and a right color image of the asphalt pavement with the left and right cameras respectively;
300. convert the left color image and the right color image into a left gray-level image and a right gray-level image (see Fig. 4);
400. using the intrinsic parameters of the left and right cameras, apply distortion correction to the left and right gray-level images to obtain a first corrected left gray-level image and a first corrected right gray-level image;
500. using the intrinsic and extrinsic parameters of the left and right cameras, apply stereo rectification to the first corrected left and right gray-level images to obtain a second corrected left gray-level image and a second corrected right gray-level image (see Fig. 5);
600. perform stereo matching between the second corrected left gray-level image and the second corrected right gray-level image (see Fig. 6), identify corresponding pixels in the two images, calculate the disparity value d of each pixel, calculate from d the distance of each image pixel from the camera plane in the camera coordinate system, and build a model matrix M containing the pixel coordinates of every image pixel and its corresponding height value, thereby recovering the three-dimensional model of the pavement surface;
700. set a threshold on the difference quotient of the height values of adjacent pixels in the model matrix to locate the stereo-matching errors, correct the erroneous values with a median filtering window, and eliminate the mismatched values (see Fig. 7);
800. fit a plane to the model matrix M and subtract the fitted plane from M, correcting the shooting-angle error caused by the camera optical axis not being perpendicular to the road surface during image acquisition (see Fig. 8);
900. calculate the construction depth of the asphalt pavement.
In step 100 of the method, two cameras of identical specification are selected, with their imaging planes parallel, coplanar and row-aligned, and installed a certain distance apart to form a binocular camera pair. A world coordinate system is constructed with the measuring platform as the origin, the binocular cameras are calibrated with Zhang's calibration method, and the intrinsic and extrinsic parameters of the two cameras are solved. It should be noted that the intrinsic parameters depend only on the camera specification and are fixed when the camera leaves the factory, whereas the extrinsic parameters depend only on the relative position of the two cameras. The intrinsic parameters comprise the camera focal length f, the scale factor zc and the principal point coordinates u0, v0, and describe the internal structure of the camera. The extrinsic parameters comprise the rotation matrix R of the camera relative to the checkerboard and the translation vector T of the camera relative to the checkerboard, and describe the relative position of the cameras.
Specifically, the cameras are calibrated with Zhang's method: a set of checkerboard images (see Fig. 2) is captured with the cameras, the checkerboard corners in the digital images are identified by image recognition, the correspondence between the corners in the digital images and the corners in the real world is established, and the intrinsic and extrinsic parameters of the cameras are solved. This comprises the following steps:
101. let P(X, Y, Z) be a point in the world coordinate system and p(u, v) the pixel coordinates of the corresponding image point; the conversion from world coordinates to pixel coordinates is then:
z_c·[u, v, 1]^T = K·[R T]·[X, Y, Z, 1]^T,  with K = [[a_x, 0, u_0], [0, a_y, v_0], [0, 0, 1]];
where K is the intrinsic matrix of the camera, u0, v0 are the pixel coordinates of the camera principal point, a_x, a_y are the focal length parameters, R is the 3 × 3 rotation matrix of the camera relative to the checkerboard, T is the 3 × 1 translation vector of the camera relative to the checkerboard, zc is the scale factor and f is the camera focal length.
102. the world coordinate system is constructed so that the checkerboard lies in its Z = 0 plane; setting Z = 0, the conversion above reduces to:
H = K·[r1 r2 T];
where H is the 3 × 3 homography matrix and r1, r2 are the first and second columns of the camera rotation matrix R.
The homography matrix contains all the camera intrinsic and extrinsic parameters. Writing H as three column vectors [h1 h2 h3] and using the constraints of the coordinate transformation, the homography matrix is solved according to the following formula;
103. a set of checkerboard images is captured with the binocular cameras, the checkerboard corners are identified by image recognition, the corners p(u, v) in the pixel coordinate system and the corresponding corners P(X, Y, Z) in the world coordinate system are taken as known values, the homography matrix is calculated, and all the intrinsic and extrinsic camera parameters are solved.
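As a concrete illustration of steps 101 to 103, the sketch below performs the checkerboard calibration with OpenCV's implementation of Zhang's method. The patent does not name a library; the function calls (cv2.findChessboardCorners, cv2.calibrateCamera, cv2.stereoCalibrate), the image file paths, the 9 × 6 corner grid and the 25 mm square size are assumptions made only for this example.

```python
# Minimal sketch of steps 101-103: intrinsic/extrinsic calibration of the
# left/right cameras from one set of checkerboard images (Zhang's method).
import cv2
import glob
import numpy as np

PATTERN = (9, 6)    # inner corners per row/column (assumed)
SQUARE = 25.0       # checkerboard square size in mm (assumed)

# World coordinates of the corners, lying in the Z = 0 plane of the board
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, PATTERN)
    okr, cr = cv2.findChessboardCorners(gr, PATTERN)
    if okl and okr:                      # keep only pairs where both boards were found
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

size = gl.shape[::-1]                    # image size as (width, height)
# Per-camera intrinsics K and distortion coefficients (k1, k2, ...)
_, Kl, Dl, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, Kr, Dr, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
# Extrinsics between the two cameras: rotation R and translation T
_, Kl, Dl, Kr, Dr, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, Kl, Dl, Kr, Dr, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```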
In step 200 of the method, as shown in Fig. 3, the two cameras 1 have the same specification, have parallel, coplanar and row-aligned imaging planes, and are installed a fixed distance apart, so that they form a binocular camera pair. In other words, the left and right cameras 1 of the invention constitute one binocular camera pair, which is mounted vertically above the asphalt pavement 2 at a certain height. The two cameras 1 are triggered simultaneously by a computer, and the two digital images obtained are used to detect the construction depth of the asphalt pavement 2; the overlapping part of the two camera fields of view is the measured region 3. The binocular cameras must have fixed focal lengths; lenses with autozoom must not be used. It should be noted that the two cameras 1 are mounted at a certain height above the asphalt pavement 2, their optical axes are perpendicular to the pavement 2, and their fields of view overlap (see Fig. 3).
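A minimal sketch of the simultaneous acquisition in step 200 is given below, assuming the two fixed-focus cameras are exposed to the computer as OpenCV capture devices 0 and 1. The patent only requires that both cameras be triggered by the computer at the same time, so the device indices and the cv2.VideoCapture calls are illustrative assumptions.

```python
# Sketch of step 200: grab both frames back-to-back before decoding them,
# so that the two exposures are as close in time as possible.
import cv2

cap_l, cap_r = cv2.VideoCapture(0), cv2.VideoCapture(1)
cap_l.grab()
cap_r.grab()
_, left_color = cap_l.retrieve()    # left color image of the pavement
_, right_color = cap_r.retrieve()   # right color image of the pavement
cap_l.release()
cap_r.release()
```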
Step 300 of the method further comprises the following steps:
301. convert the left color image and the right color image into a left single-channel gray-level image and a right single-channel gray-level image by combining their red (R), green (G) and blue (B) channels according to the following formula:
f(x, y) = R(x, y) × 0.299 + G(x, y) × 0.587 + B(x, y) × 0.114;
where f(x, y) is the gray value of a pixel and R(x, y), G(x, y), B(x, y) are the values of its red, green and blue channels respectively.
302. denoise the left and right single-channel gray-level images with a median filter to obtain the left gray-level image and the right gray-level image (see Fig. 4).
Median filtering denoising means sliding a 3 × 3 square window over the image, placing the gray value to be processed at the center of the window, sorting all the gray values in the window in ascending order and computing their median. If the gray value to be processed equals the maximum or minimum value in the window, it is judged to be abnormal, replaced by the median, and the replaced value is output; otherwise it is judged to be normal and the original gray value is output.
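Steps 301 and 302 can be sketched as follows with NumPy and OpenCV. The weighting coefficients are the ones given above; note that cv2.medianBlur replaces every pixel with the window median, which is a simplification of the conditional (maximum/minimum-only) replacement described in the preceding paragraph, and the file names are assumptions.

```python
# Sketch of steps 301-302: weighted grayscale conversion followed by 3x3
# median filtering.
import cv2
import numpy as np

def to_gray(bgr):
    # OpenCV loads color images in B, G, R channel order
    b = bgr[..., 0].astype(np.float32)
    g = bgr[..., 1].astype(np.float32)
    r = bgr[..., 2].astype(np.float32)
    return np.clip(0.299 * r + 0.587 * g + 0.114 * b, 0, 255).astype(np.uint8)

left_gray = cv2.medianBlur(to_gray(cv2.imread("left.png")), 3)    # 3x3 window
right_gray = cv2.medianBlur(to_gray(cv2.imread("right.png")), 3)
```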
Step 400 of the method further comprises the following steps:
401. determine the distortion coefficients k1, k2 from the camera intrinsic parameters according to the following formula:
where k1, k2 are the distortion coefficients of the camera, u, v are the undistorted pixel coordinates, x, y are the undistorted continuous pixel coordinates, u0, v0 are the pixel coordinates of the camera principal point, and the remaining quantities in the formula are the pixel coordinates after distortion. It should be noted that distortion correction refers to correcting the barrel or pincushion distortion that may appear in the image, and the correction is based on the camera intrinsic parameters.
402. using the obtained distortion coefficients k1, k2, apply distortion correction to the left gray-level image and the right gray-level image respectively according to the following formula:
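The distortion-correction formula itself is not reproduced in this text, so the sketch below simply applies the calibrated radial coefficients with cv2.undistort. The tangential coefficients are set to zero because the patent uses only k1 and k2, and the variable names (Kl, Dl, left_gray, ...) carry over from the earlier sketches; they are assumptions, not part of the patent.

```python
# Sketch of steps 401-402: undistort both gray-level images using the
# intrinsic matrix and the radial coefficients k1, k2 from calibration.
import cv2
import numpy as np

# distortion vector in OpenCV order (k1, k2, p1, p2); tangential terms set to 0
dist_l = np.array([Dl[0, 0], Dl[0, 1], 0.0, 0.0])
dist_r = np.array([Dr[0, 0], Dr[0, 1], 0.0, 0.0])

first_corr_left = cv2.undistort(left_gray, Kl, dist_l)
first_corr_right = cv2.undistort(right_gray, Kr, dist_r)
```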
In step 500 of the method, stereo rectification corrects the relative position of the two cameras: because of installation errors, the two imaging planes cannot be made perfectly parallel, coplanar and row-aligned, so the two images must be rectified. The images are rectified with the Bouguet algorithm using the camera extrinsic parameters obtained from calibration. This further comprises the following steps:
501. determine the relative positional relationship between the left and right cameras according to the following formulas:
R = R_r · R_l^T;
T = T_r − R · T_l;
where R is the 3 × 3 relative rotation matrix between the two cameras, T is the 3 × 1 relative translation vector between the two cameras, Rl, Rr are the 3 × 3 rotation matrices of the left and right cameras relative to the checkerboard, and Tl, Tr are their 3 × 1 translation vectors relative to the checkerboard;
502. decompose the relative rotation matrix, via the Rodrigues transform, into composite rotation matrices rl and rr for the left image and the right image respectively;
503. calculate the rotation matrices Rlt and Rrt of the left and right images, rotate the left image by Rlt and the right image by Rrt so that the epipolar lines of the two images become horizontal and the epipoles move to infinity, completing the stereo rectification. The formulas are as follows:
Rlt = R_rect · rl;
Rrt = R_rect · rr;
R_rect = [e1 e2 e3];
e3 = e1 × e2;
where Rlt, Rrt are the 3 × 3 rotation matrices of the left and right images respectively, R is the 3 × 3 relative rotation matrix between the two cameras, T is the 3 × 1 relative translation vector between the two cameras, and rl, rr are the composite rotation matrices of the left and right images. The rectified images are shown in Fig. 5.
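Steps 501 to 503 correspond to OpenCV's Bouguet-based cv2.stereoRectify; the sketch below is an assumed implementation that reuses Kl, Dl (via dist_l), Kr, Dr (via dist_r), R, T and size from the earlier sketches rather than computing R_rect, e1, e2 and e3 by hand.

```python
# Sketch of steps 501-503: Bouguet stereo rectification. The remap tables
# produced by initUndistortRectifyMap combine the distortion correction of
# step 400 with the rectifying rotations Rlt, Rrt of step 500, so they are
# applied to the original gray-level images.
import cv2

Rlt, Rrt, Pl, Pr, Q, _, _ = cv2.stereoRectify(Kl, dist_l, Kr, dist_r, size, R, T)
map_lx, map_ly = cv2.initUndistortRectifyMap(Kl, dist_l, Rlt, Pl, size, cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(Kr, dist_r, Rrt, Pr, size, cv2.CV_32FC1)

second_corr_left = cv2.remap(left_gray, map_lx, map_ly, cv2.INTER_LINEAR)
second_corr_right = cv2.remap(right_gray, map_rx, map_ry, cv2.INTER_LINEAR)
```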
Step 600 of the method further comprises the following steps:
601. traverse every pixel of the image with the semi-global block matching (SGBM) algorithm to identify the same pixel in the second corrected left gray-level image and the second corrected right gray-level image, and calculate its disparity value d, in mm;
where xl, xr are the horizontal pixel coordinates of the same point in the second corrected left and right gray-level images respectively and zc is the scale factor;
602. calculate the height value z of each pixel, i.e. its distance from the camera plane, according to the following formula;
where Tx is the component of the relative translation vector T along the X axis, in mm, representing the distance between the two cameras, f is the camera focal length in mm, and d is the disparity value in mm. It can be seen that the larger the disparity d, the closer the pixel is to the camera, and the smaller the disparity, the farther the pixel is from the camera.
603. assemble the height values of all pixels into a model matrix M, which contains the pixel coordinates of every image pixel and the corresponding height value, thereby recovering the three-dimensional model of the pavement surface. Fig. 6 shows the model of the measured pavement region generated after stereo matching, which still contains stereo-matching errors.
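Steps 601 to 603 can be sketched with OpenCV's SGBM matcher. The formula for z is not reproduced in the text above; the code assumes the standard triangulation relation z = f · Tx / d, with f taken in pixels from the rectified projection matrix and Tx in mm from the relative translation vector, which is consistent with the symbol definitions of step 602 but is an assumption of this sketch. The SGBM parameters are illustrative.

```python
# Sketch of steps 601-603: SGBM disparity, then the height of each pixel
# from the camera plane via z = f * Tx / d, collected into the model matrix M.
import cv2
import numpy as np

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disp = sgbm.compute(second_corr_left, second_corr_right).astype(np.float32) / 16.0

f_px = Pl[0, 0]                  # rectified focal length, in pixels
Tx_mm = abs(float(T[0, 0]))      # baseline component along X, in mm

# invalid disparities are set to 0 here and cleaned up in step 700
M = np.where(disp > 0, f_px * Tx_mm / disp, 0.0)   # model matrix: height per pixel, mm
```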
Step 700 of the method further comprises the following steps:
701. calculate the first-order difference quotient of the pixel height values according to the following formula to locate mismatched pixels;
k = (zi+1 − zi) / (xi+1 − xi);
where k is the first-order difference quotient of the pixel, xi, xi+1 are the horizontal pixel coordinates of the i-th and (i+1)-th pixels, and zi, zi+1 are the height values of the i-th and (i+1)-th pixels, in mm.
702. define every point whose first-order difference quotient exceeds 1 as a mismatched point; place each mismatched point at the center of a 7 × 7 filter window, sort the height values of all pixels in the window in ascending order, compute their median, replace the mismatched value with the median, and output the replaced height value. The pavement model after the mismatched values have been eliminated and the display scale adjusted is shown in Fig. 7.
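Steps 701 and 702 might be vectorised as follows with NumPy and SciPy. The unit pixel spacing (so the difference quotient reduces to the height difference of neighbouring pixels), the use of scipy.ndimage.median_filter, and marking the pixel that follows the jump are assumptions made for this sketch.

```python
# Sketch of steps 701-702: flag height jumps whose first-order difference
# quotient exceeds 1, then replace the flagged values by a 7x7 median.
import numpy as np
from scipy.ndimage import median_filter

k = np.abs(np.diff(M, axis=1))      # |z_(i+1) - z_i| / |x_(i+1) - x_i|, unit spacing
bad = np.zeros(M.shape, dtype=bool)
bad[:, 1:] = k > 1.0                # mark mismatched points

M_clean = np.where(bad, median_filter(M, size=7), M)   # 7x7 median replacement
```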
Step 800 of the method further comprises the following steps:
801. fit a plane to the pixel height values of the model matrix M and calculate the parameters a1, a2, a3 of the fitted plane according to the following formula:
where xi, yi are the pixel coordinates of the i-th pixel, zi is its height value in mm, and n is the total number of pixels in the matrix.
802. calculate the corrected height value of each pixel with the following formula, completing the shooting-angle error correction;
hi = zi − a1·xi − a2·yi − a3;
where zi and hi are the height values of the i-th pixel before and after the shooting-angle correction respectively, in mm. The corrected pavement model is shown in Fig. 8.
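Steps 801 and 802 amount to an ordinary least-squares plane fit z = a1·x + a2·y + a3 followed by subtraction of the fitted plane. The normal-equation formula of step 801 is not reproduced above, so the sketch below solves the same least-squares problem with numpy.linalg.lstsq instead; the pixel-index coordinates are an assumption.

```python
# Sketch of steps 801-802: fit the plane z = a1*x + a2*y + a3 to the model
# matrix and subtract it to remove the camera/road tilt.
import numpy as np

m, n = M_clean.shape
ys, xs = np.mgrid[0:m, 0:n]                         # pixel coordinates of every entry
A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(m * n)])
a1, a2, a3 = np.linalg.lstsq(A, M_clean.ravel(), rcond=None)[0]

h = M_clean - (a1 * xs + a2 * ys + a3)              # corrected heights h_i, in mm
```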
In step 900 of the method, the construction depth Hp of the asphalt pavement, in mm, is calculated according to the following formula;
where hmax is the maximum pixel height value in mm, hi is the height value of the i-th pixel in mm, and m and n are the numbers of rows and columns of the model matrix M.
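The formula for Hp is not reproduced in this text. Consistent with the symbols defined above (hmax, hi and the m × n model matrix), the sketch below assumes that the construction depth is the mean depth of every pixel below the highest point of the corrected model; this interpretation is an assumption, not a statement of the patent's exact formula.

```python
# Sketch of step 900 under the stated assumption:
# Hp = (1 / (m*n)) * sum(h_max - h_i) over all pixels of the corrected model.
import numpy as np

h_max = h.max()
Hp = float(np.mean(h_max - h))      # construction depth, in mm
print(f"Construction depth Hp = {Hp:.3f} mm")
```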
To verify the effectiveness of the invention, the construction depth of an asphalt pavement was measured with the invention, the image information of 30 measuring points was collected and analysed, and the results were compared with those of the manual sand patch method, as shown in Table 1.
Table 1:
Table 1 shows that the maximum relative error over the 30 measuring points is −8.45%, the average relative error is 3.04% and the correlation coefficient is 0.933. Compared with the manual sand patch method, the detection results of the invention show small errors and high correlation, demonstrating good detection performance.
In summary, because the invention uses two cameras to acquire a left color image and a right color image of the asphalt pavement and then successively performs gray-level processing, distortion correction, stereo rectification, recovery of the three-dimensional pavement model, elimination of mismatched stereo values and correction of the camera shooting-angle error before finally calculating the construction depth, the detection is only slightly affected by illumination and by the color of the pavement itself. The method not only measures the construction depth of the asphalt pavement but also recovers a three-dimensional model of the surface, which reflects the technical condition of the pavement more intuitively and provides a reference for inspectors. It overcomes the drawback of the laser texture meter method, which requires special and expensive equipment, since the detection can be completed with an ordinary pair of camera lenses, and it overcomes the drawbacks of the manual and electric sand patch methods, which are slow and strongly affected by the operator. The method is therefore fast and efficient, resistant to interference, economical and yields more accurate results.
It should be noted that the detailed explanations of the above embodiments are intended only to explain the invention so that it can be better understood, and must not be construed as limiting it. In particular, the features described in the various embodiments may be combined with one another in any way to form further embodiments; unless expressly stated otherwise, these features should be understood to be applicable to any embodiment and not only to the embodiment in which they are described.

Claims (10)

1. A binocular-vision-based method for detecting the construction depth of an asphalt pavement, characterized by comprising the following steps:
100. obtaining the intrinsic and extrinsic parameters of the left and right cameras;
200. acquiring a left color image and a right color image of the asphalt pavement with the left and right cameras respectively;
300. converting the left color image and the right color image into a left gray-level image and a right gray-level image;
400. applying distortion correction to the left and right gray-level images according to the intrinsic parameters of the left and right cameras, to obtain a first corrected left gray-level image and a first corrected right gray-level image;
500. applying stereo rectification to the first corrected left and right gray-level images according to the intrinsic and extrinsic parameters of the left and right cameras, to obtain a second corrected left gray-level image and a second corrected right gray-level image;
600. performing stereo matching between the second corrected left gray-level image and the second corrected right gray-level image, identifying corresponding pixels in the two images, calculating the disparity value d, calculating from d the distance of each image pixel from the camera plane in the camera coordinate system, and building a model matrix M containing the pixel coordinates of every image pixel and its corresponding height value, thereby recovering a three-dimensional model of the pavement surface;
700. setting a threshold on the difference quotient of the height values of adjacent pixels in the model matrix to locate the stereo-matching errors, correcting the erroneous values with a median filtering window, and eliminating the mismatched values;
800. fitting a plane to the model matrix M and subtracting the fitted plane from M, thereby correcting the shooting-angle error caused by the camera optical axis not being exactly perpendicular to the road surface during image acquisition;
900. calculating the construction depth of the asphalt pavement.
2. The binocular-vision-based method for detecting the construction depth of an asphalt pavement according to claim 1, characterized in that step 300 further comprises the following steps:
301. converting the left color image and the right color image into a left single-channel gray-level image and a right single-channel gray-level image by combining their red (R), green (G) and blue (B) channels according to the following formula:
f(x, y) = R(x, y) × 0.299 + G(x, y) × 0.587 + B(x, y) × 0.114;
where f(x, y) is the gray value of a pixel and R(x, y), G(x, y), B(x, y) are the values of its red, green and blue channels respectively.
3. The binocular-vision-based method for detecting the construction depth of an asphalt pavement according to claim 2, characterized in that step 300 further comprises the following step:
302. denoising the left and right single-channel gray-level images with a median filter to obtain the left gray-level image and the right gray-level image.
4. The binocular-vision-based method for detecting the construction depth of an asphalt pavement according to claim 1, characterized in that step 400 further comprises the following steps:
401. determining the distortion coefficients k1, k2 from the camera intrinsic parameters according to the following formula:
where k1, k2 are the distortion coefficients of the camera, u, v are the undistorted pixel coordinates, x, y are the undistorted continuous pixel coordinates, u0, v0 are the pixel coordinates of the camera principal point, and the remaining quantities in the formula are the pixel coordinates after distortion;
402. using the obtained distortion coefficients k1, k2, applying distortion correction to the left gray-level image and the right gray-level image respectively according to the following formula:
5. The binocular-vision-based method for detecting the construction depth of an asphalt pavement according to claim 1, characterized in that step 500 further comprises the following steps:
501. determining the relative positional relationship between the left and right cameras;
502. decomposing the relative rotation matrix, via the Rodrigues transform, into composite rotation matrices rl and rr for the left image and the right image respectively;
503. calculating the rotation matrices Rlt and Rrt of the left and right images, rotating the left image by Rlt and the right image by Rrt so that the epipolar lines of the two images become horizontal and the epipoles move to infinity, completing the stereo rectification.
6. The binocular-vision-based method for detecting the construction depth of an asphalt pavement according to claim 1, characterized in that step 600 further comprises the following steps:
601. traversing every pixel of the image with the semi-global block matching (SGBM) algorithm to identify the same pixel in the second corrected left gray-level image and the second corrected right gray-level image, and calculating its disparity value d;
where xl, xr are the horizontal pixel coordinates of the same point in the second corrected left and right gray-level images respectively and zc is the scale factor;
602. calculating the height value z of each pixel, i.e. its distance from the camera plane, according to the following formula;
where Tx is the component of the relative translation vector T along the X axis, in mm, representing the horizontal distance between the two cameras, f is the camera focal length in mm, and d is the disparity value in mm;
603. assembling the height values of all pixels into a model matrix M, which contains the pixel coordinates of every image pixel and the corresponding height value, thereby recovering the three-dimensional model of the pavement surface.
7. The binocular-vision-based method for detecting the construction depth of an asphalt pavement according to claim 1, characterized in that step 700 further comprises the following steps:
701. calculating the first-order difference quotient of the height values of adjacent pixels according to the following formula to locate mismatched pixels;
k = (zi+1 − zi) / (xi+1 − xi);
where k is the first-order difference quotient of the pixel, xi, xi+1 are the horizontal pixel coordinates of the i-th and (i+1)-th pixels, and zi, zi+1 are the height values of the i-th and (i+1)-th pixels, in mm;
702. defining every point whose first-order difference quotient exceeds 1 as a mismatched point, placing each mismatched point at the center of a 7 × 7 filter window, sorting the height values of all points in the window in ascending order, computing the median of the height values in the window, replacing the mismatched value with the median, and outputting the replaced height value.
8. The binocular-vision-based method for detecting the construction depth of an asphalt pavement according to claim 1, characterized in that step 800 further comprises the following step:
801. fitting a plane to the pixel height values of the model matrix M and calculating the parameters a1, a2, a3 of the fitted plane according to the following formula:
where xi, yi are the pixel coordinates of the i-th pixel, zi is its height value, and n is the total number of pixels in the matrix.
9. The binocular-vision-based method for detecting the construction depth of an asphalt pavement according to claim 8, characterized in that step 800 further comprises the following step:
802. calculating the corrected height value of each pixel with the following formula, completing the shooting-angle error correction;
hi = zi − a1·xi − a2·yi − a3;
where zi and hi are the height values of the i-th pixel before and after the shooting-angle correction respectively, in mm.
10. The binocular-vision-based method for detecting the construction depth of an asphalt pavement according to claim 1, characterized in that in step 900 the construction depth Hp of the asphalt pavement, in mm, is calculated according to the following formula;
where hmax is the maximum pixel height value in mm, hi is the height value of the i-th pixel in mm, and m and n are the numbers of rows and columns of the model matrix M.
CN201910053244.9A 2019-01-21 2019-01-21 Asphalt pavement structure depth detection method based on binocular vision Active CN109919856B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910053244.9A CN109919856B (en) 2019-01-21 2019-01-21 Asphalt pavement structure depth detection method based on binocular vision
CN202310309976.6A CN116342674A (en) 2019-01-21 2019-01-21 Method for calculating asphalt pavement construction depth by three-dimensional model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910053244.9A CN109919856B (en) 2019-01-21 2019-01-21 Asphalt pavement structure depth detection method based on binocular vision

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310309976.6A Division CN116342674A (en) 2019-01-21 2019-01-21 Method for calculating asphalt pavement construction depth by three-dimensional model

Publications (2)

Publication Number Publication Date
CN109919856A (en) 2019-06-21
CN109919856B CN109919856B (en) 2023-02-28

Family

ID=66960505

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910053244.9A Active CN109919856B (en) 2019-01-21 2019-01-21 Asphalt pavement structure depth detection method based on binocular vision
CN202310309976.6A Pending CN116342674A (en) 2019-01-21 2019-01-21 Method for calculating asphalt pavement construction depth by three-dimensional model

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202310309976.6A Pending CN116342674A (en) 2019-01-21 2019-01-21 Method for calculating asphalt pavement construction depth by three-dimensional model

Country Status (1)

Country Link
CN (2) CN109919856B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091063A (en) * 2019-11-20 2020-05-01 北京迈格威科技有限公司 Living body detection method, device and system
CN111553878A (en) * 2020-03-23 2020-08-18 四川公路工程咨询监理有限公司 Method for detecting paving uniformity of asphalt pavement mixture based on binocular vision
CN111862234A (en) * 2020-07-22 2020-10-30 中国科学院上海微系统与信息技术研究所 Binocular camera self-calibration method and system
CN112819820A (en) * 2021-02-26 2021-05-18 大连海事大学 Pavement asphalt repair detection method based on machine vision

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117649454A (en) * 2024-01-29 2024-03-05 北京友友天宇系统技术有限公司 Binocular camera external parameter automatic correction method and device, electronic equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090305820A1 (en) * 2007-09-18 2009-12-10 Scott Denton Golf gps device
CN102635056A (en) * 2012-04-01 2012-08-15 长安大学 Measuring method for construction depth of asphalt road surface
CN104775349A (en) * 2015-02-15 2015-07-15 云南省交通规划设计研究院 Tester and measuring method for structural depth of large-porosity drainage asphalt pavement
CN105225482A (en) * 2015-09-02 2016-01-06 上海大学 Based on vehicle detecting system and the method for binocular stereo vision
CN105205822A (en) * 2015-09-21 2015-12-30 重庆交通大学 Real-time detecting method for asphalt compact pavement segregation degree
CN106845424A (en) * 2017-01-24 2017-06-13 南京大学 Road surface remnant object detection method based on depth convolutional network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AKHTAR HANIF et al.: "Inflight helicopter blade track measurement using computer vision", 2014 IEEE Region 10 Symposium *
HE Li (何力): "Research on construction depth detection of asphalt concrete based on digital image technology" (基于数字图像技术的沥青混凝土构造深度检测研究), 《北方交通》 (Northern Communications) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091063A (en) * 2019-11-20 2020-05-01 北京迈格威科技有限公司 Living body detection method, device and system
CN111091063B (en) * 2019-11-20 2023-12-29 北京迈格威科技有限公司 Living body detection method, device and system
CN111553878A (en) * 2020-03-23 2020-08-18 四川公路工程咨询监理有限公司 Method for detecting paving uniformity of asphalt pavement mixture based on binocular vision
CN111862234A (en) * 2020-07-22 2020-10-30 中国科学院上海微系统与信息技术研究所 Binocular camera self-calibration method and system
CN111862234B (en) * 2020-07-22 2023-10-20 中国科学院上海微系统与信息技术研究所 Binocular camera self-calibration method and system
CN112819820A (en) * 2021-02-26 2021-05-18 大连海事大学 Pavement asphalt repair detection method based on machine vision
CN112819820B (en) * 2021-02-26 2023-06-16 大连海事大学 Road asphalt repairing and detecting method based on machine vision

Also Published As

Publication number Publication date
CN109919856B (en) 2023-02-28
CN116342674A (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN109919856A (en) Bituminous pavement construction depth detection method based on binocular vision
CN112669393B (en) Laser radar and camera combined calibration method
CN110285793B (en) Intelligent vehicle track measuring method based on binocular stereo vision system
CN102364299B (en) Calibration technology for multiple structured light projected three-dimensional profile measuring heads
CN104240262B (en) Calibration device and calibration method for outer parameters of camera for photogrammetry
CN101692283B (en) Method for on-line self-calibration of external parameters of cameras of bionic landing system of unmanned gyroplane
CN106978774B (en) A kind of road surface pit slot automatic testing method
CN107025670A (en) A kind of telecentricity camera calibration method
CN109443245B (en) Multi-line structured light vision measurement method based on homography matrix
CN110057295A (en) It is a kind of to exempt from the monocular vision plan range measurement method as control
CN110375648A (en) The spatial point three-dimensional coordinate measurement method that the single camera of gridiron pattern target auxiliary is realized
CN101216296A (en) Binocular vision rotating axis calibration method
CN103994732B (en) A kind of method for three-dimensional measurement based on fringe projection
CN106023193B (en) A kind of array camera observation procedure detected for body structure surface in turbid media
CN106447733B (en) Method, system and device for determining cervical vertebra mobility and moving axis position
CN111091076B (en) Tunnel limit data measuring method based on stereoscopic vision
CN104568963A (en) Online three-dimensional detection device based on RGB structured light
CN109859269B (en) Shore-based video auxiliary positioning unmanned aerial vehicle large-range flow field measuring method and device
CN110966956A (en) Binocular vision-based three-dimensional detection device and method
CN112305576A (en) Multi-sensor fusion SLAM algorithm and system thereof
CN107610183A (en) New striped projected phase height conversion mapping model and its scaling method
CN110047111A (en) A kind of airplane parking area shelter bridge butting error measurement method based on stereoscopic vision
CN111009030A (en) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
CN112348775A (en) Vehicle-mounted all-round-looking-based pavement pool detection system and method
CN110044266B (en) Photogrammetry system based on speckle projection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant