CN111091076A - Tunnel clearance data measuring method based on stereoscopic vision - Google Patents

Tunnel clearance data measuring method based on stereoscopic vision

Info

Publication number
CN111091076A
Authority
CN
China
Prior art keywords
coordinate system
image
camera
point
dimensional
Prior art date
Legal status
Granted
Application number
CN201911217935.4A
Other languages
Chinese (zh)
Other versions
CN111091076B (en)
Inventor
王庆
郑江滨
盛世勇
安天平
裴宏波
李红心
周果清
王雪
Current Assignee
Northwestern Polytechnical University
China Railway Lanzhou Group Co Ltd
Original Assignee
Northwestern Polytechnical University
China Railway Lanzhou Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University, China Railway Lanzhou Group Co Ltd filed Critical Northwestern Polytechnical University
Priority to CN201911217935.4A priority Critical patent/CN111091076B/en
Publication of CN111091076A publication Critical patent/CN111091076A/en
Application granted granted Critical
Publication of CN111091076B publication Critical patent/CN111091076B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a tunnel clearance data measuring method based on stereoscopic vision. A railway tunnel detection vehicle is designed; relative coordinate systems are established and calibrated; after laser light-band image preprocessing, stereo rectification, light-band matching and three-dimensional calculation, the coordinate systems are converted, the results are corrected with the rotation and translation matrix measured by a vehicle-body pose estimation system, and standard clearance data are obtained by interpolation. The method can acquire wide-range, dense cross-section data at one time, offering high efficiency, a wide detection range and good environmental robustness; it acquires data from multiple cross sections of the same tunnel more efficiently, improving the precision of the final clearance result; and it greatly reduces the computational cost of stereo matching in the binocular calculation, further improves the accuracy of the clearance data results, and provides the capability of measuring clearance data under extremely low illumination.

Description

Tunnel clearance data measuring method based on stereoscopic vision
Technical Field
The invention relates to the fields of image processing, computer vision and tunnel clearance detection, and in particular to a tunnel clearance data measuring method.
Background
With the rapid development of China's economy, railway transportation plays a very important role in economic development, and tunnels are common sections of railway lines.
On the one hand, China suffers many geological disasters; damage to tunnels from heavy rain, flash floods and similar natural events seriously threatens the safety of railway traffic. If the relevant clearance information cannot be obtained in time, serious accidents endangering train operation can occur.
On the other hand, with the growing demand for heavy and oversized railway freight, the safety hazards of transporting out-of-gauge goods are not negligible, and clearance data must be periodically maintained and updated to ensure freight safety.
Timely knowledge of tunnel clearance data is therefore of great significance to the safe conduct of production and daily life.
Early on, clearance data were usually measured precisely with a profiler: combining cooperative-target-free laser ranging and precise angle measurement with the polar-coordinate method and computer processing yields high-precision clearance data. The profiler nevertheless has serious shortcomings: manual operation easily introduces accidental errors, and dense, wide-range clearance data cannot be acquired at one time, so measurement efficiency is low. In recent years, many researchers have studied how to measure clearance data more efficiently. Detection methods that act directly on the 3D point clouds of many photographed cross sections incur a huge computational cost, can only identify whether an intruding target exists without providing accurate section data, and, lacking an active light source, produce results highly coupled to environmental factors. Another approach uses a ring of monocular cameras mounted on the vehicle body to continuously photograph the laser band projected on the tunnel wall, achieving dynamic section acquisition and computing the result by the television measurement principle; but it essentially measures distance, requires the laser band to remain on a single cross section, cannot guarantee result accuracy, has a limited measurement range, and can only work at night.
Disclosure of Invention
In order to overcome the defects of the prior art, in particular its low precision and low efficiency, the invention provides a tunnel clearance data measuring method based on stereoscopic vision: a railway tunnel clearance data measuring method with high detection efficiency and good precision, suitable for measuring dynamic cross-section data under high-speed motion.
The technical solution adopted by the invention comprises the following steps:
S1: designing a railway tunnel detection vehicle carrying an active light source, binocular camera sets and a vehicle-body attitude estimation system, so that the vehicle can dynamically acquire clearance data with its vision system;
the active light source is a near-infrared source, and the binocular camera sets are arranged uniformly around the vehicle body; the vehicle-body attitude estimation system also uses binocular camera sets: the three-dimensional shape of the rails is measured by stereoscopic vision and compared with a standard rail file to estimate the attitude;
S2: establishing and calibrating the relative coordinate systems;
constructing the image pixel coordinate system, the image physical coordinate system and the camera coordinate system; establishing the projective conversion from image space to three-dimensional space using homogeneous coordinates; constructing the camera distortion model and the inter-camera extrinsic model; constructing the track plane center coordinate system and establishing the extrinsic conversion from the camera coordinate systems to it; and estimating the model parameters by calibration; the specific steps are as follows:
S2.1: camera projection model in homogeneous coordinates;
taking the left camera of a binocular camera set as the reference, construct the perspective projection model from a space point (X_c, Y_c, Z_c)^T in the camera coordinate system to the corresponding two-dimensional point (x, y, 1)^T in the image physical coordinate system:

$$Z_c\begin{pmatrix}x\\y\\1\end{pmatrix}=\begin{pmatrix}f&0&0\\0&f&0\\0&0&1\end{pmatrix}\begin{pmatrix}X_c\\Y_c\\Z_c\end{pmatrix}\tag{1}$$

where f is the physical focal length;
according to the camera imaging model, construct the conversion from a point (x, y, 1)^T in the image physical coordinate system to the corresponding point (u, v, 1)^T in the image pixel coordinate system:

$$\begin{pmatrix}u\\v\\1\end{pmatrix}=\begin{pmatrix}1/dx&0&c_x\\0&1/dy&c_y\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\1\end{pmatrix}\tag{2}$$

where dx, dy are the pixel sizes and c_x, c_y are the principal point offsets in the x and y directions;
combining (1) and (2) gives the intrinsic mapping from a space point (X_c, Y_c, Z_c)^T in the camera coordinate system to the corresponding homogeneous pixel coordinates (u, v, 1)^T on the image plane:

$$Z_c\begin{pmatrix}u\\v\\1\end{pmatrix}=\begin{pmatrix}f_x&0&c_x\\0&f_y&c_y\\0&0&1\end{pmatrix}\begin{pmatrix}X_c\\Y_c\\Z_c\end{pmatrix}=K\begin{pmatrix}X_c\\Y_c\\Z_c\end{pmatrix}\tag{3}$$

where f_x = f/dx and f_y = f/dy are the focal lengths in pixel units along x and y, and K is the intrinsic matrix. Setting the XOY plane of the world coordinate system as the calibration-object plane, i.e. the plane Z = 0, construct the projective transformation from pixel coordinates (u, v, 1)^T to world point coordinates (X_w, Y_w, 0, 1)^T:

$$s\begin{pmatrix}u\\v\\1\end{pmatrix}=K\,[\,r_1\;\;r_2\;\;t\,]\begin{pmatrix}X_w\\Y_w\\1\end{pmatrix}\tag{4}$$

where R = [r_1 r_2 r_3] is the rotation matrix, t is the translation vector, and s is a scale factor;
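The projection chain of equations (1)-(3) is straightforward to exercise numerically. The following sketch is illustrative and not part of the claimed method; all parameter values are assumptions:

```python
import numpy as np

f = 0.008            # physical focal length in metres (assumed)
dx, dy = 5e-6, 5e-6  # pixel sizes in metres (assumed)
cx, cy = 640, 512    # principal point in pixels (assumed)

fx, fy = f / dx, f / dy  # pixel focal lengths, as defined after equation (3)
K = np.array([[fx, 0., cx],
              [0., fy, cy],
              [0., 0., 1.]])

def project(p_cam):
    """Map a camera-frame point (Xc, Yc, Zc) to pixel coordinates (u, v)."""
    uvw = K @ p_cam          # Zc * (u, v, 1)^T = K (Xc, Yc, Zc)^T, equation (3)
    return uvw[:2] / uvw[2]  # divide by Zc

print(project(np.array([0.5, -0.2, 10.0])))  # a point 10 m in front of the camera
```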
S2.2: establishing a lens distortion model;
the distortion of images taken by the camera divides into radial and tangential distortion; radial distortion is caused by the lens manufacturing process, and its magnitude increases toward the lens edge; for a two-dimensional point (x, y) in the image physical coordinate system, the radially distorted point (x_{distorted}, y_{distorted}) is described by the model:

$$\begin{aligned}x_{distorted}&=x(1+k_1r^2+k_2r^4+k_3r^6)\\y_{distorted}&=y(1+k_1r^2+k_2r^4+k_3r^6)\end{aligned}\tag{5}$$

where k_1, k_2, k_3 are the radial distortion coefficients;
tangential distortion is caused by misalignment between the lens and the CMOS or CCD sensor; for a two-dimensional point (x, y) in the image physical coordinate system, the tangentially distorted point (x_{distorted}, y_{distorted}) is described by the model:

$$\begin{aligned}x_{distorted}&=x+2p_1xy+p_2(r^2+2x^2)\\y_{distorted}&=y+2p_2xy+p_1(r^2+2y^2)\end{aligned}\tag{6}$$

where p_1, p_2 are the tangential distortion coefficients;
combining (5) and (6), the complete distortion model is described by:

$$\begin{aligned}x_{distorted}&=x(1+k_1r^2+k_2r^4+k_3r^6)+2p_1xy+p_2(r^2+2x^2)\\y_{distorted}&=y(1+k_1r^2+k_2r^4+k_3r^6)+2p_2xy+p_1(r^2+2y^2)\end{aligned}\tag{7}$$

where r^2 = x^2 + y^2;
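Equation (7) transcribes directly into code and is reused as the forward model in step S3. A minimal sketch; the coefficient values are assumptions, not calibrated values from the patent:

```python
import numpy as np

k1, k2, k3 = -0.12, 0.03, 0.0  # radial coefficients (assumed)
p1, p2 = 1e-4, -5e-5           # tangential coefficients (assumed)

def distort(x, y):
    """Apply the full distortion model of equation (7) to points (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + 2 * p2 * x * y + p1 * (r2 + 2 * y * y)
    return x_d, y_d
```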
S2.3: establishing the inter-camera extrinsic model of a binocular camera set;
defining the calibrated left-camera extrinsics as R_l, T_l and the calibrated right-camera extrinsics as R_r, T_r, construct the extrinsic conversion R, T between the two camera planes of the binocular set:

$$R=R_rR_l^{T},\qquad T=T_r-R\,T_l\tag{8}$$
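Equation (8) composes the per-camera extrinsics into a single left-to-right transform. A small sketch of that composition, assuming (as is conventional) that R_l, T_l and R_r, T_r map world coordinates into each camera frame:

```python
import numpy as np

def stereo_extrinsics(R_l, T_l, R_r, T_r):
    """Equation (8): R, T such that X_right = R @ X_left + T."""
    R = R_r @ R_l.T       # R = R_r R_l^T
    T = T_r - R @ T_l     # T = T_r - R T_l
    return R, T
```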
S2.4: establishing the extrinsic model from the binocular systems to the track plane center coordinate system;
the extrinsic conversion is calibrated off-line so that binocular results can be expressed in the track plane center coordinate system. Taking the center of the rail surface beneath the vehicle body as the origin, facing the living area, with the horizontal left direction as the positive y axis, the height direction as the x axis and the direction of vehicle motion as the positive z axis, establish the track plane center coordinate system, and construct the extrinsic transformation R_z, T_z from a three-dimensional point (X_c, Y_c, Z_c, 1)^T in a binocular coordinate system to (X_w, Y_w, Z_w, 1)^T in the track plane center coordinate system:

$$\begin{pmatrix}X_w\\Y_w\\Z_w\\1\end{pmatrix}=\begin{pmatrix}R_z&T_z\\0^{T}&1\end{pmatrix}\begin{pmatrix}X_c\\Y_c\\Z_c\\1\end{pmatrix}\tag{9}$$

all cross-section data calculated in the binocular camera coordinate systems must be converted into the track plane center coordinate system for unified calculation and fusion;
S2.5: calibrating the model parameters;
the established model parameters are estimated by calibration: first, using images of an infrared-lamp calibration board in different poses, Zhang's calibration method yields the intrinsics K_l and K_r of each binocular camera pair, the distortion parameters (k_1, k_2, k_3, p_1, p_2) of each camera, and the conversion relation R, T between the cameras; then, from data pairs formed by manual profiler measurements and binocular calculation results, the R_z, T_z of each vehicle-body camera set in the track plane center coordinate system are computed, while the R_z, T_z of each under-vehicle camera set in the track plane center coordinate system are computed by least squares from the mapping between level-gauge measurements and binocular data; finally a calibration file for the whole railway section detection system is formed;
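In practice this two-stage calibration maps naturally onto standard OpenCV routines. The sketch below is an assumption about tooling, not the patent's own implementation; the board geometry and the image_pairs input are illustrative:

```python
import cv2
import numpy as np

def calibrate_rig(image_pairs, pattern=(11, 8), square=0.03):
    """Calibrate one binocular rig from (left, right) grayscale views of an
    infrared calibration board. Pattern size and square size are assumed."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts = [], [], []
    for l_img, r_img in image_pairs:
        ok_l, c_l = cv2.findChessboardCorners(l_img, pattern)
        ok_r, c_r = cv2.findChessboardCorners(r_img, pattern)
        if ok_l and ok_r:
            obj_pts.append(objp)
            left_pts.append(c_l)
            right_pts.append(c_r)

    size = image_pairs[0][0].shape[::-1]
    # Zhang's method: intrinsics K and distortion (k1, k2, p1, p2, k3) per camera.
    _, K_l, D_l, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K_r, D_r, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    # Inter-camera extrinsics (R, T), holding the intrinsics fixed.
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K_l, D_l, K_r, D_r, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_l, D_l, K_r, D_r, R, T
```

The second stage, estimating R_z, T_z from profiler and level-gauge correspondences, is an absolute-orientation least-squares fit and is not shown here.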
S3: preprocessing the laser band images;
before the image data is calculated, preprocessing operations of image correction, image enhancement, image edge detection and light band center detection are carried out in sequence;
in image correction, the image from each camera is corrected with the calibrated distortion parameters. For each pixel (u, v, 1)^T of the undistorted image, its coordinates (x, y, 1)^T in the image physical coordinate system are first computed by inverting the conversion relation of formula (3):

$$x=(u-c_x)\,dx,\qquad y=(v-c_y)\,dy\tag{10}$$

the distortion model is then applied to the result to obtain the distorted point's coordinates (x_d, y_d, 1) in the image physical coordinate system:

$$\begin{aligned}x_d&=x(1+k_1r^2+k_2r^4+k_3r^6)+2p_1xy+p_2(r^2+2x^2)\\y_d&=y(1+k_1r^2+k_2r^4+k_3r^6)+2p_2xy+p_1(r^2+2y^2)\end{aligned}\tag{11}$$

finally the distorted point is converted into the pixel coordinate system to obtain the corresponding point (u_d, v_d, 1) on the distorted image:

$$u_d=x_d/dx+c_x,\qquad v_d=y_d/dy+c_y\tag{12}$$

because pixel positions are integers, the computed coordinates must be interpolated to obtain the final coordinates (u_d, v_d, 1); thus, with the distorted image known, for every point (u, v, 1) of the undistorted image the corresponding point (u_d, v_d, 1) on the distorted image can be found, which completes the correction of the distorted image;
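Equations (10)-(12) amount to building a backward-mapping lookup table and resampling. A sketch of that procedure, assuming the distort() model from S2.2 and working in normalized coordinates (the usual convention for distortion coefficients):

```python
import cv2
import numpy as np

def undistort_image(img, K, distort):
    """Correct one distorted image: for every target pixel, find its source
    position in the distorted image (equations (10)-(12)) and resample."""
    h, w = img.shape[:2]
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx                  # equation (10): pixels -> image plane
    y = (v - cy) / fy
    x_d, y_d = distort(x, y)           # equation (11): forward distortion model
    map_u = (x_d * fx + cx).astype(np.float32)  # equation (12): back to pixels
    map_v = (y_d * fy + cy).astype(np.float32)
    # Bilinear interpolation supplies the non-integer samples mentioned above.
    return cv2.remap(img, map_u, map_v, cv2.INTER_LINEAR)
```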
S4: performing stereo rectification;
because of the instability of the moving train, the image planes of a binocular camera set mounted on it cannot be guaranteed to be perfectly aligned. To avoid the precision loss and inefficiency of two-dimensional stereo matching, the binocular cameras are stereo-rectified; perfectly aligned image planes can then be matched one-dimensionally using the epipolar constraint. The image planes are rectified with the Bouguet algorithm, whose main steps are:
S4.1: construct the rotation matrices r_l, r_r of the left and right camera image planes;
using the calibrated extrinsic rotation R between the binocular cameras, the left and right images are each rotated by half of R, giving the rotation matrices r_l, r_r:

$$r_l=R^{1/2},\qquad r_r=R^{-1/2}\tag{13}$$

where R is the calibrated extrinsic rotation;
S4.2: construct the matrix R_rect that aligns the epipolar lines;
construct the matrix R_rect that maps the epipoles to infinity and makes the epipolar lines parallel:

$$R_{rect}=\begin{pmatrix}e_1^{T}\\e_2^{T}\\e_3^{T}\end{pmatrix}\tag{14}$$

where e_1, e_2, e_3 are constructed as:

$$e_1=\frac{T}{\lVert T\rVert},\qquad e_2=\frac{1}{\sqrt{T_x^2+T_y^2}}\begin{pmatrix}-T_y\\T_x\\0\end{pmatrix},\qquad e_3=e_1\times e_2\tag{15}$$

where T = (T_x, T_y, T_z)^T is the calibrated translation vector;
S4.3: carry out the stereo rectification;
the left and right image planes are rectified by

$$R_l=R_{rect}\,r_l,\qquad R_r=R_{rect}\,r_r,\qquad \hat{x}_l=R_lx_l,\qquad \hat{x}_r=R_rx_r\tag{16}$$

where x_l, x_r are the image-plane points in the camera coordinate systems before rectification and x̂_l, x̂_r are the corresponding points after rectification; finally the image planes are rectified in the pixel coordinate system through the intrinsic relation;
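OpenCV's stereoRectify implements this Bouguet procedure, and initUndistortRectifyMap folds the rotations and intrinsics into pixel lookup maps. A sketch of how S4 could be realized with it; this is an assumption about tooling, with inputs from the calibration of S2.5:

```python
import cv2

def rectify_pair(left_img, right_img, K_l, D_l, K_r, D_r, size, R, T):
    """Bouguet stereo rectification of one image pair via OpenCV."""
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
        K_l, D_l, K_r, D_r, size, R, T, alpha=0)
    map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, D_l, R1, P1, size,
                                                 cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, D_r, R2, P2, size,
                                                 cv2.CV_32FC1)
    left_rect = cv2.remap(left_img, map_lx, map_ly, cv2.INTER_LINEAR)
    right_rect = cv2.remap(right_img, map_rx, map_ry, cv2.INTER_LINEAR)
    # After remapping, corresponding light-band points share an image row,
    # so the matching in S5 only needs to search along one dimension.
    return left_rect, right_rect, Q
```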
S5: matching the light bands;
light-band matching uses the idea of seed filling; the detailed steps are as follows:
S5.1: scan the light-band images to form a seed queue;
the light-band images are scanned from top to bottom; if the current row contains exactly one light-band point in both the left and the right image, the pair is taken as a match and added to the current Group, and once a Group exceeds a threshold size (20) it is added to the queue as a seed;
S5.2: extract the seeds and match;
after one pass over the images, seeds are extracted from the queue in turn and each seed's neighborhood is expanded upwards and downwards: if the current row contains several light-band points, the closest one is selected, and if its distance to the seed point is below a threshold (3 pixels) it is added to the seed; scanning and filling continue upwards and downwards;
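One possible realization of this seed-filling matcher is sketched below, assuming the band-center detector of S3 yields, per image row, a list of band-center column positions for each rectified image; the data layout and function names are illustrative, while the thresholds (group size 20, tolerance 3 pixels) follow the text:

```python
from collections import deque

def match_bands(left_pts, right_pts, rows, min_group=20, tol=3.0):
    """left_pts[r], right_pts[r]: band-center columns detected in row r.
    Returns {row: (x_left, x_right)}; disparity in row r is x_left - x_right."""
    matches = {}
    # Pass 1: rows that are unambiguous in both images form candidate groups.
    group, seeds = [], deque()
    for r in range(rows):
        if len(left_pts[r]) == 1 and len(right_pts[r]) == 1:
            group.append((r, left_pts[r][0], right_pts[r][0]))
        else:
            if len(group) >= min_group:
                seeds.append(group)   # large enough: a reliable seed
            group = []
    if len(group) >= min_group:
        seeds.append(group)

    # Pass 2: grow every seed up and down, always taking the nearest
    # candidate, as long as it stays within `tol` pixels of the last match.
    def grow(r0, xl, xr, step):
        r = r0 + step
        while 0 <= r < rows and r not in matches and left_pts[r] and right_pts[r]:
            nl = min(left_pts[r], key=lambda p: abs(p - xl))
            nr = min(right_pts[r], key=lambda p: abs(p - xr))
            if abs(nl - xl) > tol or abs(nr - xr) > tol:
                break
            matches[r] = (nl, nr)
            xl, xr, r = nl, nr, r + step

    for g in seeds:
        for r, xl, xr in g:
            matches[r] = (xl, xr)
        grow(g[0][0], g[0][1], g[0][2], -1)      # expand upwards
        grow(g[-1][0], g[-1][1], g[-1][2], +1)   # expand downwards
    return matches
```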
S6: three-dimensional calculation;
using the disparity of the matched points and the camera intrinsic matrix, establish the reprojection matrix Q that maps the four-dimensional information formed by an image's two-dimensional homogeneous point and its disparity to a three-dimensional homogeneous point, converting the captured tunnel-wall points from the two-dimensional pixel coordinate system into the three-dimensional camera coordinate system;
through the disparity of the matched points and the camera intrinsics, the two-dimensional points can be reprojected into three dimensions; the reprojection matrix is:

$$Q=\begin{pmatrix}1&0&0&-c_x\\0&1&0&-c_y\\0&0&0&f\\0&0&-1/T_x&(c_x-c_x')/T_x\end{pmatrix}\tag{17}$$

where c_x, c_y are the principal point coordinates of the left camera, f is the focal length of the left camera, T_x is the x-direction translation between the two cameras, and c_x' is the x coordinate of the principal point in the right image. Given a two-dimensional homogeneous point (x, y, 1) in the left image and its associated disparity d, the point can be projected into three dimensions:

$$Q\begin{pmatrix}x\\y\\d\\1\end{pmatrix}=\begin{pmatrix}X\\Y\\Z\\W\end{pmatrix}\tag{18}$$

the coordinates of the three-dimensional point are

$$(X/W,\;Y/W,\;Z/W)\tag{19}$$

expanding the product gives:

$$X=x-c_x,\qquad Y=y-c_y,\qquad Z=f,\qquad W=\frac{-d+c_x-c_x'}{T_x}\tag{20}$$

so the final three-dimensional point is:

$$\left(\frac{T_x(x-c_x)}{-d+c_x-c_x'},\;\frac{T_x(y-c_y)}{-d+c_x-c_x'},\;\frac{f\,T_x}{-d+c_x-c_x'}\right)\tag{21}$$
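A direct transcription of equations (17)-(19); this sketch is illustrative rather than the patent's implementation, and the values a caller passes would come from the rectified calibration:

```python
import numpy as np

def make_Q(f, cx, cy, Tx, cx2):
    """Reprojection matrix of equation (17); cx2 is the right principal point."""
    return np.array([[1., 0., 0.,       -cx],
                     [0., 1., 0.,       -cy],
                     [0., 0., 0.,         f],
                     [0., 0., -1. / Tx, (cx - cx2) / Tx]])

def reproject(Q, x, y, d):
    """Equations (18)-(19): lift pixel (x, y) with disparity d to a 3D point."""
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return np.array([X, Y, Z]) / W
```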
S7: coordinate system conversion;
the three-dimensional results are converted into the track plane center coordinate system through the calibrated R_z, T_z from each binocular camera coordinate system to the track plane center coordinate system, i.e. by formula (9);
S8: result correction;
the results are corrected with the rotation and translation matrix measured by the vehicle-body pose estimation system, and standard clearance data are obtained by interpolation;
let the body tilt angle estimated by the pose estimation system be θ, the rightward deviation be Δx, the uncorrected coordinates in the track plane center coordinate system be (x', y'), and the corrected coordinates be (x, y); the correction rotates through θ and removes the lateral offset:

$$\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix}\begin{pmatrix}x'\\y'\end{pmatrix}-\begin{pmatrix}0\\\Delta x\end{pmatrix}\tag{22}$$

this yields the corrected dense clearance data at different heights; the clearance half-width at the standard heights is then obtained by interpolation, completing the clearance data calculation.
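A sketch of this last step, under the rotation-plus-offset reading of formula (22) given above (the exact sign conventions are an assumption) and with np.interp standing in for the unspecified interpolation method:

```python
import numpy as np

def correct_profile(pts, theta, dx):
    """pts: Nx2 array of uncorrected (x', y') track-plane coordinates,
    x being the height axis and y the lateral axis of S2.4."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    out = pts @ R.T            # undo the estimated body tilt theta
    out[:, 1] -= dx            # remove the rightward deviation
    return out

def half_widths(pts, standard_heights):
    """Interpolate the clearance half-width at each standard height."""
    order = np.argsort(pts[:, 0])
    return np.interp(standard_heights, pts[order, 0], pts[order, 1])
```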
The invention has the beneficial effects that:
1. Compared with traditional cross-section measurement methods such as polar-coordinate measurement and laser scanning, the method uses computer-vision acquisition and calculation, can acquire wide-range, dense section data at one time, offers high efficiency and a wide detection range, and reduces the interpolation error caused by insufficient data, giving high precision.
2. Compared with the television measurement principle, binocular stereo vision recovers true three-dimensional information from the images and can correct the precision loss that occurs when drift of the laser device places the light band on different cross sections; it also has good environmental robustness.
3. The method is suited to dynamic section measurement during high-speed operation, acquires data from multiple sections of the same tunnel more efficiently, and improves the precision of the final clearance result.
4. The active strip light source and the seed matching algorithm greatly reduce the computational cost of stereo matching in the binocular calculation, further improve the accuracy of the clearance results, and enable clearance measurement under extremely low illumination.
Drawings
FIG. 1 is a flow chart of clearance data calculation based on stereoscopic vision measurement in a specific application example of the invention.
FIG. 2 is a schematic diagram of the machine vision setup used in the invention.
FIG. 3 is a data processing flow diagram of light-band matching during the binocular stereo vision calculation of the invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
Based on a vehicle-mounted vision acquisition device, the invention exploits the fact that the relative position of the cameras and the vehicle underframe does not change while the vehicle runs: it establishes the extrinsic conversion between the track plane center coordinate system and the camera coordinate systems and corrects the results with the vehicle-body attitude estimate, so that cross-section data in the track plane center coordinate system can be measured dynamically on the basis of binocular vision measurement.
The tunnel clearance data measuring method based on stereoscopic vision provided by the invention mainly comprises acquisition device design, relative coordinate system establishment and calibration, laser light-band image preprocessing, stereo rectification, light-band matching, three-dimensional calculation, coordinate system conversion and result correction. The method comprises the following steps:
S1: design of the acquisition device;
A railway tunnel detection vehicle equipped with an active light source, binocular camera sets and a vehicle-body attitude estimation system is designed so that it can dynamically acquire clearance data with its vision system.
The active light source consists of nine 808 nm near-infrared strip light sources arranged annularly on one cross section of the vehicle: seven on the vehicle body project onto the whole cross section, and two at the vehicle bottom project onto the inner edges of the rails on either side.
Eighteen binocular camera sets are distributed uniformly around the same cross section as the lasers: sixteen collect light-band data of the whole section, and two at the vehicle bottom collect rail light-band data for vehicle-body attitude estimation.
The vehicle-body attitude estimation system may alternatively use an attitude sensor; here, the track gauge and rail height difference are computed by binocular measurement from the rail laser-band images captured by the under-vehicle binocular camera sets and compared with the standard rail file, from which the running attitude of the vehicle at any cross section is calculated.
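How that comparison could yield the pose parameters used in S8 is sketched below; this is an illustrative reconstruction (the patent does not spell out the computation), assuming the under-vehicle pairs deliver 3D points on the two inner rail edges in the track frame of S2.4:

```python
import numpy as np

def estimate_pose(left_rail, right_rail):
    """left_rail, right_rail: Nx3 arrays of measured rail-edge points,
    columns ordered (height x, lateral y, along-track z) as in S2.4."""
    dz = right_rail[:, 0].mean() - left_rail[:, 0].mean()          # rail height difference
    gauge = abs(right_rail[:, 1].mean() - left_rail[:, 1].mean())  # measured gauge
    theta = np.arctan2(dz, gauge)   # superelevation angle -> body tilt estimate
    center = (left_rail[:, 1].mean() + right_rail[:, 1].mean()) / 2.0
    dx = -center                    # lateral offset of the body from track center
    return theta, dx                # the (theta, dx) consumed by step S8
```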
S2: establishing and calibrating a relative coordinate system;
S2.1: camera projection model in homogeneous coordinates;
Taking the left camera of a binocular camera set as the reference, the invention constructs the perspective projection model from a space point (X_c, Y_c, Z_c)^T in the camera coordinate system to the corresponding two-dimensional point (x, y, 1)^T in the image physical coordinate system:

$$Z_c\begin{pmatrix}x\\y\\1\end{pmatrix}=\begin{pmatrix}f&0&0\\0&f&0\\0&0&1\end{pmatrix}\begin{pmatrix}X_c\\Y_c\\Z_c\end{pmatrix}\tag{1}$$
where f is the physical focal length.
According to the camera imaging model, the conversion from a point (x, y, 1)^T in the image physical coordinate system to the corresponding point (u, v, 1)^T in the image pixel coordinate system is constructed:

$$\begin{pmatrix}u\\v\\1\end{pmatrix}=\begin{pmatrix}1/dx&0&c_x\\0&1/dy&c_y\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\1\end{pmatrix}\tag{2}$$

where dx, dy are the pixel sizes and c_x, c_y are the principal point offsets in the x and y directions.
Combining (1) and (2) gives the intrinsic mapping from a space point (X_c, Y_c, Z_c)^T in the camera coordinate system to the corresponding homogeneous pixel coordinates (u, v, 1)^T on the image plane:

$$Z_c\begin{pmatrix}u\\v\\1\end{pmatrix}=\begin{pmatrix}f_x&0&c_x\\0&f_y&c_y\\0&0&1\end{pmatrix}\begin{pmatrix}X_c\\Y_c\\Z_c\end{pmatrix}=K\begin{pmatrix}X_c\\Y_c\\Z_c\end{pmatrix}\tag{3}$$

where f_x = f/dx and f_y = f/dy are the focal lengths in pixel units along x and y, and K is the intrinsic matrix. Meanwhile, for convenience of subsequent calibration, the XOY plane of the world coordinate system is set as the calibration-object plane, i.e. the plane Z = 0, and the projective transformation from pixel coordinates (u, v, 1)^T to world point coordinates (X_w, Y_w, 0, 1)^T is constructed:

$$s\begin{pmatrix}u\\v\\1\end{pmatrix}=K\,[\,r_1\;\;r_2\;\;t\,]\begin{pmatrix}X_w\\Y_w\\1\end{pmatrix}\tag{4}$$

where R = [r_1 r_2 r_3] is the rotation matrix, t is the translation vector, and s is a scale factor.
S2.2: establishing a lens distortion model;
lens distortion, which is largely divided into radial and tangential distortion, is inevitably present in images taken by the camera.
Radial distortion is caused by the lens manufacturing process, and its magnitude increases toward the lens edge. For a two-dimensional point (x, y) in the image physical coordinate system, the invention describes the radially distorted point (x_{distorted}, y_{distorted}) by the model:

$$\begin{aligned}x_{distorted}&=x(1+k_1r^2+k_2r^4+k_3r^6)\\y_{distorted}&=y(1+k_1r^2+k_2r^4+k_3r^6)\end{aligned}\tag{5}$$

where k_1, k_2, k_3 are the radial distortion coefficients.
Tangential distortion is caused by misalignment between the lens and the CMOS or CCD sensor. For a two-dimensional point (x, y) in the image physical coordinate system, the invention describes the tangentially distorted point (x_{distorted}, y_{distorted}) by the model:

$$\begin{aligned}x_{distorted}&=x+2p_1xy+p_2(r^2+2x^2)\\y_{distorted}&=y+2p_2xy+p_1(r^2+2y^2)\end{aligned}\tag{6}$$

where p_1, p_2 are the tangential distortion coefficients.
Combining (5) and (6), the complete distortion model is described by:

$$\begin{aligned}x_{distorted}&=x(1+k_1r^2+k_2r^4+k_3r^6)+2p_1xy+p_2(r^2+2x^2)\\y_{distorted}&=y(1+k_1r^2+k_2r^4+k_3r^6)+2p_2xy+p_1(r^2+2y^2)\end{aligned}\tag{7}$$

where r^2 = x^2 + y^2.
S2.3: establishing an external parameter inline model of a binocular camera set;
Defining the calibrated left-camera extrinsics as R_l, T_l and the calibrated right-camera extrinsics as R_r, T_r, the extrinsic conversion R, T between the two camera planes of the binocular set can be established:

$$R=R_rR_l^{T},\qquad T=T_r-R\,T_l\tag{8}$$
S2.4: establishing the extrinsic model from the binocular systems to the track plane center coordinate system;
Because the relative positions of the carriage and the vehicle underframe are fixed while the vehicle body moves, the extrinsic conversion is calibrated off-line so that binocular results are obtained in the track plane center coordinate system. The invention takes the center of the rail surface beneath the vehicle body as the origin, facing the living area, with the horizontal left direction as the positive y axis, the height direction as the x axis and the direction of vehicle motion as the positive z axis, establishes the track plane center coordinate system, and constructs the extrinsic transformation R_z, T_z from a three-dimensional point (X_c, Y_c, Z_c, 1)^T in a binocular coordinate system to (X_w, Y_w, Z_w, 1)^T in the track plane center coordinate system:

$$\begin{pmatrix}X_w\\Y_w\\Z_w\\1\end{pmatrix}=\begin{pmatrix}R_z&T_z\\0^{T}&1\end{pmatrix}\begin{pmatrix}X_c\\Y_c\\Z_c\\1\end{pmatrix}\tag{9}$$

All cross-section data calculated in the binocular camera coordinate systems must be converted into the track plane center coordinate system for unified calculation and fusion.
S2.5: calibrating model parameters;
and estimating the established model parameters by using a calibration method. Firstly, calculating the respective insides of each group of binocular cameras by using infrared lamp calibration plate images under different postures through a Zhang calibration methodGinseng KlAnd KrDistortion parameter (k) of each camera1,k2,k3,p1,p2) And a translation relationship R, T between cameras. Then, the R of each car body camera set under the orbit plane centering coordinate system is calculated through a data pair formed by the measurement result of the artificial profiler and the binocular calculation resultz,TzSimultaneously, the mapping relation between the measurement data of the level meter and the binocular data is utilized, and the R of each vehicle bottom camera set under the rail plane centering coordinate system is calculated through the least square methodz,TzAnd finally forming a calibration file of the whole railway section detection system.
S3: preprocessing laser band images;
before the image data is calculated, preprocessing operations such as image correction, image enhancement, image edge detection and light band center detection are sequentially carried out.
In image correction, the image from each camera is corrected with the calibrated distortion parameters. For each pixel (u, v, 1)^T of the undistorted image, its coordinates (x, y, 1)^T in the image physical coordinate system are first computed by inverting the conversion relation of formula (3):

$$x=(u-c_x)\,dx,\qquad y=(v-c_y)\,dy\tag{10}$$

The distortion model is then applied to the result to obtain the distorted point's coordinates (x_d, y_d, 1) in the image physical coordinate system:

$$\begin{aligned}x_d&=x(1+k_1r^2+k_2r^4+k_3r^6)+2p_1xy+p_2(r^2+2x^2)\\y_d&=y(1+k_1r^2+k_2r^4+k_3r^6)+2p_2xy+p_1(r^2+2y^2)\end{aligned}\tag{11}$$

Finally the distorted point is converted into the pixel coordinate system to obtain the corresponding point (u_d, v_d, 1) on the distorted image:

$$u_d=x_d/dx+c_x,\qquad v_d=y_d/dy+c_y\tag{12}$$

Because pixel positions are integers, the computed coordinates must be interpolated to obtain the final coordinates (u_d, v_d, 1), so that, with the distorted image known, for every point (u, v, 1) of the undistorted image the corresponding point (u_d, v_d, 1) on the distorted image can be found, which completes the correction of the distorted image.
S4: performing stereo rectification;
Because of the instability of the moving train, the image planes of a binocular camera set mounted on it cannot be guaranteed to be perfectly aligned. To avoid the precision loss and inefficiency of two-dimensional stereo matching, the binocular cameras are stereo-rectified; perfectly aligned image planes can then be matched one-dimensionally using the epipolar constraint.
The image planes are stereo-rectified with the Bouguet algorithm. The main steps are as follows:
S4.1: construct the rotation matrices r_l, r_r of the left and right camera image planes;
The basic idea of the Bouguet algorithm is to rotate the left and right images each by half of the rotation R between the binocular cameras; the invention constructs the rotation matrices r_l, r_r as

$$r_l=R^{1/2},\qquad r_r=R^{-1/2}\tag{13}$$

where R is the calibrated extrinsic rotation.
S4.2.: construction matrix RrectRealizing polar line alignment;
in order to align the polar lines in parallel, the invention constructs a matrix R transforming the poles to infinity and aligning the polar lines in parallelrect
Figure BDA0002300010390000131
Wherein e1,e2,e3Is constructed as follows
Figure BDA0002300010390000132
Figure BDA0002300010390000133
e3=e1×e2(35)
Wherein T is a calibrated translation vector.
S4.3: carry out the stereo rectification;
the left and right image planes are subjected to stereo rectification,
Rl=Rrect·rl
Rr=Rrect·rr
Figure BDA0002300010390000141
Figure BDA0002300010390000142
wherein xl,xrRespectively, points of the image plane in the coordinate system of the camera before correction,
Figure BDA0002300010390000143
and finally, correcting the image plane under the pixel coordinate system through the internal reference relation.
S5: matching light bands;
and (4) carrying out light band matching by utilizing the idea of seed filling. As shown in fig. 3, the main steps are as follows:
S5.1: scan the light-band images to form a seed queue;
The light-band images are scanned from top to bottom. If the current row contains exactly one light-band point in both the left and the right image, the pair is taken as a match and added to the current Group; once a Group exceeds a threshold size (20), it is added to the queue as a seed.
S5.2: extracting seeds and matching;
and after the image is scanned for one time, sequentially extracting seeds from the queue, and expanding the neighborhood of each seed upwards and downwards respectively: if the current row has a plurality of light band points, the closest point is selected, if the distance between the closest point and the seed point is smaller than a threshold value (3 pixels), the closest point is added into the seed, and the scanning filling is continuously carried out upwards and downwards.
S6: three-dimensional calculation;
Through the disparity of the matched points and the camera intrinsics, the two-dimensional points can be reprojected into three dimensions. The reprojection matrix is:

$$Q=\begin{pmatrix}1&0&0&-c_x\\0&1&0&-c_y\\0&0&0&f\\0&0&-1/T_x&(c_x-c_x')/T_x\end{pmatrix}\tag{17}$$

where c_x, c_y are the principal point coordinates of the left camera, f is the focal length of the left camera, T_x is the x-direction translation between the two cameras, and c_x' is the x coordinate of the principal point in the right image. Given a two-dimensional homogeneous point (x, y, 1) in the left image and its associated disparity d, the point can be projected into three dimensions:

$$Q\begin{pmatrix}x\\y\\d\\1\end{pmatrix}=\begin{pmatrix}X\\Y\\Z\\W\end{pmatrix}\tag{18}$$

The coordinates of the three-dimensional point are

$$(X/W,\;Y/W,\;Z/W)\tag{19}$$

Further expansion gives:

$$X=x-c_x,\qquad Y=y-c_y,\qquad Z=f,\qquad W=\frac{-d+c_x-c_x'}{T_x}\tag{20}$$

The final three-dimensional point is:

$$\left(\frac{T_x(x-c_x)}{-d+c_x-c_x'},\;\frac{T_x(y-c_y)}{-d+c_x-c_x'},\;\frac{f\,T_x}{-d+c_x-c_x'}\right)\tag{21}$$
S7: coordinate system conversion;
The three-dimensional results are converted into the track plane center coordinate system using formula (9), through the calibrated R_z, T_z from each binocular camera coordinate system to the track plane center coordinate system.
S8: correcting the result;
and correcting the result by using a rotation and translation matrix measured by the vehicle body pose estimation system. The vehicle body inclination angle estimated by the vehicle body pose estimation system is theta, the right deviation is delta x, the uncorrected coordinate value of the coordinate system in the orbit plane is (x ', y'), the corrected coordinate value is (x, y), and the correction is carried out by the following formula,
Figure BDA0002300010390000156
and obtaining the corrected intensive limit data at different heights, further obtaining a limit half-width value at the standard height by an interpolation method, and finishing the calculation of the limit data.
In conclusion, the tunnel clearance data measuring method based on stereoscopic vision provided by the invention achieves rapid and accurate detection of the tunnel clearance from a vehicle-mounted detection platform, remedying the shortcomings of current tunnel clearance detection.

Claims (1)

1. A tunnel clearance data measuring method based on stereoscopic vision, characterized by comprising the following steps:
S1: designing a railway tunnel detection vehicle carrying an active light source, binocular camera sets and a vehicle-body attitude estimation system, so that the vehicle can dynamically acquire clearance data with its vision system;
the active light source is a near-infrared source, and the binocular camera sets are arranged uniformly around the vehicle body; the vehicle-body attitude estimation system also uses binocular camera sets: the three-dimensional shape of the rails is measured by stereoscopic vision and compared with a standard rail file to estimate the attitude;
S2: establishing and calibrating the relative coordinate systems;
constructing the image pixel coordinate system, the image physical coordinate system and the camera coordinate system; establishing the projective conversion from image space to three-dimensional space using homogeneous coordinates; constructing the camera distortion model and the inter-camera extrinsic model; constructing the track plane center coordinate system and establishing the extrinsic conversion from the camera coordinate systems to it; and estimating the model parameters by calibration; the specific steps are as follows:
S2.1: camera projection model in homogeneous coordinates;
taking the left camera of a binocular camera set as the reference, construct the perspective projection model from a space point (X_c, Y_c, Z_c)^T in the camera coordinate system to the corresponding two-dimensional point (x, y, 1)^T in the image physical coordinate system:

$$Z_c\begin{pmatrix}x\\y\\1\end{pmatrix}=\begin{pmatrix}f&0&0\\0&f&0\\0&0&1\end{pmatrix}\begin{pmatrix}X_c\\Y_c\\Z_c\end{pmatrix}\tag{1}$$

where f is the physical focal length;
according to the camera imaging model, construct the conversion from a point (x, y, 1)^T in the image physical coordinate system to the corresponding point (u, v, 1)^T in the image pixel coordinate system:

$$\begin{pmatrix}u\\v\\1\end{pmatrix}=\begin{pmatrix}1/dx&0&c_x\\0&1/dy&c_y\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\1\end{pmatrix}\tag{2}$$

where dx, dy are the pixel sizes and c_x, c_y are the principal point offsets in the x and y directions;
combining (1) and (2) gives the intrinsic mapping from a space point (X_c, Y_c, Z_c)^T in the camera coordinate system to the corresponding homogeneous pixel coordinates (u, v, 1)^T on the image plane:

$$Z_c\begin{pmatrix}u\\v\\1\end{pmatrix}=\begin{pmatrix}f_x&0&c_x\\0&f_y&c_y\\0&0&1\end{pmatrix}\begin{pmatrix}X_c\\Y_c\\Z_c\end{pmatrix}=K\begin{pmatrix}X_c\\Y_c\\Z_c\end{pmatrix}\tag{3}$$

where f_x = f/dx and f_y = f/dy are the focal lengths in pixel units along x and y, and K is the intrinsic matrix; setting the XOY plane of the world coordinate system as the calibration-object plane, i.e. the plane Z = 0, construct the projective transformation from pixel coordinates (u, v, 1)^T to world point coordinates (X_w, Y_w, 0, 1)^T:

$$s\begin{pmatrix}u\\v\\1\end{pmatrix}=K\,[\,r_1\;\;r_2\;\;t\,]\begin{pmatrix}X_w\\Y_w\\1\end{pmatrix}\tag{4}$$

where R = [r_1 r_2 r_3] is the rotation matrix, t is the translation vector, and s is a scale factor;
S2.2: establishing a lens distortion model;
the distortion of images taken by the camera divides into radial and tangential distortion; radial distortion is caused by the lens manufacturing process, its magnitude increasing toward the lens edge; for a two-dimensional point (x, y) in the image physical coordinate system, the radially distorted point (x_{distorted}, y_{distorted}) is described by the model:

$$\begin{aligned}x_{distorted}&=x(1+k_1r^2+k_2r^4+k_3r^6)\\y_{distorted}&=y(1+k_1r^2+k_2r^4+k_3r^6)\end{aligned}\tag{5}$$

where k_1, k_2, k_3 are the radial distortion coefficients;
tangential distortion is caused by misalignment between the lens and the CMOS or CCD sensor; for a two-dimensional point (x, y) in the image physical coordinate system, the tangentially distorted point (x_{distorted}, y_{distorted}) is described by the model:

$$\begin{aligned}x_{distorted}&=x+2p_1xy+p_2(r^2+2x^2)\\y_{distorted}&=y+2p_2xy+p_1(r^2+2y^2)\end{aligned}\tag{6}$$

where p_1, p_2 are the tangential distortion coefficients;
combining (5) and (6), the complete distortion model is described by:

$$\begin{aligned}x_{distorted}&=x(1+k_1r^2+k_2r^4+k_3r^6)+2p_1xy+p_2(r^2+2x^2)\\y_{distorted}&=y(1+k_1r^2+k_2r^4+k_3r^6)+2p_2xy+p_1(r^2+2y^2)\end{aligned}\tag{7}$$

where r^2 = x^2 + y^2;
S2.3: establishing the inter-camera extrinsic model of a binocular camera set;
defining the calibrated left-camera extrinsics as R_l, T_l and the calibrated right-camera extrinsics as R_r, T_r, construct the extrinsic conversion R, T between the two camera planes of the binocular set:

$$R=R_rR_l^{T},\qquad T=T_r-R\,T_l\tag{8}$$
S2.4: establishing the extrinsic model from the binocular systems to the track plane center coordinate system;
the extrinsic conversion is calibrated off-line so that binocular results can be expressed in the track plane center coordinate system; taking the center of the rail surface beneath the vehicle body as the origin, facing the living area, with the horizontal left direction as the positive y axis, the height direction as the x axis and the direction of vehicle motion as the positive z axis, establish the track plane center coordinate system, and construct the extrinsic transformation R_z, T_z from a three-dimensional point (X_c, Y_c, Z_c, 1)^T in a binocular coordinate system to (X_w, Y_w, Z_w, 1)^T in the track plane center coordinate system:

$$\begin{pmatrix}X_w\\Y_w\\Z_w\\1\end{pmatrix}=\begin{pmatrix}R_z&T_z\\0^{T}&1\end{pmatrix}\begin{pmatrix}X_c\\Y_c\\Z_c\\1\end{pmatrix}\tag{9}$$

all cross-section data calculated in the binocular camera coordinate systems must be converted into the track plane center coordinate system for unified calculation and fusion;
S2.5: calibrating the model parameters;
the established model parameters are estimated by calibration: first, using images of an infrared-lamp calibration board in different poses, Zhang's calibration method yields the intrinsics K_l and K_r of each binocular camera pair, the distortion parameters (k_1, k_2, k_3, p_1, p_2) of each camera, and the conversion relation R, T between the cameras; then, from data pairs formed by manual profiler measurements and binocular calculation results, the R_z, T_z of each vehicle-body camera set in the track plane center coordinate system are computed, while the R_z, T_z of each under-vehicle camera set in the track plane center coordinate system are computed by least squares from the mapping between level-gauge measurements and binocular data; finally a calibration file for the whole railway section detection system is formed;
S3: preprocessing the laser band images;
before the image data are used for calculation, preprocessing operations of image correction, image enhancement, image edge detection and light-band center detection are carried out in sequence;
in image correction, the image from each camera is corrected with the calibrated distortion parameters; for each pixel (u, v, 1)^T of the undistorted image, its coordinates (x, y, 1)^T in the image physical coordinate system are first computed by inverting the conversion relation of formula (3):

$$x=(u-c_x)\,dx,\qquad y=(v-c_y)\,dy\tag{10}$$

the distortion model is then applied to the result to obtain the distorted point's coordinates (x_d, y_d, 1) in the image physical coordinate system:

$$\begin{aligned}x_d&=x(1+k_1r^2+k_2r^4+k_3r^6)+2p_1xy+p_2(r^2+2x^2)\\y_d&=y(1+k_1r^2+k_2r^4+k_3r^6)+2p_2xy+p_1(r^2+2y^2)\end{aligned}\tag{11}$$

finally the distorted point is converted into the pixel coordinate system to obtain the corresponding point (u_d, v_d, 1) on the distorted image:

$$u_d=x_d/dx+c_x,\qquad v_d=y_d/dy+c_y\tag{12}$$

because pixel positions are integers, the computed coordinates must be interpolated to obtain the final coordinates (u_d, v_d, 1); thus, with the distorted image known, for every point (u, v, 1) of the undistorted image the corresponding point (u_d, v_d, 1) on the distorted image can be found, which completes the correction of the distorted image;
S4: performing stereo rectification;
because of the instability of the moving train, the image planes of a binocular camera set mounted on it cannot be guaranteed to be perfectly aligned; to avoid the precision loss and inefficiency of two-dimensional stereo matching, the binocular cameras are stereo-rectified, and the perfectly aligned image planes can then be matched one-dimensionally using the epipolar constraint; the image planes are rectified with the Bouguet algorithm, whose main steps are:
S4.1: construct the rotation matrices r_l, r_r of the left and right camera image planes;
using the calibrated extrinsic rotation R between the binocular cameras, the left and right images are each rotated by half of R, giving the rotation matrices r_l, r_r:

$$r_l=R^{1/2},\qquad r_r=R^{-1/2}\tag{13}$$

where R is the calibrated extrinsic rotation;
S4.2: construct the matrix R_rect that aligns the epipolar lines;
construct the matrix R_rect that maps the epipoles to infinity and makes the epipolar lines parallel:

$$R_{rect}=\begin{pmatrix}e_1^{T}\\e_2^{T}\\e_3^{T}\end{pmatrix}\tag{14}$$

where e_1, e_2, e_3 are constructed as:

$$e_1=\frac{T}{\lVert T\rVert},\qquad e_2=\frac{1}{\sqrt{T_x^2+T_y^2}}\begin{pmatrix}-T_y\\T_x\\0\end{pmatrix},\qquad e_3=e_1\times e_2\tag{15}$$

where T = (T_x, T_y, T_z)^T is the calibrated translation vector;
S4.3: carry out the stereo rectification;
the left and right image planes are rectified by

$$R_l=R_{rect}\,r_l,\qquad R_r=R_{rect}\,r_r,\qquad \hat{x}_l=R_lx_l,\qquad \hat{x}_r=R_rx_r\tag{16}$$

where x_l, x_r are the image-plane points in the camera coordinate systems before rectification and x̂_l, x̂_r are the corresponding points after rectification; finally the image planes are rectified in the pixel coordinate system through the intrinsic relation;
S5: matching the light bands;
light-band matching uses the idea of seed filling; the detailed steps are as follows:
S5.1: scan the light-band images to form a seed queue;
the light-band images are scanned from top to bottom; if the current row contains exactly one light-band point in both the left and the right image, the pair is taken as a match and added to the current Group, and once a Group exceeds a threshold size (20) it is added to the queue as a seed;
S5.2: extract the seeds and match;
after one pass over the images, seeds are extracted from the queue in turn and each seed's neighborhood is expanded upwards and downwards: if the current row contains several light-band points, the closest one is selected, and if its distance to the seed point is below a threshold (3 pixels) it is added to the seed; scanning and filling continue upwards and downwards;
S6: three-dimensional calculation;
using the disparity of the matched points and the camera intrinsic matrix, establish the reprojection matrix Q that maps the four-dimensional information formed by an image's two-dimensional homogeneous point and its disparity to a three-dimensional homogeneous point, converting the captured tunnel-wall points from the two-dimensional pixel coordinate system into the three-dimensional camera coordinate system;
through the disparity of the matched points and the camera intrinsics, the two-dimensional points can be reprojected into three dimensions; the reprojection matrix is:

$$Q=\begin{pmatrix}1&0&0&-c_x\\0&1&0&-c_y\\0&0&0&f\\0&0&-1/T_x&(c_x-c_x')/T_x\end{pmatrix}\tag{17}$$

where c_x, c_y are the principal point coordinates of the left camera, f is the focal length of the left camera, T_x is the x-direction translation between the two cameras, and c_x' is the x coordinate of the principal point in the right image; given a two-dimensional homogeneous point (x, y, 1) in the left image and its associated disparity d, the point can be projected into three dimensions:

$$Q\begin{pmatrix}x\\y\\d\\1\end{pmatrix}=\begin{pmatrix}X\\Y\\Z\\W\end{pmatrix}\tag{18}$$

the coordinates of the three-dimensional point are

$$(X/W,\;Y/W,\;Z/W)\tag{19}$$

expanding the product gives:

$$X=x-c_x,\qquad Y=y-c_y,\qquad Z=f,\qquad W=\frac{-d+c_x-c_x'}{T_x}\tag{20}$$

so the final three-dimensional point is:

$$\left(\frac{T_x(x-c_x)}{-d+c_x-c_x'},\;\frac{T_x(y-c_y)}{-d+c_x-c_x'},\;\frac{f\,T_x}{-d+c_x-c_x'}\right)\tag{21}$$
S7: coordinate system conversion;
converting the three-dimensional results into the track plane center coordinate system through the calibrated R_z, T_z from each binocular camera coordinate system to the track plane center coordinate system, i.e. by formula (9);
S8: result correction;
the results are corrected with the rotation and translation matrix measured by the vehicle-body pose estimation system, and standard clearance data are obtained by interpolation;
let the body tilt angle estimated by the pose estimation system be θ, the rightward deviation be Δx, the uncorrected coordinates in the track plane center coordinate system be (x', y'), and the corrected coordinates be (x, y); the correction rotates through θ and removes the lateral offset:

$$\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix}\begin{pmatrix}x'\\y'\end{pmatrix}-\begin{pmatrix}0\\\Delta x\end{pmatrix}\tag{22}$$

this yields the corrected dense clearance data at different heights; the clearance half-width at the standard heights is then obtained by interpolation, completing the clearance data calculation.
CN201911217935.4A 2019-12-03 2019-12-03 Tunnel clearance data measuring method based on stereoscopic vision Active CN111091076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911217935.4A CN111091076B (en) Tunnel clearance data measuring method based on stereoscopic vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911217935.4A CN111091076B (en) Tunnel clearance data measuring method based on stereoscopic vision

Publications (2)

Publication Number Publication Date
CN111091076A true CN111091076A (en) 2020-05-01
CN111091076B CN111091076B (en) 2022-03-11

Family

ID=70393840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911217935.4A Active CN111091076B (en) Tunnel clearance data measuring method based on stereoscopic vision

Country Status (1)

Country Link
CN (1) CN111091076B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102602425A (en) * 2012-03-29 2012-07-25 天津大学 Locomotive limiting system and calibration method thereof
JP2016192105A (en) * 2015-03-31 2016-11-10 公益財団法人鉄道総合技術研究所 Stereo image processing method and device therefor
CN110217271A (en) * 2019-05-30 2019-09-10 High-speed railway clearance intrusion identification and monitoring system and method based on image vision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHANG YIXIN ET AL: "Freight train gauge-exceeding detection based on three-dimensional stereo vision measurement", Machine Vision and Applications *
唐艳丽: "Application of image processing technology in railway clearance detection" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology *
胡庆武 et al.: "Fast automatic detection method of railway structure clearance based on mobile binocular vision" (in Chinese), Journal of the China Railway Society *
高健 et al.: "Research on dynamic clearance measurement technology for high-speed trains based on stereo vision" (in Chinese), Railway Quality Control *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001379A (en) * 2020-05-19 2020-11-27 西安工程大学 Correction algorithm of automobile instrument fixed viewpoint reading instrument based on machine vision
CN111798478A (en) * 2020-07-07 2020-10-20 重庆大学 Method for measuring icing thickness of front edge of blade of wind driven generator
CN114509048A (en) * 2022-01-20 2022-05-17 中科视捷(南京)科技有限公司 Monocular camera-based overhead transmission line space three-dimensional information acquisition method and system
CN114509048B (en) * 2022-01-20 2023-11-07 中科视捷(南京)科技有限公司 Overhead transmission line space three-dimensional information acquisition method and system based on monocular camera
CN114863026A (en) * 2022-05-18 2022-08-05 禾多科技(北京)有限公司 Three-dimensional lane line information generation method, device, equipment and computer readable medium
CN115880687A (en) * 2023-02-09 2023-03-31 北京东方瑞丰航空技术有限公司 Method, device, equipment and medium for automatically generating infrared characteristics of target object

Also Published As

Publication number Publication date
CN111091076B (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN111091076B (en) Tunnel clearance data measuring method based on stereoscopic vision
CN107945220B (en) Binocular vision-based reconstruction method
CN102364299B (en) Calibration technology for multiple structured light projected three-dimensional profile measuring heads
CN103278138B (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN111045017A (en) Method for constructing transformer substation map of inspection robot by fusing laser and vision
CN111369630A (en) Method for calibrating multi-line laser radar and camera
CN105243637B (en) One kind carrying out full-view image joining method based on three-dimensional laser point cloud
CN110487216A (en) A kind of fringe projection 3-D scanning method based on convolutional neural networks
CN105160702A (en) Stereoscopic image dense matching method and system based on LiDAR point cloud assistance
CN107588721A (en) The measuring method and system of a kind of more sizes of part based on binocular vision
CN110189400B (en) Three-dimensional reconstruction method, three-dimensional reconstruction system, mobile terminal and storage device
CN110288659B (en) Depth imaging and information acquisition method based on binocular vision
CN112762899B (en) Fusion method of laser point cloud and BIM model with video information in visual transformer substation
CN106996748A (en) A kind of wheel footpath measuring method based on binocular vision
CN109465830B (en) Robot monocular stereoscopic vision calibration system and method
CN110378969A (en) A kind of convergence type binocular camera scaling method based on 3D geometrical constraint
CN103852060A (en) Visible light image distance measuring method based on monocular vision
CN113793270A (en) Aerial image geometric correction method based on unmanned aerial vehicle attitude information
CN109920009B (en) Control point detection and management method and device based on two-dimensional code identification
CN105374067A (en) Three-dimensional reconstruction method based on PAL cameras and reconstruction system thereof
CN106500625A (en) A kind of telecentricity stereo vision measuring apparatus and its method for being applied to the measurement of object dimensional pattern micron accuracies
CN106705962A (en) Method and system for acquiring navigation data
CN105163065A (en) Traffic speed detecting method based on camera front-end processing
CN116188558B (en) Stereo photogrammetry method based on binocular vision
CN114998399B (en) Heterogeneous optical remote sensing satellite image stereopair preprocessing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant