CN109636903B - Binocular three-dimensional reconstruction method based on jitter - Google Patents

Binocular three-dimensional reconstruction method based on jitter

Info

Publication number
CN109636903B
CN109636903B (application CN201811580453.0A)
Authority
CN
China
Prior art keywords
image
initial
points
camera
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811580453.0A
Other languages
Chinese (zh)
Other versions
CN109636903A (en)
Inventor
徐晓
徐顺雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201811580453.0A priority Critical patent/CN109636903B/en
Publication of CN109636903A publication Critical patent/CN109636903A/en
Application granted granted Critical
Publication of CN109636903B publication Critical patent/CN109636903B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a binocular three-dimensional reconstruction method based on jitter. The method uses jitter to recover a three-dimensional image with high precision during binocular imaging. A stable relative position between the cameras is ensured by moving an optical plane mirror or an equivalent optical system while keeping the cameras fixed, which benefits system calibration and improves detection precision. In addition, during the motion, the direction in which the images are least similar is sought as the direction of motion along which the epipolar geometric constraint changes, so that the validity of the matching points already found can be assessed in the shortest time and over the shortest motion stroke. If an original matching point cannot satisfy the matching condition under the new epipolar geometric constraint, it is eliminated; otherwise it is retained. Three-dimensional reconstruction is then performed from the retained matching points, which effectively reduces mismatching and inaccurate matching and completes high-precision three-dimensional reconstruction.

Description

Binocular three-dimensional reconstruction method based on jitter
Technical Field
The invention relates to the technical field of machine vision and image matching, in particular to a binocular three-dimensional reconstruction method based on jitter.
Background
Binocular three-dimensional reconstruction is an important area of machine vision. Among the various methods for binocular or multi-view three-dimensional reconstruction, the most widely used way to find corresponding points in a binocular image pair is to extract image feature points and feature lines and then perform feature matching. However, feature matching operates on the two-dimensional images, and because of binocular parallax the two images are only similar, not identical: the neighborhoods of corresponding points differ between the images. This difference causes corresponding points to be matched inaccurately, or causes mismatches when non-corresponding points are selected, so that errors or outright mistakes occur in the three-dimensional restoration.
Disclosure of Invention
The invention provides a binocular three-dimensional reconstruction method based on jitter, in order to solve the problems of mismatching and inaccurate matching produced by the matching methods used in the existing binocular-vision three-dimensional restoration process.
In order to achieve the purpose, the invention provides the following scheme:
A binocular three-dimensional reconstruction method based on jitter, the method comprising:
Step A: horizontally placing two cameras of the same model, the two cameras comprising a left camera and a right camera; determining a baseline and the angle between each camera's optical axis and the baseline; the baseline is the line connecting the main node of the left camera and the main node of the right camera;
Step B: reflecting the target object into the camera field of view using a plane mirror or an equivalent optical system; the plane mirror can rotate and translate;
Step C: calibrating with the checkerboard method to obtain the intrinsic and extrinsic parameters of the two cameras, and correcting and preprocessing the original left and right images captured by the two cameras; selecting several positions reachable by the plane mirror through rotation or translation, and repeating the camera extrinsic-parameter correction at each of them;
Step D: at an initial position, obtaining a pair of initial left and right images of the target object through the mirror surface of the plane mirror; obtaining matching points between the initial left and right images by any image matching algorithm under the epipolar geometric constraint; and calculating, from the matching points, the spatial coordinates of the corresponding points on the target object;
Step E: making the plane mirror rotate, translate, or perform a combined rotation-translation motion, and acquiring a plurality of left and right images during the motion;
Step F: from the matching points calculated in step D, computing the corresponding points, or the areas adjacent to those points, on the newly acquired left and right images; judging whether each pair of corresponding matching points between the left and right images satisfies the point-matching condition between the two images; if so, retaining the matching points; if not, eliminating them as mismatched or inaccurate points;
Step G: taking the remaining matching points as correctly matched points, and extending the results at these matching points by plane interpolation or fitting to complete the three-dimensional reconstruction.
Optionally, step B specifically includes:
selecting a plane mirror or an equivalent optical system so that, through the reflection and refraction of the optical system, the image of the target object enters the two cameras in a fixed relative geometric relationship;
driving the optical system to move by means of the rotation-translation control system, so that controllable geometric relationships exist between the left and right images sampled by the two cameras, and between the two cameras and the target object; the motion includes translation and rotation.
Optionally, step D specifically includes:
obtaining a pair of initial left and right images of the target object through the mirror surface of the plane mirror at an initial position;
determining a conversion relation between pixel coordinates and angle coordinates of the initial left and right images; the conversion relation comprises a first conversion relation, a second conversion relation, a third conversion relation and a fourth conversion relation;
converting the pixel coordinates of the initial left and right images into angle coordinates according to the conversion relation; the initial left and right images comprise an initial left image and an initial right image;
determining the feature vectors of the gray-scale images of the initial left and right images in angular coordinates;
determining the weighted Euclidean distance of corresponding points on the initial left image and the initial right image according to the feature vector;
determining matching points between the initial left image and the initial right image according to the weighted Euclidean distance and the epipolar geometric constraint;
and calculating the space coordinates of the corresponding points on the target object from the matching points.
Optionally, the converting the pixel coordinates of the initial left and right images into angle coordinates according to the conversion relationship specifically includes:
according to a first conversion relation β_L = f_Lβ(X_L, Y_L, α_L, α_R, l_RL, f, A) and a second conversion relation χ_L = f_Lχ(X_L, Y_L, α_L, α_R, l_RL, f, A), converting the pixel coordinates of the initial left image into angular coordinates, wherein (β_L, χ_L) are the angular coordinates of the initial left image; (X_L, Y_L) are the pixel coordinates of the initial left image; α_L is the angle between the optical axis of the left camera and the baseline; α_R is the angle between the optical axis of the right camera and the baseline; l_RL is the length of the baseline; f is the equivalent focal length of the cameras; A is the equivalent magnification of the cameras; f_Lβ is the first conversion relation between the left camera's image pixel coordinates and angular coordinates; f_Lχ is the second conversion relation between the left camera's image pixel coordinates and angular coordinates;
according to a third conversion relation β_R = f_Rβ(X_R, Y_R, α_L, α_R, l_RL, f, A) and a fourth conversion relation χ_R = f_Rχ(X_R, Y_R, α_L, α_R, l_RL, f, A), converting the pixel coordinates of the initial right image into angular coordinates, wherein (β_R, χ_R) are the angular coordinates of the initial right image; (X_R, Y_R) are the pixel coordinates of the initial right image; f_Rβ is the third conversion relation between the right camera's image pixel coordinates and angular coordinates; and f_Rχ is the fourth conversion relation between the right camera's image pixel coordinates and angular coordinates.
Optionally, the determining, according to the feature vector, a weighted euclidean distance of a corresponding point on the initial left and right images specifically includes:
according to the feature vectors, determining the weighted Euclidean distance of corresponding points on the initial left and right images by the weighted Euclidean distance formula
D_LR = sqrt( Σ_{i=1}^{n} w_i (T_L,i - T_R,i)² )
wherein D_LR is the weighted Euclidean distance between corresponding points on the initial left and right images; w_i is the weighting coefficient; T_L,i is a feature vector component of the gray-scale image I(β_L, χ_L) in angular coordinates; T_R,i is a feature vector component of the gray-scale image I(β_R, χ_R) in angular coordinates; and n is the total number of feature vector components.
Optionally, the determining of matching points between the initial left and right images according to the weighted Euclidean distance and the epipolar geometric constraint specifically includes:
when the weighted Euclidean distance of corresponding points on the initial left and right images is smaller than a preset threshold, and β_L and β_R satisfy the epipolar geometric constraint β_L = β_R, determining the corresponding points on the initial left and right images to be matching points between the initial left and right images.
Optionally, step E specifically includes:
making the plane mirror rotate, translate, or perform a combined rotation-translation motion, the motion trajectory of the plane mirror ensuring that each subsequently acquired pair of left and right images has an epipolar geometric relation clearly different from that of the initially acquired left and right images;
acquiring a plurality of left and right images during the motion of the plane mirror; once a pair of left and right images has been acquired, choosing the next motion path of the plane mirror so that, for most feature points, the image direction associated with the epipolar geometric constraint becomes the direction pre-judged to show the greatest difference in image results among the previously acquired left and right images.
Optionally, step F specifically includes:
determining a new coordinate obtained by the matching point moving along with the target object according to the matching point between the initial left image and the initial right image and the mirror surface motion track of the plane mirror;
determining, from the new coordinates, new gray-scale images of the newly acquired left and right images in angular coordinates;
determining a new feature vector of the new grayscale image;
determining new weighted Euclidean distances of corresponding points on the left image and the right image according to the new feature vectors;
and retaining the corresponding points whose new weighted Euclidean distance is smaller than a preset threshold, and discarding the corresponding points whose new weighted Euclidean distance is greater than or equal to the preset threshold.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides a binocular three-dimensional reconstruction method based on jitter, which utilizes the jitter to restore a three-dimensional image with high precision in the process of binocular imaging, and adopts a mode that an optical plane mirror or an equivalent optical system moves and a camera does not move to ensure that the cameras have stable relative positions, thereby being beneficial to system calibration and improving detection precision; in addition, during the movement, the most dissimilar direction is sought as much as possible to be matched as the movement direction of the approaching of the polar geometric constraint, so that the effectiveness of the matching point which is sought after can be measured in the shortest time and shortest movement stroke; if the original matching point can not meet the matching condition corresponding to the new polar geometric constraint, the original matching point is eliminated, otherwise, the original matching point is retained; and performing three-dimensional reconstruction according to the reserved matching points, so that mismatching and inaccurate matching can be effectively reduced, and high-precision three-dimensional reconstruction is completed.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of the jitter-based binocular three-dimensional reconstruction system according to the present invention;
FIG. 2 is a schematic diagram of the conversion of the planar rectangular coordinates of a camera image into angular coordinates according to the present invention;
FIG. 3 is a schematic diagram of calibration by the checkerboard method in the jitter-based three-dimensional reconstruction method according to the present invention;
FIG. 4 is a schematic diagram of the correspondence between the geometric directions of the feature vector components and the moving direction of the target object according to the present invention;
wherein fig. 4(a) is the starting reference diagram, fig. 4(b) shows the checkerboard near on the left and far on the right, fig. 4(c) shows the checkerboard far on the left and near on the right, and fig. 4(d) shows the checkerboard near the baseline.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a binocular three-dimensional reconstruction method based on jitter, and particularly relates to a method for restoring a three-dimensional image with high precision by using the jitter in a binocular imaging process, so as to solve the problems of mismatching and inaccurate matching generated by a matching method adopted in the existing binocular vision three-dimensional restoration process.
The working principle of the invention is as follows: the matching features of every feature point are computed from the gray level, or other colorimetric parameters, of an image point and its neighborhood. Because of parallax, and because the object reflects light differently in different directions, the images of corresponding points and their neighborhoods obtained by the two cameras differ slightly. This difference limits the accuracy with which two corresponding points, lying on two different camera images, can be matched to each other through feature vectors. In other words, the two points whose feature vectors are most similar are not necessarily the best match. Therefore, to keep the matching process workable, the matching condition is relaxed in real engineering applications, which causes many mismatches or low matching accuracy.
As long as images can be collected from sufficiently many different positions, mismatching can be avoided, provided the illumination of the three-dimensional object is fixed and the object's morphology is sufficiently reflected in the images. However, when the number of cameras is limited and non-active-light photography is used, both the time available to photograph the object and the time over which the lighting is stable are limited; moreover, if the cameras are moved to seek different shooting angles and the relative relationship between the two cameras is not fixed, the computation of results becomes more complex and the detection and restoration precision decreases.
Therefore, based on these two points, the invention moves the optical plane mirror or equivalent optical system but not the cameras, so that the cameras keep a stable relative position, which benefits system calibration and improves detection precision. Further, when the direction in which the images are least similar is sought as the direction of motion along which the epipolar geometric constraint changes, the validity of the matching points already found can be assessed in as short a time and over as short a motion stroke as possible. If an original matching point cannot satisfy the matching condition under the new epipolar geometric constraint, it is eliminated; otherwise it is retained.
Based on this working principle, the aim of the invention is achieved by the following technical scheme: first, the two cameras acquire images at equivalent different positions (i.e., positions with parallax) through the plane mirror or an equivalent optical system, and the corresponding points in the images are found; the three-dimensional coordinates of the target object are then calculated from the image-matching result; the image is changed by moving the plane mirror, and the changed images are acquired; according to the changed result, the coordinates of corresponding points on the original images are corrected, or some mismatched points are eliminated; finally, the three-dimensional coordinates of the target object are restored from the elimination result.
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments in order to make the objects, operating principles, technical solutions, technical features and advantages of the present invention more comprehensible.
Fig. 1 is a schematic diagram of the jitter-based binocular three-dimensional reconstruction system according to the present invention. Referring to fig. 1, the jitter-based binocular three-dimensional reconstruction method provided by the invention specifically includes:
step A: horizontally placing two cameras with the same model, wherein the two cameras with the same model comprise a left camera 101 and a right camera 102; determining a base line and an included angle between an optical axis of the camera and the base line; the baseline is a connection line between the main node of the left camera 101 and the main node of the right camera 102.
As shown in fig. 1, a left camera and a right camera of the same model are horizontally placed, so that the field of view of the two cameras is reflected by a plane mirror 103, and a target object 104 can be included in the field of view of the cameras.
Fig. 2 is a schematic diagram of converting the planar rectangular coordinates of a camera image into angular coordinates. In fig. 2, LR represents the baseline, where L and R are the main nodes of the optical systems of the left camera 101 and the right camera 102, respectively; the baseline LR is the line connecting the two main nodes. In experiments the main node may be taken simply as the center of the camera's front lens surface (though not limited to this simplification), the distance between L and R may be measured with a ruler or similar means (though not limited to ruler measurement), and this sets the baseline length l_RL used by the system.
O is the midpoint of the baseline LR. LA and RA are the optical axes of the left and right cameras, respectively; α_L and α_R are the angles between the optical axes of the left and right cameras and the baseline LR. Points A, L and R define a plane which, in epipolar geometry, is called an epipolar plane; it is also the reference epipolar plane defined in this specification. B is any point on the reference epipolar plane.
The angles between LA, RA, LB, RB and the baseline LR are denoted α_L, α_R, χ_L, χ_R, respectively; each angle is measured by rotating from the baseline toward the target point (such as point A or B in fig. 2) along the corresponding line, and is counted positive. C is a point in space on the target object that does not lie in the plane ALR; the plane CLR is another epipolar plane, different from the plane ALR, and the angle between the plane CLR and the plane ALR is denoted β, measured from OA and counted positive when rotating up toward the side where the target point lies. The angles ∠CLR and ∠CRL lie in the epipolar plane CLR, and so are denoted analogously to χ_L and χ_R as χ_LC and χ_RC, respectively.
Step B: a plane mirror or equivalent optical system 103 is used to reflect the target object 104 into the camera field of view; the plane mirror 103 can rotate and translate. The equivalent optical system 103 may be, for example, an optical system mainly composed of a roof prism.
As shown in fig. 1, the plane mirror 103 is placed so that it can rotate and translate under the control of a rotation-translation control system 105 (e.g., a stepper motor), and the geometric position of the plane mirror 103 can be determined from the motion parameters. In other words, the plane mirror 103 is the moving element of the mechanism, and both its rotation and its translation are set by given motion parameters. The rotation and translation are sufficiently precise; that is, the step increment of the stepper motor employed by the rotation-translation control system 105 is small enough that the position of the plane mirror 103 is sufficiently accurate.
Specifically, the step B includes:
step B1: selecting a plane mirror or an equivalent optical system 103, so that the image of the target 104 enters the two cameras according to a fixed relative geometric relationship through the reflection and refraction of the optical system 103;
step B2: the optical system 103 is driven to move by the rotational translation control system 105, so that controllable geometric relationships are formed between left and right images sampled by the two cameras and between the two cameras and the target 104; the motion comprises translation, rotation and translation and rotation compound motion.
Step C: camera calibration and image pre-correction: calibrating with the checkerboard method to obtain the intrinsic and extrinsic parameters of the two cameras, and correcting and preprocessing the original left and right images captured by the two cameras; selecting several positions reachable by the plane mirror through rotation or translation, and repeating the camera extrinsic-parameter correction at each of them.
FIG. 3 is a schematic diagram of calibration by the checkerboard method in the jitter-based three-dimensional reconstruction method of the present invention. As shown in fig. 3, a checkerboard 301 is placed at the position of the target object, the intrinsic and extrinsic parameters of the two cameras are obtained by checkerboard calibration, and the original left and right images obtained by the two cameras are corrected and preprocessed. The geometric parameters of the measurement system, including an accurate correction of the baseline LR parameters in fig. 1, are obtained by measurement.
The plane mirror 103 is then rotated and translated, the system extrinsic parameters are measured again (manual calibration and additional calculation may be required, because standard commercial calibration procedures have large errors), and finally the changes of the system extrinsic parameters corresponding to the motion path of the system under stepper-motor control are determined.
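The checkerboard calibration of step C can be prototyped with standard tools. The following is a minimal Python sketch using OpenCV; the board geometry, the number of mirror poses, and the file names are illustrative assumptions, not values taken from the patent.

import cv2
import numpy as np

BOARD = (9, 6)      # inner corners per row and column (assumed)
SQUARE = 20.0       # checker edge length in mm (assumed)

# Object points of one checkerboard pose, in board coordinates.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for i in range(10):                          # assumed: 10 mirror positions
    imgL = cv2.imread('left_%02d.png' % i, cv2.IMREAD_GRAYSCALE)
    imgR = cv2.imread('right_%02d.png' % i, cv2.IMREAD_GRAYSCALE)
    okL, cornL = cv2.findChessboardCorners(imgL, BOARD)
    okR, cornR = cv2.findChessboardCorners(imgR, BOARD)
    if okL and okR:
        obj_pts.append(objp)
        left_pts.append(cornL)
        right_pts.append(cornR)

size = imgL.shape[::-1]
# Intrinsic parameters of each fixed camera.
_, KL, dL, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, KR, dR, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
# Extrinsic relation (R, T) between the two cameras; repeated per mirror pose.
ret, KL, dL, KR, dR, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, KL, dL, KR, dR, size,
    flags=cv2.CALIB_FIX_INTRINSIC)

Because the cameras never move, the intrinsic parameters are calibrated once; only the extrinsic relation to the mirrored optical path needs re-checking per mirror position.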
Step D: initial matching and acquisition: at an initial position, a pair of initial left and right images of the object 104 is obtained through the mirror surface of the plane mirror 103, matching points between the initial left and right images are obtained by any image matching algorithm under the epipolar geometric constraint, and the spatial coordinates of the corresponding points on the object are calculated from the matching points.
Without loss of generality, the coordinates of the camera images are first transformed; the transformed correspondence is shown in fig. 2 (in fig. 2 the plane mirror is omitted and replaced by an equivalent optical path). The detailed steps and related definitions of step D are as follows:
(1) at the initial position, a pair of initial left and right images of the object 104 is obtained by the mirror surface of the plane mirror 103. The initial left and right images include an initial left image captured by the left camera 101 and an initial right image captured by the right camera 102.
(2) Determining the conversion relations between the pixel coordinates and the angular coordinates of the initial left and right images; the conversion relations comprise a first conversion relation f_Lβ, a second conversion relation f_Lχ, a third conversion relation f_Rβ, and a fourth conversion relation f_Rχ.
(3) And converting the pixel coordinates of the initial left and right images into angle coordinates according to the conversion relation.
As shown in fig. 2, points A, L and R define a plane, which is the reference epipolar plane and corresponds to the position β = 0 in epipolar geometry. In practice, the right optical axis often cannot be made to coincide with RA, so an equivalent optical axis must be used instead, and the coordinates of the right image captured by the right camera must be transformed correspondingly before they fit the calculations of the invention.
By calibration, the conversion between the image pixel coordinates of the left and right cameras (denoted (X_L, Y_L) and (X_R, Y_R), respectively) and the angular coordinates of the left and right cameras (denoted (β_L, χ_L) and (β_R, χ_R), respectively) can be determined, where β_L and β_R are the angle β of fig. 2 evaluated at the left and right camera images, respectively:
β_L = f_Lβ(X_L, Y_L, α_L, α_R, l_RL, f, A)    (1)
χ_L = f_Lχ(X_L, Y_L, α_L, α_R, l_RL, f, A)    (2)
β_R = f_Rβ(X_R, Y_R, α_L, α_R, l_RL, f, A)    (3)
χ_R = f_Rχ(X_R, Y_R, α_L, α_R, l_RL, f, A)    (4)
wherein (β_L, χ_L) are the angular coordinates of the initial left image; (X_L, Y_L) are the pixel coordinates of the initial left image; α_L is the angle between the optical axis of the left camera and the baseline; α_R is the angle between the optical axis of the right camera and the baseline; l_RL is the length of the baseline; f is the equivalent focal length of the cameras; A is the equivalent magnification of the cameras; f_Lβ is the first conversion relation between the left camera's image pixel coordinates and angular coordinates; f_Lχ is the second conversion relation between the left camera's image pixel coordinates and angular coordinates; (β_R, χ_R) are the angular coordinates of the initial right image; (X_R, Y_R) are the pixel coordinates of the initial right image; f_Rβ is the third conversion relation between the right camera's image pixel coordinates and angular coordinates; and f_Rχ is the fourth conversion relation between the right camera's image pixel coordinates and angular coordinates.
The pixel coordinates of the initial left image are converted into angular coordinates according to the first conversion relation (1) and the second conversion relation (2), and the pixel coordinates of the initial right image are converted into angular coordinates according to the third conversion relation (3) and the fourth conversion relation (4).
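The patent names the conversion relations f_Lβ, f_Lχ, f_Rβ and f_Rχ but does not give closed forms; they are obtained by calibration. The sketch below is one hedged reading under an assumed pinhole model: the pixel offset from the principal point is converted to ray angles, χ is the ray-to-baseline angle in the reference epipolar plane, and β is the dihedral angle, about the baseline, of the epipolar plane through the ray. All formulas here are illustrative assumptions, not the calibrated relations of the invention.

import math

def pixel_to_angles(X, Y, alpha, f, A):
    # (X, Y): pixel offsets from the principal point; alpha: angle between
    # this camera's optical axis and the baseline; f: equivalent focal
    # length; A: equivalent magnification (pixel pitch). All assumed.
    dx = math.atan2(X * A, f)            # in-plane offset from the optical axis
    chi = alpha - dx                     # ray-to-baseline angle in the reference plane
    elev = math.atan2(Y * A, math.hypot(f, X * A))    # elevation above reference plane
    beta = math.atan2(math.tan(elev), math.sin(chi))  # dihedral angle about baseline
    return beta, chi

# Example: a left-camera pixel 120 px right and 45 px above the principal point.
beta_L, chi_L = pixel_to_angles(120, 45, alpha=math.radians(60), f=8e-3, A=3.45e-6)

Under this construction, the two projections of one spatial point share the same dihedral angle, which is exactly the constraint β_L = β_R used below.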
(4) The feature vectors of the gray-scale images of the initial left and right images in angular coordinates are determined.
In the embodiment of the invention, for each point on an image, each component of the point's feature vector can be computed from the gray value, or the value of a colorimetric parameter, over a series of template windows containing the point; whether corresponding points in the two images match is then decided by the weighted Euclidean distance between their feature vectors.
In an embodiment of the invention, for the gray-scale images I(β_L, χ_L) and I(β_R, χ_R) of the initial left and right images in angular coordinates, feature vectors T(β_L, χ_L) = (T_L,1, T_L,2, ..., T_L,12) and T(β_R, χ_R) = (T_R,1, T_R,2, ..., T_R,12) are computed. The 12 feature vector components are the average gray value, the average first gray derivative and the average second gray derivative of a specified window shape along four directions: the β direction, the χ direction, the positive 45° direction in the βχ coordinate system, and the negative 45° direction in the βχ coordinate system. For example, in T(β_L, χ_L) = (T_L,1, T_L,2, ..., T_L,12): T_L,1, T_L,2, T_L,3 are the average gray value, average first gray derivative and average second gray derivative of the specified window shape along the β direction; T_L,4, T_L,5, T_L,6 are the same quantities along the χ direction (i.e., the direction perpendicular to β); T_L,7, T_L,8, T_L,9 are the same quantities along the positive 45° direction in the βχ coordinate system; and T_L,10, T_L,11, T_L,12 are the same quantities along the negative 45° direction in the βχ coordinate system, where the βχ coordinate system is the rectangular coordinate system with β as abscissa and χ as ordinate.
The points selected to participate in matching are those whose average first gray derivative along the β direction, T_2 (T_L,2 or T_R,2), is zero; in other words, these points are gray-level maxima or minima along the β direction.
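As a concrete illustration of the 12-component feature vector, the sketch below samples a gray profile through a point along each of the four directions and takes the mean gray value and the mean first and second differences; the window half-width, the mapping of β to image columns and χ to image rows, and the zero tolerance are assumptions.

import numpy as np

# Direction steps (d_chi, d_beta) for the beta, chi, +45° and -45° directions;
# mapping beta to columns and chi to rows is an assumption.
DIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def feature_vector(img, r, c, half=3):
    T = []
    for dr, dc in DIRS:
        # Gray profile through (r, c) along this direction.
        prof = np.array([img[r + k * dr, c + k * dc]
                         for k in range(-half, half + 1)], dtype=float)
        T += [prof.mean(), np.diff(prof).mean(), np.diff(prof, n=2).mean()]
    return np.array(T)        # (T1, ..., T12)

def is_candidate(img, r, c, eps=1e-3):
    # Matching candidates: gray extrema along beta, i.e. T2 close to zero.
    return abs(feature_vector(img, r, c)[1]) < eps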
(5) And determining the weighted Euclidean distance of corresponding points on the initial left image and the initial right image according to the feature vector.
The weighted Euclidean distance of corresponding points on the initial left and right images is calculated as
D_LR = sqrt( Σ_{i=1}^{n} w_i (T_L,i - T_R,i)² )    (5)
wherein D_LR is the weighted Euclidean distance between corresponding points on the initial left and right images; w_i is the preset weighting coefficient; T_L,i is a feature vector component of the gray-scale image I(β_L, χ_L) in angular coordinates; T_R,i is a feature vector component of the gray-scale image I(β_R, χ_R) in angular coordinates; and n is the total number of feature vector components, n = 12 in this embodiment.
(6) The matching points between the initial left image and the initial right image are determined according to the weighted Euclidean distance and the epipolar geometric constraint.
From fig. 2 it can be seen that the projections in the left and right cameras of any point C in three-dimensional space must satisfy, in angular coordinates, β_L = β_R. This is the epipolar geometric constraint, so two image points that can be matched must satisfy it.
If the weighted Euclidean distance D_LR of corresponding points on the initial left and right images is less than a preset threshold, and β_L and β_R satisfy the epipolar geometric constraint β_L = β_R, and any further constraints (for example, the constraint that the reconstructed three-dimensional object lies closest to the baseline) are also satisfied, then the two corresponding points in the binocular images may be considered matched; that is, the corresponding points on the initial left and right images are determined to be matching points between the initial left and right images.
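A hedged sketch of this matching decision, reusing weighted_distance from the sketch above: each left-image candidate is compared only against right-image candidates with (nearly) equal β, and the best pair below the threshold is accepted. The tolerance and threshold values are assumptions.

def match_points(cands_L, cands_R, w, beta_tol=1e-3, dist_thresh=0.5):
    # cands_*: lists of (beta, chi, T) with T the 12-component feature vector.
    matches = []
    for bL, cL, TL in cands_L:
        best, best_d = None, dist_thresh
        for bR, cR, TR in cands_R:
            if abs(bL - bR) > beta_tol:     # epipolar constraint beta_L = beta_R
                continue
            d = weighted_distance(TL, TR, w)
            if d < best_d:
                best, best_d = (bR, cR, TR), d
        if best is not None:
            matches.append(((bL, cL, TL), best, best_d))
    return matches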
(7) And calculating the space coordinates of the corresponding points on the target object from the matching points.
Step E: mirror motion and image re-acquisition: the plane mirror 103 is made to rotate, translate, or perform a combined rotation-translation motion, and a plurality of left and right images are acquired during the motion.
However, whatever matching algorithm is used on the initial left and right images, mismatched or inaccurately matched points may occur, and the spatial coordinates of object points calculated from them may deviate from the actual object surface by a large margin, even after allowing for the errors permitted by the optical and imaging systems. The method therefore eliminates mismatched or inaccurately matched points from the matching points by the jitter-based approach.
Specifically, the step E includes:
step E1: the plane reflector is made to rotate, translate or make a combined motion of rotation and translation, and the motion track of the plane reflector ensures that each pair of subsequently acquired left and right images has a significantly different polar geometric relationship with the initial left and right images acquired at the beginning.
Before the mirror surface of the plane mirror 103 moves, the movement pattern which cannot change the direction β in fig. 2 significantly, i.e. cannot change the direction corresponding to the polar geometrical constraint, is removed. For example, in embodiments of the present invention, the motion pattern that causes the reflected image of the target to exhibit a motion parallel to the baseline LR is eliminated.
Step E2: collecting a plurality of left and right images in the movement process of the plane mirror; under the condition that an image is acquired, the final result of the selection of the next motion path of the plane mirror is to enable the image direction corresponding to the polar geometric constraint condition obtained by most feature points to be the direction which is pre-judged to be the maximum difference of the image result in the previous image.
In addition to the β direction, embodiments of the present invention determine the least similar direction among three further directions, using the component groups T_4, T_5, T_6 (direction perpendicular to β), T_7, T_8, T_9 (positive 45° direction in the βχ coordinate system) and T_10, T_11, T_12 (negative 45° direction in the βχ coordinate system). The greater the sum of squared component differences, the more dissimilar that direction, i.e., the more that direction is the one in which the image results differ most. Here T_i (i = 1, 2, ..., 12) denotes T_L,i or T_R,i; for example, T_4, T_5, T_6 means T_L,4, T_L,5, T_L,6 or T_R,4, T_R,5, T_R,6.
With reference to fig. 2, this can be understood as follows: for the dual camera, moving the mirror surface of the plane mirror is equivalent to moving the observation system while the observed object (the target object) stays illuminated by the light source. Hence, when the target object moves so that its left side approaches the observation system and its right side recedes from it, a feature point that originally had β > 0 moves toward the negative 45° direction in the original βχ coordinates, while a feature point that originally had β < 0 moves toward the positive 45° direction in the original βχ coordinates.
When judging the most dissimilar direction, the sum of squared differences of the T_7, T_8, T_9 components for points with β > 0 and the sum of squared differences of the T_10, T_11, T_12 components for points with β < 0 are added together as the criterion for whether the system should rotate left-near/right-far; the sum of squared differences of T_7, T_8, T_9 for β < 0 and of T_10, T_11, T_12 for β > 0 are added together as the criterion for whether the system should rotate right-near/left-far. Dissimilarity in the direction perpendicular to β is determined by the sum of squared differences of the T_4, T_5, T_6 components, and decides whether the camera baseline LR should move toward or away from the target to obtain a new epipolar geometric constraint. The mirror surface of the plane mirror then makes the corresponding movement according to this least-similarity judgment and the preset parameters. After the movement is finished, the two cameras complete image acquisition.
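The direction-selection rule above can be sketched as follows. The input is the list of tentative matches with the sign of β and both feature vectors per match; the grouping follows T_1..T_12 (0-based indices in the code), and the mapping of the three scores to concrete mirror motions is an assumption.

import numpy as np

def pick_next_motion(matches):
    # matches: iterable of (beta, TL, TR) per tentative match.
    s_left_near = s_right_near = s_perp = 0.0
    for beta, TL, TR in matches:
        d2 = (np.asarray(TL) - np.asarray(TR)) ** 2
        s_perp += d2[3:6].sum()              # T4..T6: perpendicular to beta
        if beta > 0:
            s_left_near += d2[6:9].sum()     # T7..T9 for beta > 0
            s_right_near += d2[9:12].sum()   # T10..T12 for beta > 0
        else:
            s_left_near += d2[9:12].sum()    # T10..T12 for beta < 0
            s_right_near += d2[6:9].sum()    # T7..T9 for beta < 0
    scores = {'rotate_left_near_right_far': s_left_near,
              'rotate_right_near_left_far': s_right_near,
              'move_baseline_toward_or_away': s_perp}
    return max(scores, key=scores.get)       # most dissimilar direction wins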
Fig. 4 is a schematic diagram of the correspondence between the geometric directions of the feature vector components and the moving direction of the target object. In fig. 4, several situations are shown in which an epipolar plane 401 with β > 0 is tangent to a checkerboard 402: panel (a) is the starting reference; panel (b) shows that when the checkerboard 402 is near on the left and far on the right, the same-β direction of the captured image approaches the positive 45° direction of the initial reference image's βχ coordinates; panel (c) shows that when the checkerboard 402 is far on the left and near on the right, the same-β direction of the captured image approaches the negative 45° direction of the initial reference image's βχ coordinates; panel (d) shows that as the checkerboard 402 approaches the baseline RL, it changes significantly from the initial reference image in the direction perpendicular to β.
Step F: judging and discarding matching points: from the matching points calculated in step D, the corresponding points, or the areas adjacent to those points, on the newly acquired left and right images are computed; whether each pair of corresponding matching points between the left and right images satisfies the point-matching condition is judged; if so, the matching points are retained; if not, they are eliminated as mismatched or inaccurate points.
Specifically, the step F includes:
step F1: calculating matching point matching basis: the similarity of the feature vectors of corresponding points in a series of first captured images is compared under the current new polar geometry constraint.
After the mirror motion is completed, the coordinates (χ_L, χ_R, β) of the matching points calculated from the earliest acquired initial left and right images are considered. With the motion trajectory of the mirror surface known, the new coordinates (χ_L', χ_R', β') that each matching point (χ_L, χ_R, β) acquires as it moves together with the measured object (the target object) can be calculated from the matching points between the initial left and right images and the motion trajectory of the mirror surface of the plane mirror. From the new coordinates (χ_L', χ_R', β'), the corresponding points in the newly acquired left and right images and their new gray-scale images I(χ_L', β') and I(χ_R', β') can be found. New feature vectors are computed from the new gray-scale images I(χ_L', β') and I(χ_R', β'), and then, following equation (5), the weighted Euclidean distance between corresponding points in the newly acquired image pair is calculated as the new weighted Euclidean distance. Note that, because the images have changed, the result of this new weighted Euclidean distance calculation is not the same as the one calculated on the earliest initial left and right image pair.
Step F2: the comparison result is compared with a preset threshold, and the threshold determines which corresponding points can be retained and which fail the condition and are discarded.
The new weighted Euclidean distance is compared with the preset threshold: corresponding points whose distance is below the threshold are retained, and those at or above it are discarded. The points finally retained are the accurately matched points.
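Steps F1 and F2 together amount to the following sketch, reusing feature_vector and weighted_distance from the earlier sketches; new_position is an assumed helper that applies the known mirror motion to predict where a matched point lands in the newly acquired images.

def revalidate(matches, imgL_new, imgR_new, new_position, w, thresh):
    # matches: output of match_points; new_position maps an old match to its
    # predicted pixel positions in the newly acquired left and right images.
    kept = []
    for m in matches:
        (rL, cL), (rR, cR) = new_position(m)
        TLn = feature_vector(imgL_new, rL, cL)
        TRn = feature_vector(imgR_new, rR, cR)
        if weighted_distance(TLn, TRn, w) < thresh:
            kept.append(m)       # still consistent under the new constraint
        # otherwise: mismatched or inaccurate point, discarded
    return kept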
Step G: three-dimensional reconstruction: the remaining corresponding matching points are taken as correctly matched points, and the results at these matching points are extended by plane interpolation or fitting to complete the three-dimensional reconstruction.
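The extension of the retained matches to a dense surface in step G can be sketched with SciPy's griddata; the grid step and the linear interpolant are assumptions, and a fitting method (e.g., least-squares planes) could be used instead of interpolation.

import numpy as np
from scipy.interpolate import griddata

def reconstruct_surface(points_3d, grid_step=1.0):
    # points_3d: (N, 3) array of the spatial coordinates of retained matches.
    pts = np.asarray(points_3d, dtype=float)
    xs = np.arange(pts[:, 0].min(), pts[:, 0].max(), grid_step)
    ys = np.arange(pts[:, 1].min(), pts[:, 1].max(), grid_step)
    gx, gy = np.meshgrid(xs, ys)
    gz = griddata(pts[:, :2], pts[:, 2], (gx, gy), method='linear')
    return gx, gy, gz    # dense depth surface over the x-y footprint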
With the target object stationary, the images in the cameras can be moved in a stable and precise manner while the relative positions of the two cameras stay fixed, so the extrinsic parameter values required for image calculation remain stable and accurate. In addition, the method can quickly supplement the matching information required for image matching within a short stroke, efficiently reduce mismatching, and complete high-precision three-dimensional reconstruction.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The system disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant part can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (8)

1. A binocular three-dimensional reconstruction method based on jitter is characterized by comprising the following steps:
step A: horizontally placing two cameras with the same model, wherein the two cameras with the same model comprise a left camera and a right camera; determining a base line and an included angle between an optical axis of the camera and the base line; the baseline is a connection line between the main node of the left camera and the main node of the right camera;
step B: reflecting the target object into the camera field of view using a plane mirror or an equivalent optical system; the plane mirror can rotate and translate;
step C: calibrating with the checkerboard method to obtain the intrinsic and extrinsic parameters of the two cameras, and correcting and preprocessing the original left and right images captured by the two cameras; selecting positions reachable by the plane mirror through rotation or translation, and repeating the camera extrinsic-parameter correction;
step D: at an initial position, obtaining a pair of initial left and right images of the target object through the mirror surface of the plane mirror; obtaining matching points between the initial left and right images by any image matching algorithm under the epipolar geometric constraint; and calculating, from the matching points, the spatial coordinates of the corresponding points on the target object;
step E: making the plane mirror rotate, translate, or perform a combined rotation-translation motion, and acquiring left and right images during the motion;
step F: from the matching points calculated in step D, computing the corresponding points, or the areas adjacent to those points, on the newly acquired left and right images; judging whether each pair of corresponding matching points between the left and right images satisfies the point-matching condition between the two images; if so, retaining the matching points; if not, eliminating them as mismatched or inaccurate points;
step G: taking the remaining matching points as correctly matched points, and extending the results at these matching points by plane interpolation or fitting to complete the three-dimensional reconstruction.
2. The binocular three-dimensional reconstruction method according to claim 1, wherein the step B specifically comprises:
selecting a plane mirror or an equivalent optical system so that, through the reflection and refraction of the optical system, the image of the target object enters the two cameras in a fixed relative geometric relationship;
driving the optical system to move by means of the rotation-translation control system, so that controllable geometric relationships exist between the left and right images sampled by the two cameras, and between the two cameras and the target object; the motion includes translation and rotation.
3. The binocular three-dimensional reconstruction method according to claim 1, wherein the step D specifically includes:
obtaining a pair of initial left and right images of the target object through the mirror surface of the plane mirror at an initial position;
determining a conversion relation between pixel coordinates and angle coordinates of the initial left and right images; the conversion relation comprises a first conversion relation, a second conversion relation, a third conversion relation and a fourth conversion relation;
converting the pixel coordinates of the initial left and right images into angle coordinates according to the conversion relation; the initial left and right images comprise an initial left image and an initial right image;
determining the feature vectors of the gray-scale images of the initial left and right images in angular coordinates;
determining the weighted Euclidean distance of corresponding points on the initial left image and the initial right image according to the feature vector;
determining matching points between the initial left image and the initial right image according to the weighted Euclidean distance and the epipolar geometric constraint;
and calculating the space coordinates of the corresponding points on the target object from the matching points.
4. The binocular three-dimensional reconstruction method according to claim 3, wherein the converting the pixel coordinates of the initial left and right images into angular coordinates according to the conversion relationship specifically comprises:
according to a first conversion relation β_L = f_Lβ(X_L, Y_L, α_L, α_R, l_RL, f, A) and a second conversion relation χ_L = f_Lχ(X_L, Y_L, α_L, α_R, l_RL, f, A), converting the pixel coordinates of the initial left image into angular coordinates, wherein (β_L, χ_L) are the angular coordinates of the initial left image; (X_L, Y_L) are the pixel coordinates of the initial left image; α_L is the angle between the optical axis of the left camera and the baseline; α_R is the angle between the optical axis of the right camera and the baseline; l_RL is the length of the baseline; f is the equivalent focal length of the cameras; A is the equivalent magnification of the cameras; f_Lβ is the first conversion relation between the left camera's image pixel coordinates and angular coordinates; f_Lχ is the second conversion relation between the left camera's image pixel coordinates and angular coordinates;
according to a third conversion relation β_R = f_Rβ(X_R, Y_R, α_L, α_R, l_RL, f, A) and a fourth conversion relation χ_R = f_Rχ(X_R, Y_R, α_L, α_R, l_RL, f, A), converting the pixel coordinates of the initial right image into angular coordinates, wherein (β_R, χ_R) are the angular coordinates of the initial right image; (X_R, Y_R) are the pixel coordinates of the initial right image; f_Rβ is the third conversion relation between the right camera's image pixel coordinates and angular coordinates; and f_Rχ is the fourth conversion relation between the right camera's image pixel coordinates and angular coordinates.
5. The binocular three-dimensional reconstruction method according to claim 3, wherein the determining the weighted Euclidean distance of the corresponding point on the initial left and right images according to the feature vector specifically comprises:
according to the feature vectors, determining the weighted Euclidean distance of corresponding points on the initial left and right images by the weighted Euclidean distance formula
D_LR = sqrt( Σ_{i=1}^{n} w_i (T_L,i - T_R,i)² )
wherein D_LR is the weighted Euclidean distance between corresponding points on the initial left and right images; w_i is the weighting coefficient; T_L,i is a feature vector component of the gray-scale image I(β_L, χ_L) in angular coordinates; T_R,i is a feature vector component of the gray-scale image I(β_R, χ_R) in angular coordinates; and n is the total number of feature vector components.
6. The binocular three-dimensional reconstruction method according to claim 4, wherein the determining of matching points between the initial left and right images according to the weighted Euclidean distance and the epipolar geometric constraint specifically comprises:
when the weighted Euclidean distance of corresponding points on the initial left and right images is smaller than a preset threshold, and β_L and β_R satisfy the epipolar geometric constraint β_L = β_R, determining the corresponding points on the initial left and right images to be matching points between the initial left and right images; wherein β is the angle between a first plane, in which the baseline and the optical axes of the left and right cameras lie, and a second plane, in which the lines connecting one point in the space of the target object to the left camera and to the right camera lie, the one point in the space of the target object not lying in the first plane.
7. The binocular three-dimensional reconstruction method according to claim 1, wherein the step E specifically includes:
making the plane mirror rotate, translate, or perform a combined rotation-translation motion, the motion trajectory of the plane mirror ensuring that each subsequently acquired pair of left and right images has an epipolar geometric relation clearly different from that of the initially acquired left and right images;
acquiring left and right images during the motion of the plane mirror; once a pair of left and right images has been acquired, choosing the next motion path of the plane mirror so that, for most feature points, the image direction associated with the epipolar geometric constraint becomes the direction pre-judged to show the greatest difference in image results among the previously acquired left and right images.
8. The binocular three-dimensional reconstruction method according to claim 6, wherein the step F specifically comprises:
determining a new coordinate obtained by the matching point moving along with the target object according to the matching point between the initial left image and the initial right image and the mirror surface motion track of the plane mirror;
determining, from the new coordinates, new gray-scale images of the newly acquired left and right images in angular coordinates;
determining a new feature vector of the new grayscale image;
determining new weighted Euclidean distances of corresponding points on the left image and the right image according to the new feature vectors;
and retaining the corresponding points whose new weighted Euclidean distance is smaller than a preset threshold, and discarding the corresponding points whose new weighted Euclidean distance is greater than or equal to the preset threshold.
CN201811580453.0A 2018-12-24 2018-12-24 Binocular three-dimensional reconstruction method based on jitter Active CN109636903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811580453.0A CN109636903B (en) 2018-12-24 2018-12-24 Binocular three-dimensional reconstruction method based on jitter

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811580453.0A CN109636903B (en) 2018-12-24 2018-12-24 Binocular three-dimensional reconstruction method based on jitter

Publications (2)

Publication Number Publication Date
CN109636903A CN109636903A (en) 2019-04-16
CN109636903B true CN109636903B (en) 2020-09-15

Family

ID=66076748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811580453.0A Active CN109636903B (en) 2018-12-24 2018-12-24 Binocular three-dimensional reconstruction method based on jitter

Country Status (1)

Country Link
CN (1) CN109636903B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899305A (en) * 2020-07-08 2020-11-06 深圳市瑞立视多媒体科技有限公司 Camera automatic calibration optimization method and related system and equipment
CN114608520B (en) * 2021-04-29 2023-06-02 北京石头创新科技有限公司 Ranging method, ranging device, robot and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7633048B2 (en) * 2007-04-19 2009-12-15 Simon John Doran Fast laser scanning optical CT apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103337094A (en) * 2013-06-14 2013-10-02 西安工业大学 Method for realizing three-dimensional reconstruction of movement by using binocular camera
CN104240294A (en) * 2014-09-28 2014-12-24 华南理工大学 Three-dimensional reconstruction method on basis of binocular single vision field
CN105654547A (en) * 2015-12-23 2016-06-08 中国科学院自动化研究所 Three-dimensional reconstruction method
CN105894574A (en) * 2016-03-30 2016-08-24 清华大学深圳研究生院 Binocular three-dimensional reconstruction method
CN107358631A (en) * 2017-06-27 2017-11-17 大连理工大学 A kind of binocular vision method for reconstructing for taking into account three-dimensional distortion
CN109059873A (en) * 2018-06-08 2018-12-21 上海大学 Underwater 3 D reconstructing device and method based on light field multilayer refraction model

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
3-D reconstruction using mirror images based on a plane symmetry recovering method; Mitsumoto H., Tamura S., Okazaki K., et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 1992; pp. 941-946 *
A 3D Reconstruction Method Based on Binocular View Geometry; Liu Z. Y., Wang G. Q., Wang D. P.; Applied Mechanics and Materials; 2010; pp. 299-303 *
Design and Implementation of a Three-Dimensional Reconstruction System Based on Monocular Vision (基于单目视觉三维重建系统的设计与实现); 吕立, 姚拓中, et al.; Computer Engineering (计算机工程); 2018-12-15; vol. 44, no. 12 *
Three-Dimensional Reconstruction Based on a Binocular Single Vision Field (基于双目单视面的三维重建); 王珊, 徐晓; Acta Optica Sinica (光学学报); 2017-05; vol. 37, no. 5 *
Single-Camera Three-Dimensional Reconstruction Based on Mirror Geometric Constraints (基于镜像几何约束的单摄像机三维重构); 胡春海, 刘斌; Chinese Journal of Lasers (中国激光); 2010-10; vol. 37, no. 10 *

Also Published As

Publication number Publication date
CN109636903A (en) 2019-04-16

Similar Documents

Publication Publication Date Title
JP4230525B2 (en) Three-dimensional shape measuring method and apparatus
US8593524B2 (en) Calibrating a camera system
CN112102458A (en) Single-lens three-dimensional image reconstruction method based on laser radar point cloud data assistance
CN104484648A (en) Variable-viewing angle obstacle detection method for robot based on outline recognition
CN109522935A (en) The method that the calibration result of a kind of pair of two CCD camera measure system is evaluated
CN104677277B (en) A kind of method and system for measuring object geometric attribute or distance
CN111879235A (en) Three-dimensional scanning detection method and system for bent pipe and computer equipment
CN114998499A (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
CN109636903B (en) Binocular three-dimensional reconstruction method based on jitter
Strelow et al. Precise omnidirectional camera calibration
CN111879354A (en) Unmanned aerial vehicle measurement system that becomes more meticulous
CN113724337B (en) Camera dynamic external parameter calibration method and device without depending on tripod head angle
Zeller et al. From the calibration of a light-field camera to direct plenoptic odometry
WO2020199439A1 (en) Single- and dual-camera hybrid measurement-based three-dimensional point cloud computing method
CN104794717A (en) Binocular vision system based depth information comparison method
CN116295113A (en) Polarization three-dimensional imaging method integrating fringe projection
CN114219866A (en) Binocular structured light three-dimensional reconstruction method, reconstruction system and reconstruction equipment
Nirmal et al. Homing with stereovision
CN108921936A (en) A kind of underwater laser grating matching and stereo reconstruction method based on ligh field model
WO2019087253A1 (en) Stereo camera calibration method
CN116804537A (en) Binocular range finding system and method
Zhang et al. An overlap-free calibration method for LiDAR-camera platforms based on environmental perception
CN114993207B (en) Three-dimensional reconstruction method based on binocular measurement system
CN113902846B (en) Indoor three-dimensional modeling method based on monocular depth camera and mileage sensor
WO2022118513A1 (en) Position/orientation calculation device, position/orientation calculation method, and surveying device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant