CN112950527B - Stereo matching morphology measurement method based on limited geometric association constraint - Google Patents


Info

Publication number
CN112950527B
Authority
CN
China
Prior art keywords
camera
image
equation
matrix
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201911166459.8A
Other languages
Chinese (zh)
Other versions
CN112950527A (en)
Inventor
霍炬
张贵阳
杨明
薛牧遥
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN201911166459.8A
Publication of CN112950527A
Application granted
Publication of CN112950527B
Legal status: Expired - Fee Related

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00: Image analysis
            • G06T 7/0002: Inspection of images, e.g. flaw detection
            • G06T 7/50: Depth or shape recovery
              • G06T 7/55: Depth or shape recovery from multiple images
            • G06T 7/70: Determining position or orientation of objects or cameras
              • G06T 7/73: using feature-based methods
                • G06T 7/74: involving reference images or patches
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F 18/00: Pattern recognition
            • G06F 18/20: Analysing
              • G06F 18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a stereo matching morphology measurement method based on limited geometric association constraint. The method introduces limited geometric association constraint conditions among the stereo cameras from a stereo matching algorithm, can bind a plurality of cameras into one camera, establishes an internal and external parameter collinearity error equation of a plurality of cameras, can complete information processing of a plurality of image pairs in each iteration solving process, effectively reduces the dimensionality of a normalization matrix in the iteration process, and improves the efficiency and the precision of solving the internal and external parameters of the cameras. And then, carrying out stereo matching on the feature points, bringing the obtained limited geometric association constraint into a matching related objective function, and limiting the selected area of the feature points in the image formed under the corresponding camera before searching, thereby effectively reducing the sub-pixel searching range, improving the searching efficiency, ensuring the sub-pixel stereo matching precision and further improving the precision and stability of the visual three-dimensional deformation measurement.

Description

Stereo matching morphology measurement method based on limited geometric association constraint
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a three-dimensional matching morphology measurement method based on limited geometric association constraint, which realizes high-precision measurement of target morphology and deformation by a non-contact measurement mode of stereoscopic vision.
Background
The deformation measurement is widely applied to the fields of material tensile property test, aircraft wing load test, medical image identification, rigid body stress test and the like. In the current vision measurement method for measuring the deformation of a test piece by adopting a stereo camera, two-dimensional matching needs to be carried out on images before and after deformation. And then, acquiring internal and external parameters of the stereo camera, starting image acquisition after the internal and external parameters of the camera are calibrated, and carrying out corresponding matching on the same-name points in the left and right images of the stereo camera by using a stereo matching method. That is, in the whole deformation measurement process, in addition to performing two-dimensional matching on the images before and after deformation by using the correlation function, images acquired by the left and right cameras need to be subjected to stereo matching, so that the spatial three-dimensional coordinate information of the measured point is obtained.
However, in the existing vision measurement technology, most of the vision measurement technologies still use the correlation function as the criterion for judging the similarity between the two sub-regions of the left camera and the right camera to find the corresponding image point when the maximum correlation coefficient is obtained. Such matching methods can present two significant problems: (1) when the stereo camera collects images, due to the existence of parallax, imaging of a test piece in the left camera and the right camera can generate certain distortion, and then a corresponding projection point on a right reference image obtained only through correlation operation and sub-pixel positioning can have larger deviation; (2) the whole area for searching the sub-area similarity is large, the whole target image is searched globally in the stereo matching process, a large amount of operation time is consumed, and meanwhile the matching result cannot be effectively guaranteed. In addition, although some technologies consider that the imaging of the same spatial point on the stereo camera satisfies a certain corresponding relationship, the calculation of the corresponding relationship matrix has a large deviation, so that the stereo matching cannot be guaranteed to achieve a high precision.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a stereo matching topography measuring method based on limited geometric association constraint.
The invention is realized by the following technical scheme, and provides a stereo matching morphology measurement method based on limited geometric association constraint, which comprises the following steps:
step 1, each camera in a vision measurement system acquires images of a tested piece attached with speckles, numbers the images acquired by each camera from different angles, and preprocesses the numbered images;
step 2, performing feature point two-dimensional matching on the preprocessed images: acquiring a reference image before deformation and a target image after deformation of the tested piece through the left camera, performing the two-dimensional matching operation through a correlation function, and performing the same operation on the right camera;
step 3, solving limited geometric association constraints among the stereo cameras according to a vision measurement system, introducing the limited geometric association constraints into a collinearity error equation, and realizing the optimization solution of internal and external parameters among the multiple cameras;
step 4, carrying out stereo matching on image feature points under different cameras according to the internal and external parameter optimization solving results among the multiple cameras;
and 5, carrying out deformation field fitting according to the stereo matching result, and further finishing the calculation of the deformation.
Further, the step 3 specifically includes:
on the basis of the stereoscopic vision imaging model equation, considering the distortion deviation of the image point coordinates, the collinear equation of the camera optical center, the image point and the space point is constructed as follows:
$$
\begin{cases}
u = u_0 + f_u\,\dfrac{r_1 X + r_2 Y + r_3 Z + t_x}{r_7 X + r_8 Y + r_9 Z + t_z} + \Delta x\\[4pt]
v = v_0 + f_v\,\dfrac{r_4 X + r_5 Y + r_6 Z + t_y}{r_7 X + r_8 Y + r_9 Z + t_z} + \Delta y
\end{cases}\tag{1}
$$

wherein the distortion terms follow the radial-tangential model

$$
\begin{cases}
\Delta x = x\,(k_1 r^2 + k_2 r^4) + 2\rho_1 x y + \rho_2\,(r^2 + 2x^2)\\
\Delta y = y\,(k_1 r^2 + k_2 r^4) + \rho_1\,(r^2 + 2y^2) + 2\rho_2 x y
\end{cases},\qquad r^2 = x^2 + y^2
$$

where $u, v$ are the image pixel coordinates; $u_0, v_0$ are the principal point coordinates; $f_u, f_v$ are the equivalent focal lengths in the $u$ and $v$ directions; $(X, Y, Z)$ are the spatial point coordinates; $R = (r_1, r_2, r_3;\ r_4, r_5, r_6;\ r_7, r_8, r_9)$ and $T = (t_x, t_y, t_z)^T$ are the rotation matrix and translation vector, respectively; $\Delta x$ and $\Delta y$ are the image deviations caused by lens distortion; $(x, y)$ are the physical image coordinates; $k_1$ and $k_2$ are the radial distortion coefficients; $\rho_1$ and $\rho_2$ are the tangential distortion coefficients;
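As a concrete illustration of this imaging model, the sketch below projects a spatial point through the rotation, translation, radial-tangential distortion and intrinsic parameters of equation (1); all numeric camera parameters are made-up values for illustration, not calibration results from the patent:

```python
import numpy as np

def project_point(Xw, R, T, fu, fv, u0, v0, k1, k2, p1, p2):
    """Project a 3-D point to pixel coordinates using the collinearity
    equation with radial (k1, k2) and tangential (p1, p2) distortion."""
    Xc = R @ Xw + T                      # camera-frame coordinates
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]  # normalized image coordinates
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 ** 2
    dx = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    dy = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    u = u0 + fu * (x + dx)
    v = v0 + fv * (y + dy)
    return u, v

if __name__ == "__main__":
    R = np.eye(3)                        # assumed pose: no rotation
    T = np.array([0.0, 0.0, 5.0])        # 5 units in front of the camera
    u, v = project_point(np.array([0.1, -0.2, 0.0]), R, T,
                         fu=1200.0, fv=1200.0, u0=640.0, v0=512.0,
                         k1=-0.1, k2=0.01, p1=1e-4, p2=-1e-4)
    print(round(u, 2), round(v, 2))
```

Note that for a point on the optical axis the normalized coordinates and hence both distortion terms vanish, so it projects exactly to the principal point $(u_0, v_0)$.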
then, through Taylor series expansion, taking the first-order partial differential of the equation (1) about the internal and external parameters of the camera to obtain a linear expansion equation of the co-linear equation of the left camera:
Figure BDA0002287578970000023
wherein, DeltaX DeltaY DeltaZ is the coordinate deviation value of the space point in the X, Y and Z directions respectively,
Figure BDA0002287578970000024
representing the partial derivatives of the image points with respect to the intrinsic and extrinsic parameters of the left camera; omegal
Figure BDA0002287578970000025
κlAre all euler angles under the left camera coordinate system,
Figure BDA0002287578970000031
and
Figure BDA0002287578970000032
both represent the correction amount of the internal and external parameters of the camera;
Figure BDA0002287578970000033
representing the partial derivative of the image point with respect to the spatial point coordinates; subscript l denotes parameters under the left camera; (v)u,vv) The partial derivative with respect to the control point approaches zero;
establishing the collinearity equation of the right camera likewise, its linear expansion is:

$$
\begin{bmatrix} v_u \\ v_v \end{bmatrix}_{r}
= \frac{\partial(u,v)}{\partial p_r}\,\Delta p_r
+ \frac{\partial(u,v)}{\partial(X,Y,Z)}
\begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix}\tag{3}
$$
combining (2) and (3), a normal equation containing $m$ images and $n$ spatial points is established:

$$
\begin{bmatrix} \Pi & \Psi \\ \Psi^{T} & \Omega \end{bmatrix}
\begin{bmatrix} \Delta p \\ \Delta s \end{bmatrix}
= \begin{bmatrix} \epsilon_p \\ \epsilon_s \end{bmatrix}\tag{4}
$$

wherein the parameter $\Pi_m$ represents the normalization block of image $m$; $\Psi_{nm}$ represents the normalization block of spatial point $n$ on image $m$; $\Omega_n$ represents the normalization block of spatial point $n$; $\Delta p$ represents the deviations of the stereo-camera intrinsic and extrinsic parameters; $\Delta s$ represents the spatial point coordinate deviations;
the normal equation is then solved; letting

$$\Lambda = \Pi - \Psi\,\Omega^{-1}\Psi^{T}$$

equation (4) is simplified as follows:

$$\Lambda\,\Delta p = \epsilon_p - \Psi\,\Omega^{-1}\epsilon_s\tag{5}$$

and the iterative error equation of the spatial points is obtained:

$$\Delta s^{(q)} = \Omega^{-1}\left(\epsilon_s - \Psi^{T}\,\Delta p^{(q)}\right)\tag{6}$$

wherein the index $q$ represents the iteration count; the globally optimal solution of the camera intrinsic and extrinsic parameters is obtained according to error equation (6).
Further, the step 4 specifically includes:
order to
Figure BDA0002287578970000042
Wherein tau isxyzIs a translation vector Tl-rThe position relationship matrix between the stereo cameras is expressed as:
Figure BDA0002287578970000043
wherein, Kl,KrIs an internal parameter matrix of a camera in a vision measurement system; rl-rRepresenting a rotation matrix between the cameras;
then the homogeneous coordinates of the spatial points imaged under the left camera are xil=[ul,vl,1]TThe coordinates of the right camera image are xir=[ur,vr,1]TAnd then the two satisfy:
Figure BDA0002287578970000044
the accuracy of the relation matrix F is ensured by establishing and accurately solving the equation (6), and as can be seen from equation (8), if the image coordinate of a certain camera is known, the image coordinate range of a space point in another camera can be limited only by accurately acquiring the position relation matrix of the other camera relative to the camera;
according to the epipolar geometric constraint principle, the constraint equation of the constrained region is:

$$y = \varphi_0 x + \beta_0\tag{9}$$

wherein $\varphi_0$ is the slope of the linear equation determined by the constraint relation and $\beta_0$ is the intercept of the straight line on the vertical coordinate axis;

the parameters in equation (9) are developed from equation (8), and the search range in the vicinity of the line is then represented as:

$$Y(x,\beta) = \varphi_0 x + \beta,\qquad \beta_{\min}\le\beta\le\beta_{\max}\tag{10}$$

wherein $\beta$ represents the range of intercepts of the constraint linear equation on the vertical coordinate axis;
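Given $F$, the epipolar line $F\xi_l$ of a left-image point supplies the slope and intercept of equation (9), and a band of intercepts around it bounds the search of equation (10). A minimal sketch, where the band half-width and the rectified-style $F$ are assumptions chosen for illustration:

```python
import numpy as np

def epipolar_band(F, xl, half_width=3.0):
    """Slope and intercept (phi0, beta0) of the right-image epipolar line
    l = F @ xl, written as a*x + b*y + c = 0, plus an intercept band
    [beta0 - half_width, beta0 + half_width] for the sub-pixel search."""
    a, b, c = F @ xl
    phi0 = -a / b                        # line rewritten as y = phi0*x + beta0
    beta0 = -c / b
    return phi0, (beta0 - half_width, beta0 + half_width)

if __name__ == "__main__":
    # Illustrative rectified rig: epipolar lines are horizontal (phi0 = 0).
    F = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, -1.0],
                  [0.0, 1.0, 0.0]])
    xl = np.array([100.0, 250.0, 1.0])
    phi0, (bmin, bmax) = epipolar_band(F, xl)
    print(phi0, bmin, bmax)
```

Restricting candidate matches to this narrow band, instead of searching the whole target image, is what shrinks the sub-pixel search region described in the text.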
the stereo matching correlation function based on the limited geometric association constraint is:

$$
C=\frac{\sum_{(x,y)\in Win}\big[IniP(x,y)-IniP_m\big]\,\big[DefP(x',\varphi x'+\beta)-DefP_m\big]}
{\sqrt{\sum_{(x,y)\in Win}\big[IniP(x,y)-IniP_m\big]^{2}}\,\sqrt{\sum_{(x,y)\in Win}\big[DefP(x',\varphi x'+\beta)-DefP_m\big]^{2}}}\tag{11}
$$

in the formula, $\varphi$ is the slope of the linear equation under the limited geometric association constraint, and $Win$ represents the size of the matching window; $IniP$ and $DefP$ represent the images to be matched, and $IniP_m$ and $DefP_m$ represent the mean gray level of the pixels in the matching window of the corresponding image;
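A zero-normalized correlation of this general shape can be evaluated over a matching window as sketched below; this is a hedged reconstruction, since the patent's exact correlation function is reproduced only as an image, and the speckle windows are randomly generated stand-ins:

```python
import numpy as np

def zncc(ini_win, def_win):
    """Zero-normalized cross-correlation of two equally sized windows.
    Returns 1.0 for patches identical up to an affine gray-level change."""
    a = ini_win - ini_win.mean()
    b = def_win - def_win.mean()
    denom = np.sqrt((a * a).sum()) * np.sqrt((b * b).sum())
    return float((a * b).sum() / denom)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    win = rng.random((21, 21))           # speckle-like reference window
    same = 2.0 * win + 5.0               # brightness/contrast change only
    print(round(zncc(win, same), 6))
```

Subtracting the window mean makes the score invariant to brightness offset, and the denominator normalizes out contrast, which is why this family of criteria is preferred for speckle matching.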
the following solves the sub-pixel matching search by making the parameter matrix P ═ u, ux,uy,v,vx,vy)TWherein u, ux,uy,v,vx,vyRepresenting the displacement in the horizontal direction and the vertical direction and the partial derivatives along the two directions, wherein the parameters in the P matrix are variable parameters to be solved in the matching search process, and the first derivative is solved for the formula (11):
Figure BDA0002287578970000052
wherein x is (x)i,yi,1)TWhere Δ P denotes an increment matrix of the parameter matrix P, Δ IniP denotes an increment of the parameter matrix P in the initial image, Δ DefP denotes an increment of the parameter matrix P in the deformed image, and δ is (x)i-x’i,yi-φx’i-β,1)TS (delta; P) is the displacement resultant;
then S (delta; delta P) is the increment (delta u ) with respect to Px,Δuy,Δv,Δvx,Δvy)TThe matrix of (d) is represented as:
Figure BDA0002287578970000053
expanding $IniP\big(x+S(\delta;\Delta P)\big)$ in a Taylor series at $\Delta P=0$ and taking the first order gives

$$IniP\big(x+S(\delta;\Delta P)\big)\approx IniP(x)+\nabla IniP\,\frac{\partial S}{\partial P}\,\Delta P\tag{14}$$

wherein $\nabla IniP$ represents the differential operator of the parameter matrix $P$ increment in the initial image;

combining equations (12)-(14) and taking $C(\Delta P)=0$ as the ideal iteration condition, the solution is:

$$\Delta P=-H^{-1}\sum_{i=1}^{M}\left[\nabla IniP\,\frac{\partial S}{\partial P}\right]^{T}\Big[IniP(x)-DefP\big(x+S(\delta;P)\big)\Big]\tag{15}$$

wherein $H$ is the Hessian matrix of $S(\delta;\Delta P)$ with respect to $\Delta P$ and $M$ is the number of points participating in the operation within the matching window:

$$H=\sum_{i=1}^{M}\left[\nabla IniP\,\frac{\partial S}{\partial P}\right]^{T}\left[\nabla IniP\,\frac{\partial S}{\partial P}\right]\tag{16}$$
the resultant displacement iterative update matrix is as follows:
Figure BDA0002287578970000061
wherein the convergence condition of the iteration is as follows:
Figure BDA0002287578970000062
in the formula, peIs the amount of iterative parameter variation, pmIs the convergence threshold.
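The update algebra of the first-order warp, the inverse-compositional composition of equation (17), and the convergence test of equation (18) can be sketched as follows; image sampling and the Hessian solve are omitted, the warp-matrix layout is the standard first-order form and therefore an assumption, and all numeric values are made up:

```python
import numpy as np

def warp_matrix(P):
    """First-order warp built from P = (u, ux, uy, v, vx, vy)^T."""
    u, ux, uy, v, vx, vy = P
    return np.array([[1.0 + ux, uy, u],
                     [vx, 1.0 + vy, v],
                     [0.0, 0.0, 1.0]])

def ic_update(P, dP):
    """Inverse-compositional update S(P) <- S(P) @ S(dP)^-1, then read
    the six parameters back out of the composed warp matrix."""
    S = warp_matrix(P) @ np.linalg.inv(warp_matrix(dP))
    return np.array([S[0, 2], S[0, 0] - 1.0, S[0, 1],
                     S[1, 2], S[1, 0], S[1, 1] - 1.0])

def converged(dP, p_m=1e-4):
    """Convergence test: norm of the parameter increment falls below p_m."""
    return float(np.linalg.norm(dP)) <= p_m

if __name__ == "__main__":
    P = np.array([2.0, 0.01, 0.0, -1.0, 0.0, 0.02])
    P_new = ic_update(P, P)             # updating by itself cancels the warp
    print(np.allclose(P_new, np.zeros(6)))
```

Because the increment is composed on the reference side and inverted, the expensive gradient and Hessian terms stay fixed across iterations, which is the efficiency argument made for this class of solvers.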
The invention introduces limited geometric association constraint conditions among the stereo cameras into the stereo matching algorithm and establishes a collinearity error equation for the intrinsic and extrinsic parameters of multiple cameras. Each iteration of the solution can process the information of multiple image pairs, which effectively reduces the dimensionality of the normalization matrix during iteration, is equivalent to binding the multiple cameras into one camera, and allows the relative pose relationship among the stereo cameras to be acquired with high precision and high speed. After the spatial collinearity error is solved, the stereo matching mapping equation is obtained; the limited geometric association constraint is then brought into the matching objective function, so that the region of each feature point in the image formed under the corresponding camera is limited before searching, which effectively reduces the sub-pixel search region, improves the search efficiency, and ensures the precision and reliability of sub-pixel stereo matching.
Drawings
FIG. 1 is a diagram of a vision-based distortion measurement system architecture; in the figure, 1 is a world coordinate system, 2 is a test piece to be measured, and 3 is a camera system;
FIG. 2 is a schematic diagram of a target used for camera parameter determination;
FIG. 3 is a schematic diagram of the target circle center extraction result;
FIG. 4 is a three-dimensional deformation flow chart; in the figure, 1 is the three-dimensional point before deformation; 2 is the deformed three-dimensional point; 3 is the left camera reference image; 4 is the left camera deformed image; 5 is the right camera reference image; 6 is the right camera deformed image;
FIG. 5 is a schematic diagram of two-dimensional matching of images before and after deformation of a test piece; wherein (a) is a left camera two-dimensional match and (b) is a right camera two-dimensional match;
FIG. 6 is a comparison graph of sub-pixel matching accuracy;
FIG. 7 is a comparison graph of sub-pixel search efficiency;
FIG. 8 is a diagram showing the effect of the method of the present invention before deformation of a test piece;
FIG. 9 is a diagram showing the effect of displacement field measurement after deformation of a test piece;
FIG. 10 is a graph showing the effect of strain field measurement after deformation of a test piece.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With reference to fig. 1, the present invention provides a stereo matching topography measurement method based on constrained geometric association constraint, the method includes the following steps:
step 1, each camera in a vision measurement system acquires images of a tested piece attached with speckles, and numbers the images acquired by each camera from different angles, wherein the numbered images need to be preprocessed due to the influence of factors such as noise, environmental interference and the like; the number of the cameras can be determined according to the size of the test piece, and the preprocessing comprises filtering, denoising, image enhancement and the like;
step 2, performing feature point two-dimensional matching on the preprocessed images: as shown in fig. 4, a reference image before deformation and a target image after deformation of the tested piece are acquired through the left camera, a two-dimensional matching operation is then performed through a normalized correlation function, and the same operation is performed on the right camera; the matching effect of the sampling points is shown in fig. 5.
Step 3, building the hardware architecture of the vision measurement system shown in fig. 1. Once the cameras are fixed on the camera frame, the relative positions of the cameras do not need to be adjusted during the subsequent deformation measurement. The hardware target shown in fig. 2 is adopted as the coordinate input for the parameter solution of the stereo cameras, so the circle-center coordinates on the target need to be extracted; in the embodiment of the invention, the circle-center extraction result is shown in fig. 3. The limited geometric association constraint between the stereo cameras is solved according to the vision measurement system and introduced into the collinearity error equation, realizing the optimized solution of the intrinsic and extrinsic parameters among the multiple cameras. The intrinsic parameters include the focal length, principal point and distortion coefficients; the extrinsic parameters include the rotation and translation matrices between the two cameras. The solved parameters can effectively reduce the sub-pixel search range in stereo matching, improve the matching search efficiency and ensure the reliability of the matching precision;
step 4, carrying out stereo matching on image feature points under different cameras according to the internal and external parameter optimization solving results among the multiple cameras;
and 5, performing deformation field fitting according to the stereo matching result: solving the strain parameter matrix $P=(u, u_x, u_y, v, v_x, v_y)^{T}$, wherein the parameters in the $P$ matrix are all variables to be solved in the matching search process, and then fitting the deformation field data, thereby completing the calculation of the deformation.
The step 3 specifically comprises the following steps:
on the basis of the stereoscopic vision imaging model equation, considering the distortion deviation of the image point coordinates, the collinear equation of the camera optical center, the image point and the space point is constructed as follows:
$$
\begin{cases}
u = u_0 + f_u\,\dfrac{r_1 X + r_2 Y + r_3 Z + t_x}{r_7 X + r_8 Y + r_9 Z + t_z} + \Delta x\\[4pt]
v = v_0 + f_v\,\dfrac{r_4 X + r_5 Y + r_6 Z + t_y}{r_7 X + r_8 Y + r_9 Z + t_z} + \Delta y
\end{cases}\tag{1}
$$

wherein the distortion terms follow the radial-tangential model

$$
\begin{cases}
\Delta x = x\,(k_1 r^2 + k_2 r^4) + 2\rho_1 x y + \rho_2\,(r^2 + 2x^2)\\
\Delta y = y\,(k_1 r^2 + k_2 r^4) + \rho_1\,(r^2 + 2y^2) + 2\rho_2 x y
\end{cases},\qquad r^2 = x^2 + y^2
$$

where $u, v$ are the image pixel coordinates; $u_0, v_0$ are the principal point coordinates; $f_u, f_v$ are the equivalent focal lengths in the $u$ and $v$ directions; $(X, Y, Z)$ are the spatial point coordinates; $R = (r_1, r_2, r_3;\ r_4, r_5, r_6;\ r_7, r_8, r_9)$ and $T = (t_x, t_y, t_z)^T$ are the rotation matrix and translation vector, respectively; $\Delta x$ and $\Delta y$ are the image deviations caused by lens distortion; $(x, y)$ are the physical image coordinates; $k_1$ and $k_2$ are the radial distortion coefficients; $\rho_1$ and $\rho_2$ are the tangential distortion coefficients;
then, expanding equation (1) in a Taylor series and taking the first-order partial differentials with respect to the camera intrinsic and extrinsic parameters yields the linearized collinearity equation of the left camera:

$$
\begin{bmatrix} v_u \\ v_v \end{bmatrix}_{l}
= \frac{\partial(u,v)}{\partial p_l}\,\Delta p_l
+ \frac{\partial(u,v)}{\partial(X,Y,Z)}
\begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix}\tag{2}
$$

wherein $\Delta X, \Delta Y, \Delta Z$ are the coordinate deviations of the spatial point in the $X$, $Y$ and $Z$ directions, respectively; $\partial(u,v)/\partial p_l$ denotes the partial derivatives of the image point with respect to the intrinsic and extrinsic parameters of the left camera; $\omega_l, \varphi_l, \kappa_l$ are the Euler angles in the left camera coordinate system; $\Delta p_l$ collects the corrections of the camera intrinsic and extrinsic parameters; $\partial(u,v)/\partial(X,Y,Z)$ denotes the partial derivative of the image point with respect to the spatial point coordinates; the subscript $l$ denotes parameters of the left camera; the residuals $(v_u, v_v)$ with respect to the control points approach zero;
establishing the collinearity equation of the right camera likewise, its linear expansion is:

$$
\begin{bmatrix} v_u \\ v_v \end{bmatrix}_{r}
= \frac{\partial(u,v)}{\partial p_r}\,\Delta p_r
+ \frac{\partial(u,v)}{\partial(X,Y,Z)}
\begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix}\tag{3}
$$

wherein $\Delta p_r$ collects the correction values of the right camera's intrinsic and extrinsic parameters, and the meaning of each parameter in equation (3) is the same as in equation (2);
and (3) establishing a normal equation containing m images and n space points in a joint mode:
Figure BDA0002287578970000091
wherein, parameter ΠmRepresenting a normalization of image m; psinmA normalization representing a spatial point m on an image n; omeganA normalization representing a spatial point n;
Figure BDA0002287578970000092
and
Figure BDA0002287578970000093
representing the deviation of internal and external parameters of the stereo camera;
Figure BDA0002287578970000094
and
Figure BDA0002287578970000095
representing the spatial point coordinate deviation;
in this embodiment, the target in fig. 2 has 22 circles, and the left and right cameras acquire 15 images, so that the equation includes 30 images and 22 spatial points.
The normal equation is then solved; letting

$$\Lambda = \Pi - \Psi\,\Omega^{-1}\Psi^{T}$$

equation (4) is simplified as follows:

$$\Lambda\,\Delta p = \epsilon_p - \Psi\,\Omega^{-1}\epsilon_s\tag{5}$$

and the iterative error equation of the spatial points is obtained:

$$\Delta s^{(q)} = \Omega^{-1}\left(\epsilon_s - \Psi^{T}\,\Delta p^{(q)}\right)\tag{6}$$

wherein the index $q$ represents the iteration count; the globally optimal solution of the camera intrinsic and extrinsic parameters is obtained according to error equation (6), thereby obtaining the rotation matrix $R_{l\text{-}r}$ and the translation vector $T_{l\text{-}r}$ between the cameras (their numerical values for this embodiment are reproduced as images in the original).
The step 4 specifically comprises the following steps:
order to
Figure BDA0002287578970000103
Wherein τ isxyzIs a translation vector Tl-rThe position relationship matrix between the stereo cameras is expressed as:
Figure BDA0002287578970000104
wherein, Kl,KrIs an internal parameter matrix of a camera in the vision measuring system; rl-rRepresenting a rotation matrix between the cameras;
to obtain
Figure BDA0002287578970000105
Then the homogeneous coordinates of the spatial points imaged under the left camera are xil=[ul,vl,1]TThe coordinates of the right camera image are xir=[ur,vr,1]TAnd then the two satisfy:
Figure BDA0002287578970000106
then, the circle center extraction result in FIG. 3 is substituted into
Figure BDA0002287578970000107
The accuracy of the relation matrix F is ensured by establishing and accurately solving equation (6); as equation (8) shows, once the image coordinates of a spatial point in one camera are known, its image coordinate range in the other camera can be restricted, provided that the position relationship matrix between the two cameras is acquired accurately;
according to the epipolar geometric constraint principle, the constraint equation of the constrained region is:

$$y = \varphi_0 x + \beta_0\tag{9}$$

wherein $\varphi_0$ is the slope of the linear equation determined by the constraint relation and $\beta_0$ is the intercept of the straight line on the vertical coordinate axis;

the parameters in equation (9) are developed from equation (8), and the search range in the vicinity of the line is then represented as:

$$Y(x,\beta) = \varphi_0 x + \beta,\qquad \beta_{\min}\le\beta\le\beta_{\max}\tag{10}$$

wherein $\beta$ represents the range of intercepts of the constraint linear equation on the vertical coordinate axis;
the stereo matching correlation function based on the limited geometric association constraint is:

$$
C=\frac{\sum_{(x,y)\in Win}\big[IniP(x,y)-IniP_m\big]\,\big[DefP(x',\varphi x'+\beta)-DefP_m\big]}
{\sqrt{\sum_{(x,y)\in Win}\big[IniP(x,y)-IniP_m\big]^{2}}\,\sqrt{\sum_{(x,y)\in Win}\big[DefP(x',\varphi x'+\beta)-DefP_m\big]^{2}}}\tag{11}
$$

in the formula, $\varphi$ is the slope of the linear equation under the limited geometric association constraint, and $Win$ represents the size of the matching window; $IniP$ and $DefP$ represent the images to be matched, and $IniP_m$ and $DefP_m$ represent the mean gray level of the pixels in the matching window of the corresponding image;
the following solves for the sub-pixel matching search, making the parameter matrix P (u, u) equal tox,uy,v,vx,vy)TWherein u, ux,uy,v,vx,vyRepresenting the displacement in the horizontal direction and the vertical direction and the partial derivatives along the two directions, wherein the parameters in the P matrix are variable parameters to be solved in the matching search process, and the first derivative is solved for the formula (11):
Figure BDA0002287578970000112
wherein x is (x)i,yi,1)TWhere Δ P denotes an increment matrix of the parameter matrix P, Δ IniP denotes an increment of the parameter matrix P in the initial image, Δ DefP denotes an increment of the parameter matrix P in the deformed image, and δ is (x)i-x’i,yi-φx’i-β,1)TThe computation amount in the iteration process can be effectively reduced, and S (delta; P) is the displacement resultant;
then S (delta; delta P) is the increment (delta u ) with respect to Px,Δuy,Δv,Δvx,Δvy)TThe matrix of (d) is represented as:
Figure BDA0002287578970000113
expanding $IniP\big(x+S(\delta;\Delta P)\big)$ in a Taylor series at $\Delta P=0$ and taking the first order gives

$$IniP\big(x+S(\delta;\Delta P)\big)\approx IniP(x)+\nabla IniP\,\frac{\partial S}{\partial P}\,\Delta P\tag{14}$$

wherein $\nabla IniP$ represents the differential operator of the parameter matrix $P$ increment in the initial image;

combining equations (12)-(14) and taking $C(\Delta P)=0$ as the ideal iteration condition, the solution is:

$$\Delta P=-H^{-1}\sum_{i=1}^{M}\left[\nabla IniP\,\frac{\partial S}{\partial P}\right]^{T}\Big[IniP(x)-DefP\big(x+S(\delta;P)\big)\Big]\tag{15}$$

wherein $H$ is the Hessian matrix of $S(\delta;\Delta P)$ with respect to $\Delta P$ and $M$ is the number of points participating in the operation within the matching window:

$$H=\sum_{i=1}^{M}\left[\nabla IniP\,\frac{\partial S}{\partial P}\right]^{T}\left[\nabla IniP\,\frac{\partial S}{\partial P}\right]\tag{16}$$
Since the Hessian matrix must be updated continuously in the solution of the displacement and strain quantity $P=(u, u_x, u_y, v, v_x, v_y)^{T}$, changing the update from a whole region to the range near the constraint line greatly improves the operation speed. The resultant displacement is then updated iteratively by inverse composition:

$$S(\delta;P)\leftarrow S(\delta;P)\,S(\delta;\Delta P)^{-1}\tag{17}$$

wherein the convergence condition of the iteration is:

$$p_e=\big\lVert \Delta P \big\rVert \le p_m\tag{18}$$

in the formula, $p_e$ is the iterative parameter variation and $p_m$ is the convergence threshold.
In the present invention, the upper limit of $p_m$ is set to 0.0001, and the maximum number of iterations is set to 50.
From the obtained displacement and strain information, the parameters related to the deformation can be obtained; however, the test piece needs to be fitted in order to show its topography before and after deformation. The coordinates of a pair of points within the selected area may be noted as $(x_1, y_1, z_1)$ and $(x_1+dx,\ y_1+dy,\ z_1+dz)$, and the displacements of the two points after deformation as $(u_1, v_1, w_1)$ and $(u_2, v_2, w_2)$; full-field deformation fitting is performed according to the relation between the strain components and the displacement components in equation (19):

$$
\varepsilon_x=\frac{\partial u}{\partial x},\qquad
\varepsilon_y=\frac{\partial v}{\partial y},\qquad
\varepsilon_z=\frac{\partial w}{\partial z},\qquad
\gamma_{xy}=\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\tag{19}
$$
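Assuming the usual small-strain relations between the strain and displacement components (the patent's equation (19) is rendered only as an image), the in-plane strain fields can be estimated from a fitted displacement field by numerical differentiation; the uniform-gradient displacement field below is a made-up example:

```python
import numpy as np

def strain_fields(u, v, dx=1.0, dy=1.0):
    """Small-strain components from displacement fields u(x, y), v(x, y):
    eps_x = du/dx, eps_y = dv/dy, gamma_xy = du/dy + dv/dx."""
    du_dy, du_dx = np.gradient(u, dy, dx)   # axis 0 is y, axis 1 is x
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    return du_dx, dv_dy, du_dy + dv_dx

if __name__ == "__main__":
    y, x = np.mgrid[0:50, 0:50].astype(float)
    u = 0.002 * x                        # uniform 0.2% stretch along x
    v = -0.0006 * y                      # Poisson-like contraction along y
    ex, ey, gxy = strain_fields(u, v)
    print(round(ex.mean(), 6), round(ey.mean(), 6), round(gxy.mean(), 6))
```

For a linear displacement field the central differences are exact, so the recovered strain components are constant over the whole field.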
The method of the invention is used for accurately measuring the three-dimensional deformation of the test piece. Fig. 8 is an effect graph before the test piece is deformed, fig. 9 is an effect graph after the test piece is deformed, and fig. 10 is a fitting effect graph of a strain field.
The method introduces limited geometric association constraint conditions among the stereo cameras into the stereo matching algorithm and establishes a collinearity error equation for the intrinsic and extrinsic parameters of multiple cameras. Each iteration of the solution can process the information of multiple image pairs, which is equivalent to binding the multiple cameras into one camera, effectively reduces the dimension of the normalization matrix during iteration, and improves the efficiency and precision of solving the camera intrinsic and extrinsic parameters. The stereo matching mapping equation is then obtained by establishing and solving the spatial collinearity error equation. During feature point stereo matching, the region of each feature point in the image formed under the corresponding camera is limited before searching, which effectively reduces the sub-pixel search region, shortens the search time, improves the search efficiency, and ensures the sub-pixel stereo matching precision, thereby improving the precision and stability of the visual three-dimensional deformation measurement. As can be seen from fig. 6, compared with the conventional PGGM, the IC-GN based method provided by the invention achieves an obvious improvement in precision; meanwhile, as can be seen from fig. 7, compared with the IC-GN algorithm with its relatively high operation speed, the method provided by the invention still executes more efficiently, and the stronger the noise and the larger the data volume, the more obvious its advantage.
The stereo matching morphology measurement method based on limited geometric association constraints has been introduced in detail above, and a specific example has been used to explain the principle and implementation of the invention; the description of the embodiment is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, vary the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the invention.

Claims (2)

1. A stereo matching morphology measurement method based on limited geometric association constraint is characterized in that: the method comprises the following steps:
step 1, acquiring, by each camera in the vision measurement system, images of the tested piece with speckles attached, numbering the images acquired by each camera from different angles, and preprocessing the numbered images;
step 2, performing two-dimensional matching on the feature points of the preprocessed images: acquiring a reference image before deformation and a target image after deformation of the tested piece through the left camera, performing the two-dimensional matching operation through a correlation function, and performing the same operation for the right camera;
step 3, solving limited geometric association constraints among the stereo cameras according to a vision measurement system, introducing the limited geometric association constraints into a collinearity error equation, and realizing the optimal solution of internal and external parameters among the multiple cameras;
step 4, carrying out stereo matching on image feature points under different cameras according to the internal and external parameter optimization solving results among the multiple cameras;
step 5, carrying out deformation field fitting according to the stereo matching result, and further completing the calculation of the deformation;
the step 3 specifically comprises the following steps:
on the basis of the stereoscopic vision imaging model equation, and considering the distortion deviation of the image point coordinates, the collinearity equations of the camera optical center, the image point and the space point are constructed as follows:

u = u0 + fu·[(r1X + r2Y + r3Z + tx)/(r7X + r8Y + r9Z + tz) + Δx]
v = v0 + fv·[(r4X + r5Y + r6Z + ty)/(r7X + r8Y + r9Z + tz) + Δy]    (1)

with the lens distortion terms

Δx = x(k1r² + k2r⁴) + ρ1(r² + 2x²) + 2ρ2xy
Δy = y(k1r² + k2r⁴) + 2ρ1xy + ρ2(r² + 2y²),  r² = x² + y²

wherein u, v are the image pixel coordinates; u0, v0 are the principal point coordinates; fu, fv are the equivalent focal lengths in the u and v directions, respectively; (X, Y, Z) are the space point coordinates; R = (r1, r2, r3; r4, r5, r6; r7, r8, r9) and T = (tx, ty, tz)^T are the rotation matrix and translation vector, respectively; Δx and Δy are the image deviations caused by lens distortion; (x, y) are the image physical coordinates; k1 and k2 are the radial distortion coefficients; ρ1 and ρ2 are the tangential distortion coefficients;
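The collinearity equation (1) with the listed distortion coefficients can be sketched as a projection routine. All numeric intrinsic and extrinsic values below are hypothetical, and the placement of the distortion terms follows one common convention for the radial-tangential model:

```python
import numpy as np

# Hypothetical camera parameters for illustration only.
fu, fv, u0, v0 = 1200.0, 1200.0, 640.0, 512.0
k1, k2 = -0.12, 0.03               # radial distortion coefficients
p1, p2 = 1e-4, -2e-4               # tangential distortion coefficients (rho1, rho2)
R = np.eye(3)                      # rotation matrix (r1..r9)
T = np.array([0.0, 0.0, 500.0])    # translation vector (tx, ty, tz)

def project(P):
    """Project a 3-D space point through the collinearity equation (1)."""
    Xc, Yc, Zc = R @ P + T          # camera-frame coordinates
    x, y = Xc / Zc, Yc / Zc         # normalized image physical coordinates
    r2 = x * x + y * y
    # Radial + tangential distortion deviations (dx, dy).
    dx = x * (k1 * r2 + k2 * r2**2) + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    dy = y * (k1 * r2 + k2 * r2**2) + 2 * p1 * x * y + p2 * (r2 + 2 * y * y)
    u = u0 + fu * (x + dx)
    v = v0 + fv * (y + dy)
    return u, v

u, v = project(np.array([10.0, -5.0, 1000.0]))
```

A point on the optical axis projects exactly to the principal point, which is a quick sanity check on the routine.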
then, through Taylor series expansion, taking the first-order partial differentials of equation (1) with respect to the internal and external camera parameters, the linearized expansion of the collinearity equation of the left camera is obtained:

(equation (2), given as an image in the original)

wherein ΔX, ΔY, ΔZ are the coordinate deviations of the space point in the X, Y and Z directions, respectively; the coefficient matrix (given as an image) represents the partial derivatives of the image point with respect to the internal and external parameters of the left camera; ω_l, φ_l, κ_l are the Euler angles in the left camera coordinate system; the correction vectors (given as images) represent the corrections of the internal and external camera parameters; a further matrix (given as an image) represents the partial derivative of the image point with respect to the space point coordinates; the subscript l denotes parameters of the left camera; the partial derivatives (v_u, v_v) with respect to the control points approach zero;
a collinearity equation is likewise established for the right camera, with linear expansion:

(equation (3), given as an image in the original)
combining equations (2) and (3), a normal equation containing m images and n space points is established:

(equation (4), given as an image in the original)

wherein the parameter Π_m represents the normalization of image m; Ψ_nm represents the normalization of space point m on image n; Ω_n represents the regularization of space point n; the remaining terms (given as images) represent the deviations of the internal and external stereo camera parameters and the space point coordinate deviations;
the normal equation is then solved; introducing the substitution given as an image in the original, equation (4) is simplified to:

(equation (5), given as an image in the original)

and the iterative error equation for the space points is obtained:

(equation (6), given as an image in the original)

wherein the index q represents the iteration number; the globally optimal solution of the internal and external camera parameters is obtained from error equation (6).
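The normal-equation iteration behind (4)-(6) follows the classical Gauss-Newton pattern: linearize, solve JᵀJ·Δp = Jᵀe, apply the correction, and repeat until the correction is small. A minimal sketch on a toy two-parameter model (not the camera model itself) illustrates the loop:

```python
import numpy as np

# Gauss-Newton normal-equation loop mirroring the structure of (4)-(6).
# The model is a toy exponential fit standing in for the joint camera
# parameter / space point system of the patent.
def gauss_newton(x, y, p, iters=50, tol=1e-12):
    for _ in range(iters):
        a, b = p
        f = a * np.exp(b * x)                         # model prediction
        e = y - f                                     # residual vector
        J = np.column_stack([np.exp(b * x),           # df/da
                             a * x * np.exp(b * x)])  # df/db
        dp = np.linalg.solve(J.T @ J, J.T @ e)        # normal equations
        p = p + dp                                    # parameter correction
        if np.linalg.norm(dp) < tol:                  # convergence check
            break
    return p

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * x)                             # exact synthetic data
a, b = gauss_newton(x, y, np.array([1.5, 1.2]))
```

Binding several cameras into one normal equation, as the patent does, changes only the size and block structure of J; the solve-and-correct loop is the same.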
2. The method of claim 1, wherein: the step 4 specifically comprises the following steps:
order to
Figure FDA0003628049880000033
Wherein tau isxyzIs a translation vector Tl-rThe position relationship matrix between the stereo cameras is expressed as:
Figure FDA0003628049880000034
wherein, Kl,KrIs an internal parameter matrix of a camera in a vision measurement system; rl-rRepresenting a rotation matrix between the cameras;
then the homogeneous coordinates of the spatial point imaged under the left camera are xi ifl=[ul,vl,1]TThe coordinates of the right camera image are xir=[ur,vr,1]TAnd then the two satisfy:
Figure FDA0003628049880000035
the accuracy of the relation matrix F is ensured by establishing and accurately solving equation (6); as can be seen from equation (8), once the image coordinates of a point in one camera are known, the image coordinate range of the space point in the other camera can be limited, provided the position relationship matrix between the two cameras is accurately acquired;
according to the epipolar geometric constraint principle, the constraint equation of the constraint area is as follows:
y=φ0x+β0 (9)
wherein φ0 is the slope of the linear equation determined by the constraint relation, and β0 is its intercept on the vertical coordinate axis;
the parameters in equation (9) are developed from equation (8), and the search range in the vicinity of the line is then expressed as:

Y(x, β) = φ0x + β  (βmin ≤ β ≤ βmax)    (10)

wherein β spans the range of intercepts of the constraint line on the vertical coordinate axis;
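Equations (7)-(10) can be sketched numerically: build F from hypothetical intrinsics and pose, verify the epipolar constraint (8) for a synthetic space point, and derive the slope and intercept band of the restricted search line:

```python
import numpy as np

# Hypothetical stereo rig (all values illustrative only).
Kl = Kr = np.array([[1000.0,    0.0, 320.0],
                    [   0.0, 1000.0, 240.0],
                    [   0.0,    0.0,   1.0]])
R_lr = np.eye(3)
tau = np.array([100.0, 5.0, 2.0])        # translation T_{l-r}

def skew(t):
    # [tau]_x: antisymmetric matrix such that skew(t) @ v == np.cross(t, v)
    return np.array([[ 0.0, -t[2],  t[1]],
                     [ t[2],  0.0, -t[0]],
                     [-t[1],  t[0],  0.0]])

# Position relationship matrix of equation (7).
F = np.linalg.inv(Kr).T @ skew(tau) @ R_lr @ np.linalg.inv(Kl)

# Project one space point into both cameras (homogeneous pixel coordinates).
P = np.array([50.0, -30.0, 800.0])
xl = Kl @ P;                 xl = xl / xl[2]
xr = Kr @ (R_lr @ P + tau);  xr = xr / xr[2]

residual = float(xr @ F @ xl)            # epipolar constraint (8): ~ 0

# Epipolar line l = F @ xl = (a, b, c): a*x + b*y + c = 0, i.e. y = phi0*x + beta0.
a, b, c = F @ xl
phi0, beta0 = -a / b, -c / b
# Restricting the intercept to a +-5 px band gives the search region of (10).
beta_min, beta_max = beta0 - 5.0, beta0 + 5.0
```

The matched right-image point lies exactly on the epipolar line, so the search for it can be confined to the narrow intercept band instead of the whole image.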
the stereo matching correlation function based on the limited geometric association constraint is:

(equation (11), given as an image in the original)

in the formula, φ is the slope of the linear equation under the limited geometric association constraint, and Win represents the size of the matching window; IniP and DefP represent the images to be matched, and IniP_m and DefP_m represent the sums of the pixel gray levels in the matching window of the corresponding images;
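Since the patent's exact correlation function (11) is given only as an image, the standard zero-normalized cross-correlation (ZNCC) criterion is used below as an assumed stand-in; the search runs along a horizontal constrained line (φ0 = 0) over a synthetic speckle pair shifted by a known amount:

```python
import numpy as np

# ZNCC over a square matching window: insensitive to offset and scale of
# the gray levels, as the zero-mean terms in criterion (11) intend.
def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
IniP = rng.random((60, 60))                     # reference speckle image
DefP = np.roll(IniP, 7, axis=1)                 # target: shifted 7 px in x

win = 10                                        # matching window size
y0, x0 = 25, 20                                 # feature point in IniP
ref = IniP[y0:y0 + win, x0:x0 + win]

# Restricted search: only displacements along the constrained line.
scores = {dx: zncc(ref, DefP[y0:y0 + win, x0 + dx:x0 + dx + win])
          for dx in range(0, 15)}
best = max(scores, key=scores.get)              # integer-pixel match
```

The integer-pixel result seeds the sub-pixel refinement that the following equations carry out.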
the sub-pixel matching search is then solved: let the parameter matrix P = (u, ux, uy, v, vx, vy)^T, wherein u and v denote the displacements in the horizontal and vertical directions and ux, uy, vx, vy their partial derivatives along the two directions; all parameters in the P matrix are variables to be solved during the matching search; taking the first derivative of equation (11):

(equation (12), given as an image in the original)

wherein x = (xi, yi, 1)^T; ΔP denotes the increment matrix of the parameter matrix P, ΔIniP denotes the increment of P in the initial image, and ΔDefP denotes the increment of P in the deformed image; δ = (xi − x′i, yi − φx′i − β, 1)^T; S(δ; P) is the resultant displacement;
then S(δ; ΔP), expressed in terms of the increment ΔP = (Δu, Δux, Δuy, Δv, Δvx, Δvy)^T of P, is:

(equation (13), given as an image in the original)

expanding IniP(x + S(δ; ΔP)) in a Taylor series at ΔP = 0 and keeping the first order gives:

(equation (14), given as an image in the original)

wherein the operator given as an image represents the differential of the increment of the parameter matrix P in the initial image;
combining equations (12)-(14) and imposing the stationarity condition given as an image in the original, the ideal iteration condition is solved:

(equation (15), given as an image in the original)

wherein H is the Hessian matrix of S(δ; ΔP) with respect to ΔP, and M is the number of points participating in the computation within the matching window;

(equation (16), given as an image in the original)
the resultant displacement iterative update matrix is:

(equation (17), given as an image in the original)

and the convergence condition of the iteration is:

(equation (18), given as an image in the original)

in the formula, p_e is the iterative parameter variation and p_m is the convergence threshold.
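The update (17) with convergence test (18) follows the inverse-compositional Gauss-Newton (IC-GN) scheme, whose Hessian is precomputed once on the reference subset. The translation-only sketch below drops the gradient parameters ux, uy, vx, vy of the full parameter matrix P, and uses an analytic pattern so the deformed subset can be sampled exactly instead of interpolated; it is an illustration of the scheme, not the patent's full algorithm:

```python
import numpy as np

def img(x, y):
    # Smooth analytic "speckle" pattern; the low spatial frequencies keep
    # the initial guess p = 0 inside the convergence basin.
    return np.sin(0.3 * x) * np.cos(0.2 * y) + 0.3 * np.cos(0.25 * x + 0.15 * y)

def grad(x, y, h=1e-5):
    gx = (img(x + h, y) - img(x - h, y)) / (2 * h)
    gy = (img(x, y + h) - img(x, y - h)) / (2 * h)
    return gx, gy

s = np.array([1.3, -0.7])                  # true displacement to recover
xs, ys = np.meshgrid(np.arange(10.0, 25.0), np.arange(10.0, 25.0))

f = img(xs, ys).ravel()                    # reference subset
gx, gy = grad(xs, ys)
J = np.column_stack([gx.ravel(), gy.ravel()])   # steepest-descent images
H = J.T @ J                                # Hessian, computed once

p = np.zeros(2)                            # (u, v) initial guess
p_m = 1e-10                                # convergence threshold, as in (18)
for _ in range(100):
    # Deformed image D(x) = img(x - s); sample the subset warped by p.
    g = img(xs + p[0] - s[0], ys + p[1] - s[1]).ravel()
    dp = np.linalg.solve(H, J.T @ (g - f)) # Gauss-Newton increment
    p = p - dp                             # inverse-compositional update
    if np.linalg.norm(dp) < p_m:           # convergence condition
        break
```

Because the Hessian never changes across iterations, each update costs only one image sampling and one small solve, which is the efficiency argument made for IC-GN above.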
CN201911166459.8A 2019-11-25 2019-11-25 Stereo matching morphology measurement method based on limited geometric association constraint Expired - Fee Related CN112950527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911166459.8A CN112950527B (en) 2019-11-25 2019-11-25 Stereo matching morphology measurement method based on limited geometric association constraint


Publications (2)

Publication Number Publication Date
CN112950527A CN112950527A (en) 2021-06-11
CN112950527B true CN112950527B (en) 2022-06-14

Family

ID=76224818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911166459.8A Expired - Fee Related CN112950527B (en) 2019-11-25 2019-11-25 Stereo matching morphology measurement method based on limited geometric association constraint

Country Status (1)

Country Link
CN (1) CN112950527B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409404B (en) * 2021-06-29 2023-06-16 常熟理工学院 CUDA architecture parallel optimization three-dimensional deformation measurement method based on novel correlation function constraint
CN117975067B (en) * 2024-03-29 2024-06-25 长春师范大学 High-precision image stereo matching method based on image space information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101694375A (en) * 2009-10-23 2010-04-14 北京航空航天大学 Stereoscopic vision detecting method for measuring three-dimensional morphology on strong reflection surface
CN109360246A (en) * 2018-11-02 2019-02-19 哈尔滨工业大学 Stereo vision three-dimensional displacement measurement method based on synchronous sub-district search
CN109697753A (en) * 2018-12-10 2019-04-30 智灵飞(北京)科技有限公司 A kind of no-manned plane three-dimensional method for reconstructing, unmanned plane based on RGB-D SLAM
US10388029B1 (en) * 2017-09-07 2019-08-20 Northrop Grumman Systems Corporation Multi-sensor pose-estimate system
CN110189375A (en) * 2019-06-26 2019-08-30 中国科学院光电技术研究所 A kind of images steganalysis method based on monocular vision measurement


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Dynamic Multi-Information Measurement Methods for Aircraft Physical Simulation Based on Stereo Vision; Zhang Guiyang; China Doctoral Dissertations Full-text Database, Engineering Science & Technology II; 2022-02-15; p. C031-2 *

Also Published As

Publication number Publication date
CN112950527A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN111414798B (en) Head posture detection method and system based on RGB-D image
CN106408609B (en) A kind of parallel institution end movement position and posture detection method based on binocular vision
CN108648240B (en) Non-overlapping view field camera attitude calibration method based on point cloud feature map registration
CN107063228B (en) Target attitude calculation method based on binocular vision
CN104732518B (en) A kind of PTAM improved methods based on intelligent robot terrain surface specifications
CN106780592A (en) Kinect depth reconstruction algorithms based on camera motion and image light and shade
CN107240129A (en) Object and indoor small scene based on RGB D camera datas recover and modeling method
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN107590827A (en) A kind of indoor mobile robot vision SLAM methods based on Kinect
CN110021039A (en) The multi-angle of view material object surface point cloud data initial registration method of sequence image constraint
CN108921895A (en) A kind of sensor relative pose estimation method
CN112950527B (en) Stereo matching morphology measurement method based on limited geometric association constraint
CN109579695A (en) A kind of parts measurement method based on isomery stereoscopic vision
CN116222543B (en) Multi-sensor fusion map construction method and system for robot environment perception
CN112767546B (en) Binocular image-based visual map generation method for mobile robot
Eichhardt et al. Affine correspondences between central cameras for rapid relative pose estimation
CN113538569A (en) Weak texture object pose estimation method and system
CN111951339A (en) Image processing method for performing parallax calculation by using heterogeneous binocular cameras
CN113642397B (en) Object length measurement method based on mobile phone video
Yao et al. Robust Harris corner matching based on the quasi-homography transform and self-adaptive window for wide-baseline stereo images
CN111415378B (en) Image registration method for automobile glass detection and automobile glass detection method
CN106228593B (en) A kind of image dense Stereo Matching method
CN114399547B (en) Monocular SLAM robust initialization method based on multiframe
CN116883590A (en) Three-dimensional face point cloud optimization method, medium and system
CN114877826B (en) Binocular stereo matching three-dimensional measurement method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220614