CN115272080A - Global deformation measurement method and system based on image stitching - Google Patents


Info

Publication number
CN115272080A
CN115272080A (application CN202210933752.8A)
Authority
CN
China
Prior art keywords
point, camera, coordinate system, points, triangle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210933752.8A
Other languages
Chinese (zh)
Inventor
何霁
任恩圳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202210933752.8A
Publication of CN115272080A
Legal status: Pending

Classifications

    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06T 2207/10028: Range image; depth image; 3D point clouds


Abstract

The invention provides a global deformation measurement method and system based on image stitching, comprising the following steps. Step S1: arrange cameras around the object to be measured, and calculate and correct the internal and external parameters of each camera, including distortion parameters. Step S2: calibrate the DLT parameters of each camera so that the data of all cameras are automatically merged into the same coordinate system. Step S3: match the acquired images, and generate a triangular mesh from the points in the ROI. Step S4: for each camera pair, perform three-dimensional reconstruction in the common coordinate system, and stitch the point clouds obtained by different camera pairs with a heuristic algorithm to generate a global point cloud. Step S5: derive fields of interest from the three-dimensional coordinates of the triangular-mesh vertices in the global point cloud at different times, and display them visually. By stitching the point clouds obtained from multiple cameras, the invention obtains a global point cloud, post-processes it accordingly, and realizes strain measurement over a large field of view.

Description

Global deformation measurement method and system based on image stitching
Technical Field
The invention relates to the technical field of computer vision and metrology, in particular to a global deformation measuring method and system based on image stitching.
Background
Digital Image Correlation (DIC) is an image-based, non-contact optical measurement method for measuring the continuously changing coordinates of an object's surface; the measured coordinate field is then used to derive fields of interest such as displacement, strain and velocity. As long as the object surface carries a suitable speckle pattern, the shape, motion and deformation of almost any object can be measured, even under extreme experimental conditions. DIC is essentially an image processing technology that, in addition to its non-contact, full-field measurement capability, has several attractive features, such as a simple, inexpensive experimental setup, easy implementation, and robustness. In principle, whatever imaging modality is used, measurements can be made with DIC, provided the images have significant intensity variations and a unique correspondence to points on the object surface. Indeed, DIC has been applied to common metals, polymeric materials, composites, biological tissues and earth-surface deformation, at scales ranging from a few microns (e.g. fibers) to tens of kilometers (e.g. ground deformation).
Local 2D-DIC defines a subset around an interest point in the reference image, from which the displacement of the interest point is calculated. In 3D-DIC, a binocular camera pair photographs the object from different angles and records the image sequences acquired by the left and right cameras. The ROI regions in the two stereo views of the undeformed object are first correlated with 2D-DIC (spatial correlation), the points are tracked through the stereo image sequence as the object deforms (temporal correlation), and the correlated set of image points is then used to reconstruct and track, over time, the points in the ROI region of the object.
In recent decades, much work has been done to improve the computational performance of DIC algorithms, to establish guidelines for applying DIC, to evaluate measurement errors, and to extend the original 2D-DIC to 3D-DIC. However, although 3D-DIC can measure the three-dimensional coordinates of an object's surface, it is limited by its field of view and can cover only part of that surface, so global measurement is difficult for large objects such as ships and rockets.
Patent document CN111043978B (application number: CN201911204387.1) discloses a multi-view DIC deformation field measuring apparatus and method. The apparatus includes an environment simulator, a measuring bracket, a piece to be tested, two CCD detectors, a field-of-view adjusting structure, a measuring baseline rod, a standard comparison test piece, a bottom plate, an integrated circular protective cover, an image acquisition and processing computer, and a protective-cover measurement and control device. The integrated circular protective cover is connected to the measuring bracket, which is mounted on the bottom plate; the two ends of the field-of-view adjusting structure are connected to the integrated circular protective cover and the measuring baseline rod, respectively; the two CCD detectors are mounted at the two ends of the measuring baseline rod; the test piece and the standard comparison test piece are mounted on the bottom plate; and the CCD detectors and the integrated circular protective cover are connected to the image acquisition and processing computer and the measurement and control device through wires and transmission pipelines. That patent acquires images of the measured object's surface sequentially by rotating the binocular camera, and therefore cannot measure an object while it is deforming. In the present invention, multiple cameras are used simultaneously and arranged around the object as required; each camera pair forms a sub-3D-DIC system, all sub-systems acquire images of the object at every deformation stage at the same time, three-dimensional point clouds are computed algorithmically, and all point clouds are stitched into a global point cloud, thereby realizing global measurement over a large field of view.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a global deformation measurement method and system based on image stitching.
The global deformation measurement method based on image stitching provided by the invention comprises the following steps:
step S1: arranging cameras around the object to be measured as required, and calculating and correcting the internal and external parameters of each camera, including distortion parameters, using Zhang's calibration method;
step S2: calibrating the DLT parameters of each camera with the same three-dimensional calibration object, so that the data of all cameras are automatically merged into the same coordinate system;
step S3: matching the images acquired by each camera pair using 2D digital image correlation (2D-DIC), and generating a triangular mesh from the points in the region of interest (ROI);
step S4: for each camera pair, performing three-dimensional reconstruction in the common coordinate system using the calibrated DLT parameters and the 2D-DIC-matched points, and stitching the point clouds obtained by different camera pairs with a heuristic algorithm to generate a global point cloud;
step S5: deriving fields of interest, including the displacement field and strain field, from the three-dimensional coordinates of the triangular-mesh vertices in the global point cloud at different times, and displaying them visually.
Preferably, the step S2 includes:
a point in the world coordinate system $O_W$-$X_WY_WZ_W$ is converted into the camera coordinate system $O_C$-$X_CY_CZ_C$ by a rotation matrix $R$ and a translation matrix $T$:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T$$

a point in the camera coordinate system $O_C$-$X_CY_CZ_C$ is converted to the image coordinate system $o$-$xy$ by the following expression:

$$x = f\frac{X_C}{Z_C}, \qquad y = f\frac{Y_C}{Z_C}$$

and finally converted into the pixel coordinate system:

$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0$$

the three formulas are combined and arranged into the following form:

$$u = \frac{L_1 X_W + L_2 Y_W + L_3 Z_W + L_4}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}, \qquad v = \frac{L_5 X_W + L_6 Y_W + L_7 Z_W + L_8}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}$$

where $L_1$–$L_{11}$ are the 11 DLT parameters; $f$ is the focal length of the camera; $uv$ is the pixel coordinate system, with pixel sizes $d_x$ and $d_y$; and the origin of $o$-$xy$ lies at pixel coordinates $(u_0, v_0)$.
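As an illustrative sketch (not part of the patent), the combined DLT mapping above can be evaluated directly in code; the helper name `dlt_project` and the 0-based indexing `L[0]`..`L[10]` for $L_1$..$L_{11}$ are our own conventions:

```python
def dlt_project(L, point):
    """Map a world point (X, Y, Z) to pixel coordinates (u, v) through the
    11 DLT parameters L1..L11, stored 0-based as L[0]..L[10]."""
    X, Y, Z = point
    denom = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / denom
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / denom
    return u, v
```

With `L = [1,0,0,0, 0,1,0,0, 0,0,0]` the denominator is 1 and the mapping reduces to an orthographic projection, which is a convenient sanity check.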
Preferably, the step S3 includes:
generating a super triangle containing all the points of the point set, then inserting the points of the point set one by one, and finding the triangles whose circumcircle contains the inserted point, specifically as follows: in two dimensions, let the three vertices of the triangle be $A$, $B$ and $C$, ordered counterclockwise; a point $D$ lies within the circumcircle exactly when:

$$\begin{vmatrix} A_x - D_x & A_y - D_y & (A_x^2 - D_x^2) + (A_y^2 - D_y^2) \\ B_x - D_x & B_y - D_y & (B_x^2 - D_x^2) + (B_y^2 - D_y^2) \\ C_x - D_x & C_y - D_y & (C_x^2 - D_x^2) + (C_y^2 - D_y^2) \end{vmatrix} > 0$$

deleting the common edges of the influence triangles (skipping this step if there is only one influence triangle), and connecting the insertion point to all vertices of the influence triangles, thereby completing the insertion of one point of the Delaunay triangulation; after the point is inserted, the next point is inserted by the same procedure until all points in the point set have been inserted.
Preferably, a local optimization process is performed after the point insertion: two triangles with a common edge form a polygon; the maximum empty circle criterion is used to check whether the fourth vertex of the polygon lies within the circumcircle of the triangle, and if so, the diagonal is swapped, completing the local optimization.
Preferably, the step S4 includes:
step S4.1: solving the three-dimensional coordinates by the least-squares method, and computing one point cloud per camera pair;
step S4.2: identifying the overlap region between two point clouds, and deleting meshes in the overlap region from the boundary inward with a heuristic algorithm until no overlap remains;
step S4.3: filling the gap between the two point clouds with new triangular faces using Delaunay triangulation.
The invention provides an image stitching-based global deformation measurement system, which comprises:
a module M1: arranging cameras around the object to be measured as required, and calculating and correcting the internal and external parameters of each camera, including distortion parameters, using Zhang's calibration method;
a module M2: calibrating the DLT parameters of each camera with the same three-dimensional calibration object, so that the data of all cameras are automatically merged into the same coordinate system;
a module M3: matching the images acquired by each camera pair using 2D digital image correlation (2D-DIC), and generating a triangular mesh from the points in the region of interest (ROI);
a module M4: for each camera pair, performing three-dimensional reconstruction in the common coordinate system using the calibrated DLT parameters and the 2D-DIC-matched points, and stitching the point clouds obtained by different camera pairs with a heuristic algorithm to generate a global point cloud;
a module M5: deriving fields of interest, including the displacement field and strain field, from the three-dimensional coordinates of the triangular-mesh vertices in the global point cloud at different times, and displaying them visually.
Preferably, the module M2 comprises:
a point in the world coordinate system $O_W$-$X_WY_WZ_W$ is converted into the camera coordinate system $O_C$-$X_CY_CZ_C$ by a rotation matrix $R$ and a translation matrix $T$:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T$$

a point in the camera coordinate system $O_C$-$X_CY_CZ_C$ is converted to the image coordinate system $o$-$xy$ by the following expression:

$$x = f\frac{X_C}{Z_C}, \qquad y = f\frac{Y_C}{Z_C}$$

and finally converted into the pixel coordinate system:

$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0$$

the three formulas are combined and arranged into the following form:

$$u = \frac{L_1 X_W + L_2 Y_W + L_3 Z_W + L_4}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}, \qquad v = \frac{L_5 X_W + L_6 Y_W + L_7 Z_W + L_8}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}$$

where $L_1$–$L_{11}$ are the 11 DLT parameters; $f$ is the focal length of the camera; $uv$ is the pixel coordinate system, with pixel sizes $d_x$ and $d_y$; and the origin of $o$-$xy$ lies at pixel coordinates $(u_0, v_0)$.
Preferably, the module M3 comprises:
generating a super triangle containing all the points of the point set, then inserting the points of the point set one by one, and finding the triangles whose circumcircle contains the inserted point, specifically as follows: in two dimensions, let the three vertices of the triangle be $A$, $B$ and $C$, ordered counterclockwise; a point $D$ lies within the circumcircle exactly when:

$$\begin{vmatrix} A_x - D_x & A_y - D_y & (A_x^2 - D_x^2) + (A_y^2 - D_y^2) \\ B_x - D_x & B_y - D_y & (B_x^2 - D_x^2) + (B_y^2 - D_y^2) \\ C_x - D_x & C_y - D_y & (C_x^2 - D_x^2) + (C_y^2 - D_y^2) \end{vmatrix} > 0$$

deleting the common edges of the influence triangles (skipping this step if there is only one influence triangle), and connecting the insertion point to all vertices of the influence triangles, thereby completing the insertion of one point of the Delaunay triangulation; after the point is inserted, the next point is inserted by the same procedure until all points in the point set have been inserted.
Preferably, a local optimization process is performed after the point insertion: two triangles with a common edge form a polygon; the maximum empty circle criterion is used to check whether the fourth vertex of the polygon lies within the circumcircle of the triangle, and if so, the diagonal is swapped, completing the local optimization.
Preferably, the module M4 comprises:
module M4.1: solving the three-dimensional coordinates by the least-squares method, and computing one point cloud per camera pair;
module M4.2: identifying the overlap region between two point clouds, and deleting meshes in the overlap region from the boundary inward with a heuristic algorithm until no overlap remains;
module M4.3: filling the gap between the two point clouds with new triangular faces using Delaunay triangulation.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention uses a three-dimensional calibration object captured by all cameras of the multi-view system and calibrated by the DLT method; each camera can be calibrated with respect to a common reference coordinate system tied to the calibration object, so data are automatically merged into the same coordinate system even when the fields of view of different camera pairs do not overlap;
(2) For the overlap regions between point clouds, the invention uses a heuristic algorithm that judges mesh quality from the correlation coefficient and from the distance and orientation difference of the meshes in the overlap region; low-quality meshes are deleted first and the point clouds are then stitched, improving measurement accuracy;
(3) When filling the gap between two stitched point clouds, Delaunay triangulation is applied between the vertices on the mesh boundaries, which maximizes the minimum angle of each triangle and ensures a high-quality triangular mesh;
(4) By stitching the point clouds obtained from multiple cameras, the invention obtains a global point cloud, post-processes it accordingly, and realizes strain measurement over a large field of view.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a multi-view apparatus of the present invention;
FIG. 3 is a three-dimensional calibration object diagram;
FIG. 4 is a schematic diagram of matching and meshing between camera pair images;
FIG. 5 is a flow chart of Delaunay triangulation;
FIG. 6 is a stitching map between two point clouds;
FIG. 7 is a correlation coefficient plot resulting from post-processing;
fig. 8 is a strain diagram obtained by the post-treatment.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications will be apparent to those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Example:
as shown in fig. 1, the global deformation measurement technique based on image stitching provided by the present invention includes the following steps:
Step S1: arrange an appropriate number of cameras (two or more) around the object to be measured as required, and calculate and correct the internal and external parameters of each camera, including distortion parameters, using Zhang's calibration method. As shown in Fig. 2, a corresponding number of cameras (two or more) is set up according to the size of the area to be measured. Each camera then takes several pictures of a checkerboard in different poses, the internal and external parameters of the camera are calculated by Zhang's calibration method, and the images are corrected according to the obtained parameters.
Step S2: calibrate the 11 DLT parameters of each camera with the three-dimensional calibration object. According to the camera imaging model, a point in the world coordinate system $O_W$-$X_WY_WZ_W$ can be converted into the camera coordinate system $O_C$-$X_CY_CZ_C$ by a rotation matrix and a translation matrix:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T$$

A point in the camera coordinate system $O_C$-$X_CY_CZ_C$ can be converted to the image coordinate system $o$-$xy$ by the following expression:

$$x = f\frac{X_C}{Z_C}, \qquad y = f\frac{Y_C}{Z_C}$$

and finally into the pixel coordinate system:

$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0$$

The three formulas can be combined into the following form:

$$u = \frac{L_1 X_W + L_2 Y_W + L_3 Z_W + L_4}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}, \qquad v = \frac{L_5 X_W + L_6 Y_W + L_7 Z_W + L_8}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}$$

where $L_1$–$L_{11}$ are the 11 DLT parameters, $f$ is the focal length of the camera, $d_x$ and $d_y$ are the pixel sizes, and $(u_0, v_0)$ is the origin of $o$-$xy$ in pixel coordinates.
A point in the world coordinate system can thus be mapped into the image through the 11 DLT parameters, and because every camera uses the same calibration object, the data are automatically merged into the same coordinate system. The present invention uses the cylindrical calibration object shown in Fig. 3, with the black rectangular areas serving as control points. Since one control point provides only two equations, a minimum of 6 control points is required to calibrate the 11 parameters. For better robustness, more points are usually selected to calibrate the DLT parameters, as shown in Fig. 3.
Step S3: match each camera pair using 2D-DIC and generate a triangular mesh from the points within the ROI region. Adjacent cameras form camera pairs, and 2D-DIC is used to match the left and right images of each camera pair in preparation for the subsequent three-dimensional reconstruction. After image matching is completed, triangular meshes are generated from the points in the ROI region using Delaunay triangulation; the results are shown in Figs. 4a and 4b.
A Delaunay triangulation flow chart is shown in Fig. 5. First, a super triangle containing all the points is generated from the point set; this triangle is not unique, as long as it contains every point. The points of the point set are then inserted in turn, and the triangles whose circumcircle contains the inserted point (called influence triangles) are found, as follows: assume that in two dimensions the three vertices of a triangle are $A$, $B$ and $C$, ordered counterclockwise; a point $D$ lies within the circumcircle exactly when:

$$\begin{vmatrix} A_x - D_x & A_y - D_y & (A_x^2 - D_x^2) + (A_y^2 - D_y^2) \\ B_x - D_x & B_y - D_y & (B_x^2 - D_x^2) + (B_y^2 - D_y^2) \\ C_x - D_x & C_y - D_y & (C_x^2 - D_x^2) + (C_y^2 - D_y^2) \end{vmatrix} > 0$$
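The determinant test above translates directly into code (an illustrative sketch with our own function name; A, B and C must be ordered counterclockwise):

```python
import numpy as np

def in_circumcircle(A, B, C, D):
    """True if D lies strictly inside the circumcircle of the
    counterclockwise triangle ABC (the Delaunay in-circle predicate)."""
    m = np.array([
        [A[0] - D[0], A[1] - D[1], (A[0]**2 - D[0]**2) + (A[1]**2 - D[1]**2)],
        [B[0] - D[0], B[1] - D[1], (B[0]**2 - D[0]**2) + (B[1]**2 - D[1]**2)],
        [C[0] - D[0], C[1] - D[1], (C[0]**2 - D[0]**2) + (C[1]**2 - D[1]**2)],
    ])
    return float(np.linalg.det(m)) > 0.0
```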
The insertion of a point in the Delaunay triangulation is done by removing the common edges of the influencing triangles (skipping this step if there is only one influencing triangle) and joining the insertion point to all vertices of the influencing triangle.
Then, local optimization processing is carried out: (1) Two triangles having a common side form a polygon. (2) A check is made to see if the fourth vertex is within the circumscribed circle of the triangle using the maximum empty circle criterion (the circle defined by the three points does not contain the fourth point). (3) If so, the diagonal line is exchanged, and the local optimization process is completed.
After the point insertion is completed, inserting the next point according to the flow until all points in the point set are inserted.
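The full insertion flow of Fig. 5 can be sketched as a compact Bowyer-Watson routine. This is our own illustrative implementation, not the patent's: it uses circumcenters rather than the determinant predicate so that triangle orientation does not matter.

```python
def delaunay(points):
    """Incremental Delaunay triangulation (Bowyer-Watson): build a super
    triangle, insert points one at a time, remove the "influence" triangles
    whose circumcircle contains the new point, and re-triangulate the
    cavity. Returns index triples into `points`."""
    verts = [tuple(map(float, p)) for p in points]
    n = len(verts)
    xs = [p[0] for p in verts]
    ys = [p[1] for p in verts]
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
    d = max(max(xs) - min(xs), max(ys) - min(ys)) * 10 + 1
    verts += [(cx - 2 * d, cy - d), (cx + 2 * d, cy - d), (cx, cy + 2 * d)]
    tris = [(n, n + 1, n + 2)]

    def circumcircle(t):
        (ax, ay), (bx, by), (qx, qy) = (verts[i] for i in t)
        dd = 2 * (ax * (by - qy) + bx * (qy - ay) + qx * (ay - by))
        ux = ((ax*ax + ay*ay) * (by - qy) + (bx*bx + by*by) * (qy - ay)
              + (qx*qx + qy*qy) * (ay - by)) / dd
        uy = ((ax*ax + ay*ay) * (qx - bx) + (bx*bx + by*by) * (ax - qx)
              + (qx*qx + qy*qy) * (bx - ax)) / dd
        return ux, uy, (ax - ux) ** 2 + (ay - uy) ** 2

    for i in range(n):
        px, py = verts[i]
        bad = []
        for t in tris:
            ux, uy, r2 = circumcircle(t)
            if (px - ux) ** 2 + (py - uy) ** 2 < r2:
                bad.append(t)
        # Cavity boundary: edges belonging to exactly one influence triangle.
        edges = {}
        for t in bad:
            for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                k = tuple(sorted(e))
                edges[k] = edges.get(k, 0) + 1
        tris = [t for t in tris if t not in bad]
        tris += [(a, b, i) for (a, b), c in edges.items() if c == 1]
    # Drop every triangle that still uses a super-triangle vertex.
    return [t for t in tris if max(t) < n]
```

For the four corners of a square this yields the expected two triangles.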
Step S4: perform three-dimensional reconstruction using the calibrated DLT parameters and the 2D-DIC-matched points, and stitch the point clouds obtained by the different camera pairs with a heuristic algorithm to generate a global point cloud, as shown in Fig. 6 (the image is magnified);
wherein, step S4 specifically includes the following steps:
Step S4.1: solve the three-dimensional coordinates by the least-squares method, computing one point cloud per camera pair. By the triangulation principle, the coordinates of a point in the world coordinate system can be calculated from its positions in the left and right images and the calibrated DLT parameters. Since each image observation of the point provides two equations while the point has only three unknown coordinates, the resulting overdetermined system is solved by least squares.
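The least-squares reconstruction of one matched point from two (or more) DLT-calibrated cameras might look like the following sketch (function name and 0-based indexing of $L_1$..$L_{11}$ are our own assumptions):

```python
import numpy as np

def triangulate_dlt(cams, uvs):
    """Least-squares 3-D reconstruction of one point from >= 2 cameras,
    each described by its 11 DLT parameters (stored as L[0]..L[10]).
    Each observation (u, v) contributes two equations linear in (X, Y, Z)."""
    A, b = [], []
    for L, (u, v) in zip(cams, uvs):
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.extend([u - L[3], v - L[7]])
    X, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return X
```

With two cameras this is a 4-equation, 3-unknown system, matching the count given in the text.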
Step S4.2: identify the overlap region between two point clouds. First, for every point in the first object-surface point cloud, estimate the normal vector at that point (by fitting a plane through the point and several nearby points; the plane normal is taken as the point's normal vector). Then compute the distance from these points to the second point cloud along the normal direction. This distance is finite only inside the overlap region: in non-overlapping regions the normal ray has no intersection with the mesh of the second point cloud, so the computed distance is infinite. The overlap region on the first point cloud is thus identified from the computed distances, and the overlap region of the second point cloud is identified in the same way.
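The per-point normal from a local plane fit can be sketched with an SVD (an illustrative helper of our own, not the patent's code):

```python
import numpy as np

def point_normal(neighbors):
    """Unit normal at a point, from a plane fit through the point and its
    nearby points: after centering, the right singular vector with the
    smallest singular value is the fitted plane's normal."""
    P = np.asarray(neighbors, float)
    P = P - P.mean(axis=0)          # center the neighborhood
    _, _, Vt = np.linalg.svd(P)
    return Vt[-1]                   # unit vector, sign is arbitrary
```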
In step S3, triangular meshes have already been generated from the point cloud; the triangles in the overlap region of the two point clouds are traversed to find the boundary, using the fact that a boundary edge is not shared by two triangles. For points on the boundary of the overlap region, the correlation coefficient obtained from 2D-DIC matching is used (the larger the coefficient, the worse the match), and a maximum correlation coefficient is set to eliminate poorly matched points. Then the distance of each boundary point is calculated with the algorithm above, the point with the largest distance is found, the triangular meshes with that point as a vertex are deleted, and this step is repeated until no overlap region remains.
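The boundary-finding rule (an edge on the boundary is not shared by two triangles) reduces to counting edge occurrences; a short illustrative sketch:

```python
from collections import Counter

def boundary_edges(triangles):
    """Edges of a triangle mesh that belong to exactly one triangle,
    i.e. the mesh boundary used when trimming the overlap region."""
    count = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1
    return [e for e, n in count.items() if n == 1]
```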
However, some meshes that no longer overlap may still be too close together, which would hinder the subsequent point cloud stitching and produce low-quality triangular meshes with small angles. For each point on the boundary of point cloud 1, the closest corresponding point on the boundary of point cloud 2 is found and the distance calculated. Boundary points whose distance is below a threshold of 0.5 times the average length of all mesh edges (the value can be chosen as required) are eliminated, and the triangular meshes with those points as vertices are deleted. Finally, to avoid disconnected regions, any triangular mesh whose three edges are all boundary edges is deleted.
Step S4.3: for each point on the boundary of point cloud 1, find the closest corresponding point on the boundary of point cloud 2 and calculate the distance. Boundary points whose distance exceeds a threshold of 2.5 times the average length of all mesh edges (the value can be chosen as required) are eliminated, and the same operation is applied to the boundary points of point cloud 2. Delaunay triangulation is then applied to the screened boundary points to fill the gap between the two point clouds with new triangular faces. Applying Delaunay triangulation between the vertices on the mesh boundaries maximizes the minimum angle of each triangle and ensures a high-quality triangular mesh. As shown in Fig. 6, two different point clouds are present, and the mesh between them is the gap-filling mesh.
Step S5: derive fields of interest such as the displacement field and strain field from the three-dimensional coordinates of the triangular-mesh vertices in the global point cloud at different times, and finally display them visually. Assuming a uniform deformation field inside each element, independent of adjacent data points and numerical derivatives, the deformation gradient tensor F is calculated using a variational formulation of the triangular Cosserat point element method, and fields of interest such as the displacement field and strain field are derived.
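As a simplified stand-in for the Cosserat point element formulation named above (not the patent's actual method), the uniform deformation gradient and Green-Lagrange strain of a single triangular element can be sketched from its reference and deformed in-plane vertex coordinates:

```python
import numpy as np

def triangle_strain(X, x):
    """Green-Lagrange strain of one triangular element with uniform
    deformation: X and x are 3x2 arrays of reference and deformed
    in-plane vertex coordinates. A simplified illustration, not the
    Cosserat point element formulation itself."""
    dX = np.column_stack([X[1] - X[0], X[2] - X[0]])  # reference edge vectors
    dx = np.column_stack([x[1] - x[0], x[2] - x[0]])  # deformed edge vectors
    F = dx @ np.linalg.inv(dX)                        # deformation gradient
    E = 0.5 * (F.T @ F - np.eye(2))                   # Green-Lagrange strain
    return E
```

For a 10% uniaxial stretch this gives $E_{xx} = 0.5(1.1^2 - 1) = 0.105$, the expected Green-Lagrange value.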
Verification is made below by taking as an example a cylindrical object having a suitable artificial speckle on its surface.
The object is photographed with three cameras, forming two camera pairs; the two point clouds are stitched, and a correlation coefficient map (Fig. 7) and a strain map (Fig. 8) of the object's global point cloud are derived. The correlation coefficients show that most areas of the object's surface are well matched and measured with high accuracy, the maximum correlation coefficient not exceeding 0.06. This example demonstrates the effectiveness of the present method.
The invention provides an image stitching-based global deformation measurement system, comprising: a module M1: arranging cameras around the object to be measured as required, and calculating and correcting the internal and external parameters of each camera, including distortion parameters, using Zhang's calibration method; a module M2: calibrating the 11 DLT parameters of each camera with the same three-dimensional calibration object, so that the data of all cameras are automatically merged into the same coordinate system; a module M3: matching the images acquired by each camera pair using 2D digital image correlation (2D-DIC), and generating a triangular mesh from the points in the region of interest (ROI); a module M4: for each camera pair, performing three-dimensional reconstruction in the common coordinate system using the calibrated DLT parameters and the 2D-DIC-matched points, and stitching the point clouds obtained by different camera pairs with a heuristic algorithm to generate a global point cloud; a module M5: deriving fields of interest, including the displacement field and strain field, from the three-dimensional coordinates of the triangular-mesh vertices in the global point cloud at different times, and finally displaying them visually.
The module M2 comprises: a point in the world coordinate system O_W-X_WY_WZ_W is converted into the camera coordinate system O_C-X_CY_CZ_C by a rotation matrix R and a translation matrix T:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T$$

a point in the camera coordinate system O_C-X_CY_CZ_C is converted into the image coordinate system o-xy by perspective projection:

$$x = f\,\frac{X_C}{Z_C}, \qquad y = f\,\frac{Y_C}{Z_C}$$

and finally converted into the pixel coordinate system:

$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0$$

combining and rearranging the three formulas gives:

$$u = \frac{L_1 X_W + L_2 Y_W + L_3 Z_W + L_4}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}, \qquad v = \frac{L_5 X_W + L_6 Y_W + L_7 Z_W + L_8}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}$$

where L_1 to L_11 are the 11 DLT parameters; f is the focal length of the camera; uv is the pixel coordinate system; d_x and d_y are the physical sizes of a pixel along the two image axes; and the origin of the image coordinate system xy lies at pixel coordinates (u_0, v_0).
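As an illustration of how the combined DLT relation is used (not part of the patent; the function names are ours), the sketch below projects a 3D point through the 11 DLT parameters and, conversely, recovers the parameters from known 3D-to-2D correspondences on a calibration object by linear least squares — each correspondence contributes two equations linear in L_1..L_11:

```python
import numpy as np

def dlt_project(L, P):
    """Project a 3D point P = (X, Y, Z) to pixel coordinates (u, v)
    with the 11 DLT parameters L[0..10] (L1..L11 in the text)."""
    X, Y, Z = P
    d = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / d
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / d
    return u, v

def dlt_calibrate(pts3d, pts2d):
    """Solve the 11 DLT parameters from >= 6 non-coplanar 3D->2D point
    pairs (e.g. markers on the three-dimensional calibration object) by
    linear least squares: multiplying out the projection relation makes
    each pair contribute two equations linear in L1..L11."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]); b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return L
```

With noise-free synthetic correspondences the recovered parameters match the generating ones to numerical precision; with real image data the least-squares residual absorbs the measurement noise.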
The module M3 comprises: generating a super-triangle that contains all points of the point set, inserting the points of the set one by one, and finding the triangles whose circumscribed circles contain the inserted point (the influencing triangles). Specifically, in two dimensions, let the three vertices of a triangle be A, B and C; when the three points are ordered counter-clockwise, a point D lies inside the circumscribed circle if and only if the following holds:

$$\begin{vmatrix} A_x - D_x & A_y - D_y & (A_x - D_x)^2 + (A_y - D_y)^2 \\ B_x - D_x & B_y - D_y & (B_x - D_x)^2 + (B_y - D_y)^2 \\ C_x - D_x & C_y - D_y & (C_x - D_x)^2 + (C_y - D_y)^2 \end{vmatrix} > 0$$

The common edges of the influencing triangles are then deleted (this step is skipped if there is only one influencing triangle), and the inserted point is connected to all vertices of the influencing triangles, which completes the insertion of one point into the Delaunay triangulation. After a point has been inserted, the next point is inserted by the same procedure until every point of the point set has been inserted.
Local optimization is performed after the insertion of the points is completed: two triangles sharing a common edge form a quadrilateral, the maximum empty circle criterion is used to check whether the fourth vertex of the quadrilateral lies inside the circumscribed circle of the triangle, and if so, the diagonal is swapped. This completes the local optimization procedure.
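The counter-clockwise in-circle criterion above can be coded directly as the sign of a 3x3 determinant. A minimal numpy sketch (illustrative, not part of the patent; the function name is ours):

```python
import numpy as np

def in_circumcircle(A, B, C, D):
    """True when D lies strictly inside the circumscribed circle of
    triangle ABC, with A, B, C ordered counter-clockwise — the test used
    to locate the 'influencing' triangles during point insertion."""
    M = np.array([
        [A[0] - D[0], A[1] - D[1], (A[0] - D[0])**2 + (A[1] - D[1])**2],
        [B[0] - D[0], B[1] - D[1], (B[0] - D[0])**2 + (B[1] - D[1])**2],
        [C[0] - D[0], C[1] - D[1], (C[0] - D[0])**2 + (C[1] - D[1])**2],
    ], dtype=float)
    return np.linalg.det(M) > 0.0
```

A production triangulator would use an exact or adaptive-precision predicate here, since the sign of this determinant is sensitive to floating-point error for nearly cocircular points.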
The module M4 comprises: module M4.1: solving the three-dimensional coordinates by the least squares method and computing a point cloud for each camera pair; module M4.2: identifying the overlapping region between two point clouds and deleting the meshes of the overlapping region from the boundary inward with a heuristic algorithm until no overlap remains; module M4.3: filling the gap between the two point clouds with new triangular faces generated by Delaunay triangulation.
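A sketch of the least-squares reconstruction in module M4.1, under the assumption that it follows standard DLT triangulation (not part of the patent; names are ours): rearranging the projection relation, each camera of a pair gives two equations linear in (X, Y, Z), and the stacked system is solved by least squares, so each camera pair reconstructs directly in the shared calibration-object coordinate system:

```python
import numpy as np

def triangulate_dlt(Ls, uvs):
    """Least-squares 3D reconstruction of one matched point.

    Ls  : list of 11-parameter DLT vectors, one per camera of the pair.
    uvs : the matching pixel coordinates (u, v), one per camera.

    Rearranging u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1)
    (and likewise for v) gives two equations linear in (X, Y, Z) per
    camera; the over-determined 4x3 system is solved by least squares.
    """
    A, b = [], []
    for L, (u, v) in zip(Ls, uvs):
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        b.append(u - L[3])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.append(v - L[7])
    P, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return P
```

Because all cameras were calibrated against the same calibration object, the reconstructed points from different pairs land in one common frame, which is what makes the subsequent point-cloud stitching a mesh-trimming rather than a registration problem.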
Those skilled in the art will appreciate that, in addition to implementing the system, apparatus and their modules provided by the present invention purely as computer-readable program code, the same functions can be realized entirely by logically programming the method steps, so that the system, apparatus and their modules take the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, apparatus and their modules provided by the present invention can be regarded as a hardware component, and the modules they include for implementing various programs can be regarded as structures within that hardware component; modules for performing various functions may also be regarded both as software programs for performing the methods and as structures within the hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A global deformation measurement method based on image stitching is characterized by comprising the following steps:
step S1: arranging a camera around an object to be measured according to requirements, and calculating and correcting internal and external parameters of the camera, including distortion parameters, by adopting a Zhang calibration method;
step S2: the DLT parameters of each camera are calibrated by using the same three-dimensional calibration object so as to realize the automatic combination of the data of all the cameras to the same coordinate system;
step S3: matching the images acquired by each camera pair using 2D-DIC (two-dimensional digital image correlation), and generating a triangular mesh from the points in the ROI (region of interest);
and step S4: for each group of camera pairs, performing three-dimensional reconstruction under the same coordinate system by using the calibrated DLT parameters and the points matched with the 2D-DIC, and splicing the obtained point clouds of different cameras by a heuristic algorithm to generate a global point cloud;
step S5: deriving the fields of interest, including the displacement field and the strain field, from the three-dimensional coordinates of the triangular-mesh vertices in the global point cloud at different moments, and displaying them visually.
2. The image stitching-based global deformation measurement method according to claim 1, wherein the step S2 comprises:
a point in the world coordinate system O_W-X_WY_WZ_W is converted into the camera coordinate system O_C-X_CY_CZ_C by a rotation matrix R and a translation matrix T:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T$$

a point in the camera coordinate system O_C-X_CY_CZ_C is converted into the image coordinate system o-xy by perspective projection:

$$x = f\,\frac{X_C}{Z_C}, \qquad y = f\,\frac{Y_C}{Z_C}$$

and finally converted into the pixel coordinate system:

$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0$$

combining and rearranging the three formulas gives:

$$u = \frac{L_1 X_W + L_2 Y_W + L_3 Z_W + L_4}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}, \qquad v = \frac{L_5 X_W + L_6 Y_W + L_7 Z_W + L_8}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}$$

where L_1 to L_11 are the 11 DLT parameters; f is the focal length of the camera; uv is the pixel coordinate system; d_x and d_y are the physical sizes of a pixel along the two image axes; and the origin of the image coordinate system xy lies at pixel coordinates (u_0, v_0).
3. The image stitching-based global deformation measurement method according to claim 1, wherein the step S3 comprises:
generating a super-triangle that contains all points of the point set, inserting the points of the set one by one, and finding the triangles whose circumscribed circles contain the inserted point (the influencing triangles), specifically: in two dimensions, let the three vertices of a triangle be A, B and C; when the three points are ordered counter-clockwise, a point D lies inside the circumscribed circle if and only if the following holds:

$$\begin{vmatrix} A_x - D_x & A_y - D_y & (A_x - D_x)^2 + (A_y - D_y)^2 \\ B_x - D_x & B_y - D_y & (B_x - D_x)^2 + (B_y - D_y)^2 \\ C_x - D_x & C_y - D_y & (C_x - D_x)^2 + (C_y - D_y)^2 \end{vmatrix} > 0$$

the common edges of the influencing triangles are then deleted (this step is skipped if there is only one influencing triangle), and the inserted point is connected to all vertices of the influencing triangles, which completes the insertion of one point into the Delaunay triangulation; after a point has been inserted, the next point is inserted by the same procedure until every point of the point set has been inserted.
4. The image stitching-based global deformation measurement method according to claim 3, wherein local optimization is performed after the insertion of the points is completed: two triangles sharing a common edge form a quadrilateral, the maximum empty circle criterion is used to check whether the fourth vertex of the quadrilateral lies inside the circumscribed circle of the triangle, and if so, the diagonal is swapped, completing the local optimization procedure.
5. The image stitching-based global deformation measurement method according to claim 1, wherein the step S4 comprises:
step S4.1: solving the three-dimensional coordinates by adopting a least square method, and calculating each group of camera pairs to obtain a point cloud;
step S4.2: identifying an overlapping area between two point clouds, and deleting grids of the overlapping area from a boundary by using a heuristic algorithm until no overlapping exists;
step S4.3: filling the gap between the two point clouds with new triangular faces using Delaunay triangulation.
6. A global deformation measurement system based on image stitching is characterized by comprising:
a module M1: arranging a camera around an object to be measured according to requirements, and calculating and correcting internal and external parameters of the camera, including distortion parameters, by adopting a Zhang calibration method;
a module M2: the DLT parameters of each camera are calibrated by using the same three-dimensional calibration object so as to realize the automatic combination of the data of all the cameras to the same coordinate system;
a module M3: matching the images acquired by each camera pair using 2D-DIC (two-dimensional digital image correlation), and generating a triangular mesh from the points in the region of interest (ROI);
a module M4: for each group of camera pairs, performing three-dimensional reconstruction under the same coordinate system by using the calibrated DLT parameters and the points matched with the 2D-DIC, and splicing the obtained point clouds of different cameras by a heuristic algorithm to generate a global point cloud;
a module M5: deriving the fields of interest, including the displacement field and the strain field, from the three-dimensional coordinates of the triangular-mesh vertices in the global point cloud at different moments, and displaying them visually.
7. The image stitching-based global deformation measurement system according to claim 6, wherein the module M2 comprises:
a point in the world coordinate system O_W-X_WY_WZ_W is converted into the camera coordinate system O_C-X_CY_CZ_C by a rotation matrix R and a translation matrix T:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T$$

a point in the camera coordinate system O_C-X_CY_CZ_C is converted into the image coordinate system o-xy by perspective projection:

$$x = f\,\frac{X_C}{Z_C}, \qquad y = f\,\frac{Y_C}{Z_C}$$

and finally converted into the pixel coordinate system:

$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0$$

combining and rearranging the three formulas gives:

$$u = \frac{L_1 X_W + L_2 Y_W + L_3 Z_W + L_4}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}, \qquad v = \frac{L_5 X_W + L_6 Y_W + L_7 Z_W + L_8}{L_9 X_W + L_{10} Y_W + L_{11} Z_W + 1}$$

where L_1 to L_11 are the 11 DLT parameters; f is the focal length of the camera; uv is the pixel coordinate system; d_x and d_y are the physical sizes of a pixel along the two image axes; and the origin of the image coordinate system xy lies at pixel coordinates (u_0, v_0).
8. The image stitching-based global deformation measurement system according to claim 6, wherein the module M3 comprises:
generating a super-triangle that contains all points of the point set, inserting the points of the set one by one, and finding the triangles whose circumscribed circles contain the inserted point (the influencing triangles), specifically: in two dimensions, let the three vertices of a triangle be A, B and C; when the three points are ordered counter-clockwise, a point D lies inside the circumscribed circle if and only if the following holds:

$$\begin{vmatrix} A_x - D_x & A_y - D_y & (A_x - D_x)^2 + (A_y - D_y)^2 \\ B_x - D_x & B_y - D_y & (B_x - D_x)^2 + (B_y - D_y)^2 \\ C_x - D_x & C_y - D_y & (C_x - D_x)^2 + (C_y - D_y)^2 \end{vmatrix} > 0$$

the common edges of the influencing triangles are then deleted (this step is skipped if there is only one influencing triangle), and the inserted point is connected to all vertices of the influencing triangles, which completes the insertion of one point into the Delaunay triangulation; after a point has been inserted, the next point is inserted by the same procedure until every point of the point set has been inserted.
9. The image stitching-based global deformation measurement system according to claim 8, wherein local optimization is performed after the insertion of the points is completed: two triangles sharing a common edge form a quadrilateral, the maximum empty circle criterion is used to check whether the fourth vertex of the quadrilateral lies inside the circumscribed circle of the triangle, and if so, the diagonal is swapped, completing the local optimization procedure.
10. The image stitching-based global deformation measurement system according to claim 6, wherein the module M4 comprises:
module M4.1: solving the three-dimensional coordinates by adopting a least square method, and calculating each group of camera pairs to obtain a point cloud;
module M4.2: identifying an overlapping area between the two point clouds, and deleting grids of the overlapping area from the boundary by using a heuristic algorithm until no overlapping exists;
module M4.3: filling the gap between the two point clouds with new triangular faces using Delaunay triangulation.
CN202210933752.8A 2022-08-04 2022-08-04 Global deformation measurement method and system based on image stitching Pending CN115272080A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210933752.8A CN115272080A (en) 2022-08-04 2022-08-04 Global deformation measurement method and system based on image stitching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210933752.8A CN115272080A (en) 2022-08-04 2022-08-04 Global deformation measurement method and system based on image stitching

Publications (1)

Publication Number Publication Date
CN115272080A 2022-11-01

Family

ID=83749928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210933752.8A Pending CN115272080A (en) 2022-08-04 2022-08-04 Global deformation measurement method and system based on image stitching

Country Status (1)

Country Link
CN (1) CN115272080A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115823970A (en) * 2022-12-26 2023-03-21 浙江航天润博测控技术有限公司 Visual shot trajectory generation system
CN116739898A (en) * 2023-06-03 2023-09-12 广州市西克传感器有限公司 Multi-camera point cloud splicing method and device based on cylindrical characteristics
CN116739898B (en) * 2023-06-03 2024-04-30 广东西克智能科技有限公司 Multi-camera point cloud splicing method and device based on cylindrical characteristics

Similar Documents

Publication Publication Date Title
JP5029618B2 (en) Three-dimensional shape measuring apparatus, method and program by pattern projection method
CN115272080A (en) Global deformation measurement method and system based on image stitching
Zhou et al. A novel laser vision sensor for omnidirectional 3D measurement
CN113592721B (en) Photogrammetry method, apparatus, device and storage medium
WO2011145285A1 (en) Image processing device, image processing method and program
Yang et al. Flexible and accurate implementation of a binocular structured light system
CN111192235A (en) Image measuring method based on monocular vision model and perspective transformation
CN110044374A (en) A kind of method and odometer of the monocular vision measurement mileage based on characteristics of image
Shan et al. A calibration method for stereovision system based on solid circle target
CN106500625A (en) A kind of telecentricity stereo vision measuring apparatus and its method for being applied to the measurement of object dimensional pattern micron accuracies
WO2020199439A1 (en) Single- and dual-camera hybrid measurement-based three-dimensional point cloud computing method
JP2002516443A (en) Method and apparatus for three-dimensional display
TW201310004A (en) Correlation arrangement device of digital images
CN102881040A (en) Three-dimensional reconstruction method for mobile photographing of digital camera
Zong et al. A high-efficiency and high-precision automatic 3D scanning system for industrial parts based on a scanning path planning algorithm
JP2017098859A (en) Calibration device of image and calibration method
Luhmann 3D imaging: how to achieve highest accuracy
CN113808019A (en) Non-contact measurement system and method
Franco et al. RGB-D-DIC technique for low-cost 3D displacement fields measurements
GB2569609A (en) Method and device for digital 3D reconstruction
CN111583388A (en) Scanning method and device of three-dimensional scanning system
CN116433841A (en) Real-time model reconstruction method based on global optimization
Gao et al. Full‐field deformation measurement by videogrammetry using self‐adaptive window matching
Skuratovskyi et al. Outdoor mapping framework: from images to 3d model
Rasztovits et al. Comparison of 3D reconstruction services and terrestrial laser scanning for cultural heritage documentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination