CN115187730A - Structural strain measurement method and device, measurement terminal and storage medium - Google Patents

Structural strain measurement method and device, measurement terminal and storage medium

Info

Publication number
CN115187730A (application CN202210849292.0A)
Authority
CN
China
Prior art keywords
image, strain, camera, three-dimensional reconstruction, vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210849292.0A
Other languages
Chinese (zh)
Inventor
Wang Chengzhi
Li Pengfei
Li Zhiming
Wang Yifei
Chen Donghong
Current Assignee
Chongqing Jiaotong University
Original Assignee
Chongqing Jiaotong University
Priority date
Filing date
Publication date
Application filed by Chongqing Jiaotong University
Priority to CN202210849292.0A
Publication of CN115187730A
Legal status: Pending

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 Image analysis; G06T 7/20 Analysis of motion; G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/60 Analysis of geometric attributes; G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2200/08 Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/10 Image acquisition modality; G06T 2207/10028 Range image; depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiments of this application provide a structural strain measurement method and device, a measurement terminal, and a storage medium, belonging to the field of image processing. The method comprises: acquiring a first image of a structure to be measured captured by a first camera and a second image of the structure captured by a second camera; performing three-dimensional reconstruction on the first image and the second image to obtain three-dimensional reconstruction parameter data; and determining the structural strain of the structure to be measured according to the three-dimensional reconstruction parameter data. By capturing images with two cameras and processing those images, this application measures deformation processes that the human eye cannot observe, which effectively saves cost and simplifies installation. The method is suitable for measuring structural strain in a variety of scenarios, effectively reduces the waste of manpower, material, and financial resources, avoids rapid equipment wear, and optimizes cost and benefit.

Description

Structural strain measurement method and device, measurement terminal and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a structural strain measurement method, an apparatus, a measurement terminal, and a storage medium.
Background
In large-scale projects such as water conservancy and bridge engineering, the real-time stress and operating condition of a structure must often be known in order to plan subsequent work. Many conditions in such projects show symptoms that the human eye cannot perceive, so detecting micro-strain in these large structures greatly helps in formulating subsequent plans.
However, conventional measurement mainly employs the following two approaches. 1. Strain gauge measurement: for example, the VCE-4200 vibrating-wire strain gauge manufactured by the Kyoco company, USA, is embedded in mass concrete for long-term monitoring of concrete strain. The strain gauge mainly comprises a vibrating steel wire, a thermistor, an excitation and receiving coil, a protection tube, and so on. In an experiment [1] measuring the strain of the dam face slabs at the Tang River hydropower station, the dam combines two dam types: a concrete gravity dam section and a concrete-face rockfill dam section. For this combined dam body, 3 measuring points were arranged on each of sections 0+180 and 0+330 of the concrete-face rockfill dam, with 2 strain gauges (in two directions) and 1 stress-free gauge at each measuring point (6 measuring points, 12 strain gauges, and 6 stress-free gauges in total), in order to monitor the stress state of the concrete-face rockfill dam in real time during construction and service and to ensure the safe operation of the whole dam. 2. The electrical measurement method: its application technology is mature, and strain parameters are commonly measured with a resistance-type strain gauge. Its working principle relies on the resistance strain effect: when the gauge deforms, the length and cross-sectional area of the resistance wire inside it change, and the resistance value changes accordingly.
However, whichever scheme is adopted requires direct contact with the structure, so the cost is high and wear is fast; alternatively, considerable manpower and material resources are needed to install the related equipment, and the installation process is relatively complicated.
Therefore, solving the above problems is an urgent current need.
Disclosure of Invention
The present application provides a structural strain measurement method, device, measurement terminal, and storage medium, aiming to solve the above problems.
In a first aspect, the present application provides a structural strain measurement method, where the method is applied to a measurement terminal, where the measurement terminal includes a first camera and a second camera, and the method includes:
acquiring a first image of a structure to be detected acquired by the first camera and a second image of the structure to be detected acquired by the second camera;
performing three-dimensional reconstruction on the first image and the second image to obtain three-dimensional reconstruction parameter data;
and determining the structural strain of the structure to be measured according to the three-dimensional reconstruction parameter data.
In a possible embodiment, the angle between the lines of sight of the first camera and the second camera is less than 15 degrees.
In a possible embodiment, the structural strain comprises: displacement; and the determining the structural strain of the structure to be measured according to the three-dimensional reconstruction parameter data comprises:
acquiring a three-dimensional coordinate before movement and a three-dimensional coordinate after movement of each coordinate point in the three-dimensional reconstruction parameter data;
and determining the displacement of each coordinate point according to the three-dimensional coordinate before the movement and the three-dimensional coordinate after the movement.
In one possible embodiment, the structural strain comprises: the amount of rigid body motion; and the determining the structural strain of the structure to be measured according to the three-dimensional reconstruction parameter data comprises:
acquiring 3D-DIC data in the three-dimensional reconstruction parameter data;
calculating real position coordinates of point cloud data and points in the 3D-DIC data;
determining point cloud mass centers and point mass centers respectively corresponding to the point cloud data and the real position coordinates of the points;
determining a point cloud difference value of the point cloud data and the point cloud centroid and a point difference value of the real position coordinates of the point and the point centroid;
forming a target matrix based on the point cloud difference values and the point difference values;
performing singular value decomposition on the target matrix to obtain singular values;
and processing the singular value to obtain a translation vector from the center of mass of the reconstructed coordinate to the true coordinate, wherein the translation vector is used as the amount of rigid motion.
In a possible embodiment, the structural strain comprises: the amount of strain; and the determining the structural strain of the structure to be measured according to the three-dimensional reconstruction parameter data comprises:
acquiring the position vector of each point in each frame from the three-dimensional reconstruction parameter data;
and obtaining the nonzero minimum principal strain and maximum principal strain of each triangular face according to the position vectors.
In a possible embodiment, obtaining the nonzero minimum principal strain and maximum principal strain of each triangular face from the position vectors comprises:
acquiring coordinates of the vertex of each triangle;
calculating a reference orientation vector from the coordinates;
calculating a current orientation vector according to the reference orientation vector;
calculating a reciprocal vector according to the current pointing vector;
determining a deformation gradient tensor according to the reciprocal vector;
calculating the Euler-Almansi finite strain tensor according to the deformation gradient tensor to obtain the eigenvectors and eigenvalues of the Euler-Almansi finite strain tensor;
and calculating to obtain the minimum principal strain and the maximum principal strain according to the eigenvalue and the eigenvector.
In a possible embodiment, the calculating the minimum principal strain and the maximum principal strain according to the eigenvalue and the eigenvector comprises:
finding the minimum value based on the eigenvalues and eigenvectors to obtain the minimum principal strain; and
finding the maximum value based on the eigenvalues and eigenvectors to obtain the maximum principal strain.
In a second aspect, the present application provides a structural strain measurement apparatus, applied to a measurement terminal that includes a first camera and a second camera, the apparatus comprising:
the acquisition unit is used for acquiring a first image of the structure to be detected acquired by the first camera and a second image of the structure to be detected acquired by the second camera;
the processing unit is used for carrying out three-dimensional reconstruction on the first image and the second image to obtain three-dimensional reconstruction parameter data;
and the measuring unit is used for determining the structural strain of the structure to be measured according to the three-dimensional reconstruction parameter data.
In a third aspect, the present application further provides a measurement terminal, including:
the first camera is used for acquiring a first image;
the second camera is used for acquiring a second image;
a memory for storing executable instructions;
a processor, configured to implement the structural strain measurement method according to any one of the first aspect when executing the executable instructions stored in the memory, so as to process the first image and the second image to obtain a structural strain of the structure to be measured.
In a fourth aspect, the present application further provides a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processing device to perform the steps of the structural strain measurement method according to any one of the first aspect.
According to the structural strain measurement method, device, measurement terminal, and storage medium provided by this application, a first image of the structure to be measured captured by the first camera and a second image captured by the second camera are acquired; three-dimensional reconstruction is performed on the first image and the second image to obtain three-dimensional reconstruction parameter data; and the structural strain of the structure to be measured is determined according to the three-dimensional reconstruction parameter data. By capturing images with two cameras and processing those images, this application measures deformation processes that the human eye cannot observe, which effectively saves cost and simplifies installation. The method is suitable for measuring structural strain in a variety of scenarios, effectively reduces the waste of manpower, material, and financial resources, avoids rapid equipment wear, and optimizes cost and benefit.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of a measurement terminal according to a first embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for measuring structural strain according to a second embodiment of the present disclosure;
fig. 3 is a functional block diagram of a structural strain measurement device according to a third embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
First embodiment
Fig. 1 is a schematic structural diagram of a measurement terminal according to an embodiment of the present application. The measurement terminal 100, which implements the structural strain measurement method and apparatus of the embodiments of the present application, can be described with reference to the schematic diagram shown in Fig. 1.
As shown in fig. 1, a schematic diagram of a measurement terminal 100 includes one or more processors 102, one or more memory devices 104, a first camera 106, and a second camera 108, which are interconnected via a bus system and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the measurement terminal 100 shown in fig. 1 are only exemplary and not limiting, and the measurement terminal may have some of the components shown in fig. 1 and may have other components and structures not shown in fig. 1 as needed.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the measurement terminal 100 to perform desired functions.
It should be understood that the processor 102 in the embodiments of the present application may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media.
It should be appreciated that the storage 104 in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be Read-Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the client functionality of the embodiments of the application described below, and/or other desired functionality. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
Second embodiment:
referring to a flowchart of a structural strain measurement method shown in fig. 2, the method is applied to a measurement terminal, where the measurement terminal includes a first camera and a second camera, and the method specifically includes the following steps:
step S201, a first image of the structure to be measured acquired by the first camera and a second image of the structure to be measured acquired by the second camera are acquired.
The angle between the lines of sight of the first camera and the second camera is less than 15 degrees, and both cameras are positioned so that the angle between each line of sight and the surface to be observed is as close to 90 degrees as possible.
It should be understood that the two cameras capture the same shooting area.
Optionally, one of the first camera and the second camera is used as the reference camera, and the other as the deformed camera.
Step S202, three-dimensional reconstruction is carried out on the first image and the second image, and three-dimensional reconstruction parameter data are obtained.
As an embodiment, the three-dimensional reconstruction of the first image and the second image comprises: marking a region of interest on the first image and the second image, and performing three-dimensional reconstruction on the marked images.
It should be noted that, when performing three-dimensional reconstruction, one or more image files to be reconstructed are selected, and the relevant parameters (used to calibrate real-world coordinates) computed and stored in the DLTstruct_cam_.mat file are converted into three-dimensional points and surfaces. In this step, the DLT parameters L^k = [L_1^k, ..., L_11^k] and L^l = [L_1^l, ..., L_11^l] associated with cameras C_k and C_l (k and l denote the indices of the cameras in a particular stereo pair) are used to convert each pair of corresponding image points from their image coordinates to 3D coordinates (X, Y, Z) by:

x_p = (L_1·X + L_2·Y + L_3·Z + L_4) / (L_9·X + L_10·Y + L_11·Z + 1),
y_p = (L_5·X + L_6·Y + L_7·Z + L_8) / (L_9·X + L_10·Y + L_11·Z + 1). (5)

Equation (5) is rearranged into the form U = A·P (equations (6)-(8)), with the unknown world point P = [X, Y, Z]^T, the 4x3 coefficient matrix

A = [ L_1^k - x^k·L_9^k, L_2^k - x^k·L_10^k, L_3^k - x^k·L_11^k;
      L_5^k - y^k·L_9^k, L_6^k - y^k·L_10^k, L_7^k - y^k·L_11^k;
      L_1^l - x^l·L_9^l, L_2^l - x^l·L_10^l, L_3^l - x^l·L_11^l;
      L_5^l - y^l·L_9^l, L_6^l - y^l·L_10^l, L_7^l - y^l·L_11^l ],

and the right-hand side U = [x^k - L_4^k, y^k - L_8^k, x^l - L_4^l, y^l - L_8^l]^T, where (x^k, y^k) and (x^l, y^l) are the matched image points in the two cameras.

Then, the least-squares solution of P is obtained: P = (A^T·A)^(-1)·A^T·U. (9)
In this way, a three-dimensional point cloud is obtained from each stereo pair; and since the coordinates of the calibration control points are expressed in the global coordinate system, all point clouds are automatically reconstructed in the global coordinate system without any additional coordinate transformation.
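The stereo triangulation of equations (5)-(9) can be sketched as follows. This is a minimal illustration, not the application's implementation: the function names `project_dlt` and `triangulate_dlt` and the layout of the 11-parameter vectors are assumptions, and the least-squares solution P = (A^T·A)^(-1)·A^T·U is computed with NumPy's `lstsq` for numerical stability.

```python
import numpy as np

def project_dlt(L, P):
    """Forward DLT model of equation (5): world point (X, Y, Z) -> image point."""
    X, Y, Z = P
    d = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    return ((L[0] * X + L[1] * Y + L[2] * Z + L[3]) / d,
            (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / d)

def triangulate_dlt(L_cams, img_pts):
    """Least-squares 3D point from the 11-parameter DLT vectors of a stereo
    pair (or more cameras) and the matched image points, solving U = A P."""
    A, U = [], []
    for L, (x, y) in zip(L_cams, img_pts):
        # each camera contributes two rows of A and two entries of U
        A.append([L[0] - x * L[8], L[1] - x * L[9], L[2] - x * L[10]])
        A.append([L[4] - y * L[8], L[5] - y * L[9], L[6] - y * L[10]])
        U.append(x - L[3])
        U.append(y - L[7])
    P, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(U), rcond=None)
    return P
```

A consistent pair of synthetic DLT cameras reproduces the original world point exactly, since the system is then solvable without residual.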
Then, distortion correction is performed by selecting the CBparameters_camera_# file of the camera concerned.
Finally, the three-dimensional points are reconstructed using the DLT algorithm. Specifically, the coordinates [X_n, Y_n, Z_n] (n = 1, 2, ..., N) of each set of N reconstructed points on the calibration object, together with the associated true coordinates [X_n^true, Y_n^true, Z_n^true], can be used to calculate the reconstruction errors [ΔX_n, ΔY_n, ΔZ_n]:

ΔX_n = X_n - X_n^true, ΔY_n = Y_n - Y_n^true, ΔZ_n = Z_n - Z_n^true,

with error magnitude ΔR_n = sqrt(ΔX_n^2 + ΔY_n^2 + ΔZ_n^2).

The mean error ε_M and the Root Mean Square (RMS) error ε_R are defined as:

ε_M = (1/N)·Σ_{n=1..N} ΔR_n,
ε_R = sqrt( (1/N)·Σ_{n=1..N} ΔR_n^2 ).
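The error metrics above can be sketched in a few lines of pure Python (the function name `reconstruction_errors` is an assumption for illustration, not part of the application):

```python
import math

def reconstruction_errors(recon, true):
    """Per-point reconstruction error magnitudes, plus the mean error and
    the root-mean-square (RMS) error over all N reconstructed points."""
    mags = [math.dist(p, q) for p, q in zip(recon, true)]  # |ΔR_n| per point
    n = len(mags)
    mean_err = sum(mags) / n                               # ε_M
    rms_err = math.sqrt(sum(m * m for m in mags) / n)      # ε_R
    return mags, mean_err, rms_err
```

Note that the RMS error weighs large outliers more heavily than the mean error, which is why both are reported.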
meanwhile, when three-dimensional reconstruction is carried out, the following steps are also carried out:
and step A, calculating the centroid of the surface, and taking the average value of components of each point coordinate through a plane determined by three points to obtain a centroid coordinate.
And B, calculating the displacement of the point corresponding to the frame and the first frame, and calculating through a formula of three-dimensional Euclidean distance to obtain the displacement of the point.
Step C, sewing the surface: the meshes obtained from each camera pair are independent, not connected to each other, and with merged surfaces, the overlap between the meshes needs to be local. To construct a continuous grid, it is necessary to decompose it and stitch adjacent grids together.
And finally, after three-dimensional reconstruction, storing the parameter data of the three-dimensional reconstruction.
Optionally, the three-dimensional reconstruction parameter data includes, but is not limited to: the worst surface correlation coefficient, the combined worst correlation coefficient, the centroid of the surface calculated from its points, the position vector of each point in each frame, the displacement of each corresponding point calculated via the Euclidean distance formula, the three-dimensional coordinates of each point, the calibrated DLT parameters of each camera, 2D-DIC data, and 3D-DIC data.
It should be noted that the above equation (5) is obtained as follows:
Two key issues in implementing a multi-view setup are system calibration (for stereo triangulation) and data merging (for merging results from multiple stereo pairs). Each camera pair forms a typical stereo pair, and calibration finds the intrinsic parameters (defining the geometric and optical properties of the camera) and the extrinsic parameters (defining the position and orientation of the camera with respect to the reference coordinate system). Together, the intrinsic and extrinsic parameters describe the transformation mapping each three-dimensional object point P(X, Y, Z) in the global coordinate system to an image point I(x_p, y_p) on the camera sensor, based on the direct linear transformation between the optical model and comparator coordinates in close-range photogrammetry. Specifically, the coordinates (X, Y, Z) are first rigidly transformed into coordinates (X_c, Y_c, Z_c) in the camera coordinate system by:

[X_c, Y_c, Z_c]^T = R·[X, Y, Z]^T + [T_x, T_y, T_z]^T, (1)

where R_ij (i, j = 1, 2, 3) are the components of the rotation matrix R, and T_x, T_y, T_z are the components of the translation vector.

The image coordinates are normalized:

x_n = X_c / Z_c, y_n = Y_c / Z_c, (2)

and represented in the sensor coordinate system by:

x_p = f_x·x_n + s·y_n + c_x, y_p = f_y·y_n + c_y, (3)

where f_x and f_y are the focal lengths in pixel units, (c_x, c_y) is the principal point, and s is the skew coefficient. Combining equations (1) and (3), and further assuming s = 0, yields equation (4); rearranging again, and collecting the intrinsic and extrinsic parameters into the 11 coefficients L_1, ..., L_11, finally gives the following formula:

x_p = (L_1·X + L_2·Y + L_3·Z + L_4) / (L_9·X + L_10·Y + L_11·Z + 1),
y_p = (L_5·X + L_6·Y + L_7·Z + L_8) / (L_9·X + L_10·Y + L_11·Z + 1). (5)
Equation (5) establishes the mapping relation between a two-dimensional image point and a three-dimensional world point. Since each control point provides two equations, at least six points are required to obtain the 11 DLT parameters. More points are preferable, because the overdetermined system is solved by least-squares minimization, reducing the influence of experimental errors. In practice, this step requires a 3D (non-planar, e.g. cylindrical) calibration object whose control-point 3D positions are known with sufficient accuracy. An axially symmetric three-dimensional object (here, the investigated cylinder) is the best choice for a 360-degree multi-view system, since images of the calibration object can be obtained from all cameras simultaneously. The calibration images are then analyzed using multiple DIC analyses, and the image points (corresponding to the control points of the calibration object) are detected and matched, in order, to the known 3D positions (X, Y, Z).
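The calibration solve described above (11 DLT parameters from at least six non-coplanar control points, each contributing two linear equations) can be sketched as follows; this is an illustrative least-squares formulation with assumed function names, not the application's code.

```python
import numpy as np

def dlt_project(L, pt):
    """Map a world point through the 11-parameter DLT model of equation (5)."""
    X, Y, Z = pt
    d = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    return ((L[0] * X + L[1] * Y + L[2] * Z + L[3]) / d,
            (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / d)

def calibrate_dlt(world_pts, image_pts):
    """Estimate the 11 DLT parameters of one camera by least squares from
    >= 6 non-coplanar control points and their matched image points."""
    A, b = [], []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        # linearized form of eq. (5): two equations in L1..L11 per point
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        b.extend([x, y])
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L
```

With exact, noise-free correspondences from a non-coplanar point set, the least-squares solve recovers the generating parameters; with real (noisy) data, extra points reduce the influence of experimental error, as noted above.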
Step S203, determining the structural strain quantity of the structure to be measured according to the three-dimensional reconstruction parameter data.
As an embodiment, the structural strain amount includes: displacement; step S203, including: acquiring a three-dimensional coordinate before movement and a three-dimensional coordinate after movement of each coordinate point in the three-dimensional reconstruction parameter data; and determining the displacement of each coordinate point according to the three-dimensional coordinate before the movement and the three-dimensional coordinate after the movement.
For example, suppose [x_1, y_1, z_1] are the coordinates of a point in the first frame and [x_2, y_2, z_2] are its coordinates after movement; then the displacement is

s = sqrt((x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2).
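As a sketch of this step (pure Python; the helper name `point_displacements` is an assumption), the displacement of every coordinate point between the first frame and the moved frame is:

```python
import math

def point_displacements(frame1, frame2):
    """Displacement of each coordinate point between its pre-movement 3D
    coordinates (frame1) and post-movement coordinates (frame2), via the
    three-dimensional Euclidean distance formula."""
    return [math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)
            for (x1, y1, z1), (x2, y2, z2) in zip(frame1, frame2)]
```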
As another embodiment, the structural strain amount includes: the amount of rigid body motion; step S203, including: acquiring 3D-DIC data in the three-dimensional reconstruction parameter data; calculating real position coordinates of point cloud data and points in the 3D-DIC data; determining point cloud mass centers and point mass centers respectively corresponding to the point cloud data and the real position coordinates of the points; determining a point cloud difference value of the point cloud data and the point cloud centroid and a point difference value of the real position coordinates of the point and the point centroid; forming a target matrix based on the point cloud difference value and the point difference value; performing singular value decomposition on the target matrix to obtain singular values; and processing the singular value to obtain a translation vector from the center of mass of the reconstructed coordinate to the true coordinate, wherein the translation vector is used as the rigid motion quantity.
The specific calculation process is as follows:
Based on the 3D-DIC data (DIC3D), the reconstructed point cloud Pfrom and the true position coordinates Pto of the points are obtained, along with the centroids ac and bc of Pfrom and Pto. Compute da = Pfrom - ac and db = Pto - bc; form the matrix M = db^T·da; take the singular value decomposition [U, S, V] = svd(M), where svd denotes computing the singular value decomposition; compute the determinant d = det(U·V^T); construct S = diag(1, 1, d); the rotation matrix from the reconstructed coordinates to the real coordinates is R = U·S·V^T; finally, the translation vector from the centroid of the reconstructed coordinates to the real coordinates is t = bc^T - R·ac^T.
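The SVD-based rigid-body extraction above can be sketched with NumPy as follows. It is a sketch of the stated formulas under assumptions (points stored as rows, a hypothetical function name), with the diag(1, 1, d) factor guarding against reflections:

```python
import numpy as np

def rigid_body_motion(p_from, p_to):
    """Best-fit rotation R and translation t mapping the reconstructed point
    cloud p_from onto the true positions p_to (SVD / Kabsch method)."""
    p_from = np.asarray(p_from, float)
    p_to = np.asarray(p_to, float)
    ac, bc = p_from.mean(axis=0), p_to.mean(axis=0)  # centroids
    da, db = p_from - ac, p_to - bc                  # centred clouds
    M = db.T @ da                                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(U @ Vt))               # reflection guard
    S = np.diag([1.0, 1.0, d])
    R = U @ S @ Vt                                   # rotation matrix
    t = bc - R @ ac                                  # translation (rigid-body amount)
    return R, t
```

Applying a known rotation and translation to a point cloud and feeding both clouds to the function recovers exactly that motion, which is a convenient self-check.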
As still another embodiment, the structural strain comprises: the amount of strain; and step S203 includes: acquiring the position vector of each point in each frame from the three-dimensional reconstruction parameter data; and obtaining the nonzero minimum principal strain and maximum principal strain of each triangular face according to the position vectors.
Optionally, obtaining the nonzero minimum principal strain and maximum principal strain of each triangular face from the position vectors comprises: acquiring the coordinates of the vertices of each triangle; calculating a reference orientation vector from the coordinates; calculating a current orientation vector from the reference orientation vector; calculating reciprocal vectors from the current pointing vector; determining the deformation gradient tensor from the reciprocal vectors; calculating the Euler-Almansi finite strain tensor from the deformation gradient tensor to obtain the eigenvectors and eigenvalues of the Euler-Almansi finite strain tensor; and calculating the minimum principal strain and the maximum principal strain from the eigenvalues and eigenvectors.
In a possible embodiment, the calculating the minimum principal strain and the maximum principal strain according to the eigenvalue and the eigenvector includes: calculating a minimum value based on the characteristic value and the characteristic vector to obtain a minimum principal strain; and obtaining the maximum principal strain by solving the maximum value based on the characteristic value and the characteristic vector.
Specifically, the strain is calculated as follows:

Reference edge vectors of the j-th triangular face:

D1_ij = Vref(F(j,2)) - Vref(F(j,1)); D2_ij = Vref(F(j,3)) - Vref(F(j,1));

obtaining the reference orientation vector:

D3_ij = cross(D1_ij, D2_ij) / norm(cross(D1_ij, D2_ij));

current edge vectors in the i-th time frame:

d1_ij = Vdef_i(F(j,2)) - Vdef_i(F(j,1));

d2_ij = Vdef_i(F(j,3)) - Vdef_i(F(j,1));

obtaining the current orientation vector:

d3_ij = cross(d1_ij, d2_ij) / norm(cross(d1_ij, d2_ij));

Dnorm_j = cross(D1, D2) * D3^T;

obtaining the reciprocal vectors:

Drec1 = cross(D2, D3) / Dnorm_j; Drec2 = cross(D3, D1) / Dnorm_j;

calculating the deformation gradient tensor:

Fmat = d1 * Drec1 + d2 * Drec2 + d3 * D3;

calculating the Euler-Almansi finite strain tensor:

e = (1/2) * (I - (Fmat * Fmat^T)^(-1));

solving for the eigenvectors eigVece and eigenvalues eigVale of the matrix e;

constructing a new vector eigVecPlanInd and letting eigVecPlanInd = [1, 2, 3];

obtaining the non-zero minimum principal strain of each triangular face: epc1 = min([eigVale(eigVecPlanInd(1), eigVecPlanInd(1)), eigVale(eigVecPlanInd(2), eigVecPlanInd(2))]);

obtaining the non-zero maximum principal strain of each triangular face: epc2 = max([eigVale(eigVecPlanInd(1), eigVecPlanInd(1)), eigVale(eigVecPlanInd(2), eigVecPlanInd(2))]);

where cross(D1_ij, D2_ij) denotes the cross product of D1 and D2, and norm denotes the Euclidean length.
Here Vref denotes the vertex list of the triangular faces in the first (reference) time frame; Vdef denotes a cell array of deformed time frames, each cell containing an array of the 3D positions of the F vertices relative to the reference positions; F denotes the vertex index list of the triangular faces; D1_ij, D2_ij and D3_ij denote the two edge vectors and the reference orientation vector of the j-th triangular face in the i-th time frame, respectively; d1_ij, d2_ij and d3_ij denote the two edge vectors and the current orientation vector of the j-th triangular face in the i-th time frame, respectively.
It should be understood that Vref, Vdef, and F are all parameters obtained during the three-dimensional reconstruction process.
Note that the eigVecPlanInd vector stores the three indices 1, 2, 3. In the formula epc1 = min([eigVale(eigVecPlanInd(1), eigVecPlanInd(1)), eigVale(eigVecPlanInd(2), eigVecPlanInd(2))]), eigVecPlanInd(1) means taking the first index value, 1, and eigVale(eigVecPlanInd(1), eigVecPlanInd(1)) means taking the element in the first row and first column of the matrix eigVale, and so on; the eigVecPlanInd vector is constructed manually.
The relationship with eigVece is as follows: when the maximum (minimum) principal strain is found (the maximum (minimum) principal strain is the maximum (minimum) eigenvalue), the index epc2Ind (epc1Ind) of that eigenvalue is also obtained. Since the eigenvectors in eigVece correspond one-to-one by index with the eigenvalues, the eigenvector of the maximum (minimum) principal strain can be extracted from the eigenvector matrix through the index eigVecPlanInd(epc2Ind) (eigVecPlanInd(epc1Ind)); each eigenvector corresponds to one strain direction of the small triangular face. The eigenvectors are stored for subsequent analysis, such as drawing strain-process images, so that the strain direction is also calculated and included in the structural strain measurement.
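As an illustration only, the per-face computation above can be sketched in Python with NumPy (a minimal sketch, not the patent's implementation; the function name triangle_principal_strains is hypothetical, the standard Euler-Almansi definition e = (1/2)(I - (Fmat Fmat^T)^(-1)) is assumed, and the out-of-plane eigenvalue is excluded by its alignment with the face normal rather than by a fixed index vector):

```python
import numpy as np

def triangle_principal_strains(Vref, Vdef, tri):
    """Non-zero min/max principal strain of one triangular face.
    Vref, Vdef: (n, 3) vertex arrays (reference and deformed frames).
    tri: index triple (i0, i1, i2) of the face, as in the F list."""
    # Reference edge vectors and unit normal (reference orientation vector D3)
    D1 = Vref[tri[1]] - Vref[tri[0]]
    D2 = Vref[tri[2]] - Vref[tri[0]]
    D3 = np.cross(D1, D2)
    D3 = D3 / np.linalg.norm(D3)
    # Current edge vectors and unit normal (current orientation vector d3)
    d1 = Vdef[tri[1]] - Vdef[tri[0]]
    d2 = Vdef[tri[2]] - Vdef[tri[0]]
    d3 = np.cross(d1, d2)
    d3 = d3 / np.linalg.norm(d3)
    # Reciprocal (dual) vectors of the reference edge basis
    Dnorm = np.dot(np.cross(D1, D2), D3)
    Drec1 = np.cross(D2, D3) / Dnorm
    Drec2 = np.cross(D3, D1) / Dnorm
    # Deformation gradient: maps D1 -> d1, D2 -> d2, D3 -> d3
    Fmat = np.outer(d1, Drec1) + np.outer(d2, Drec2) + np.outer(d3, D3)
    # Euler-Almansi finite strain tensor e = 0.5 (I - (Fmat Fmat^T)^-1)
    b = Fmat @ Fmat.T
    e = 0.5 * (np.eye(3) - np.linalg.inv(b))
    # Eigen-decomposition (ascending eigenvalues, eigenvectors as columns)
    eigVal, eigVec = np.linalg.eigh(e)
    # The normal direction carries (near-)zero strain; keep the two
    # in-plane eigenvalues, mirroring the eigVecPlanInd selection.
    out_of_plane = int(np.argmax(np.abs(eigVec.T @ d3)))
    in_plane = [k for k in range(3) if k != out_of_plane]
    vals = eigVal[in_plane]
    return float(vals.min()), float(vals.max())
```

For a face stretched by 1.2 along one in-plane axis and compressed by 0.8 along the other, this returns epc1 = 0.5(1 - 1/0.64) and epc2 = 0.5(1 - 1/1.44), matching the formulas above.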
In a possible embodiment, after step S203, the method further includes: selecting, according to a selection on the UI interface, the type of result to be drawn.
Optionally, the type includes, but is not limited to: Almansi strain, Lagrange strain tensor, and Euler-Almansi strain tensor; other strains or tensor variations may be computed by corresponding algorithms from the corresponding parameters in the stored 3D-DIC data, such as the three-dimensional coordinates and displacements of points.
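For instance, under the standard continuum-mechanics definitions, the Green-Lagrange strain tensor can be obtained from the same stored deformation gradient (a hedged sketch; the function name green_lagrange is illustrative, not from the patent):

```python
import numpy as np

def green_lagrange(Fmat):
    """Green-Lagrange strain tensor E = 0.5 (F^T F - I), the referential
    (Lagrangian) counterpart of the Euler-Almansi tensor used above."""
    C = Fmat.T @ Fmat              # right Cauchy-Green deformation tensor
    return 0.5 * (C - np.eye(3))
```

For a uniaxial stretch Fmat = diag(1.2, 1, 1) this gives E_11 = 0.5(1.44 - 1) = 0.22, whereas the Euler-Almansi counterpart gives e_11 = 0.5(1 - 1/1.44).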
The third embodiment:
referring to fig. 3, a structural strain measuring device 500 is shown, the structural strain measuring device 500 is applied to a measuring terminal, the measuring terminal includes a first camera and a second camera, and the structural strain measuring device 500 includes: an acquisition unit 510, a processing unit 520 and a measurement unit 530.
The acquiring unit 510 is configured to acquire a first image of the structure to be detected acquired by the first camera and a second image of the structure to be detected acquired by the second camera;
the processing unit 520 is configured to perform three-dimensional reconstruction on the first image and the second image to obtain three-dimensional reconstruction parameter data;
a measuring unit 530, configured to determine the structural strain of the structure to be measured according to the three-dimensional reconstruction parameter data.
It should be noted that, for the specific functions of the structural strain measurement apparatus 500, reference is made to the description of the method embodiment, and no further description is given here.
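The division into units 510-530 can be sketched as a minimal Python class (illustrative only; the camera callables and the reconstruction and measurement routines are hypothetical placeholders for the steps of the method embodiment):

```python
class StructuralStrainDevice:
    """Mirrors the acquisition / processing / measurement units 510-530."""

    def __init__(self, first_camera, second_camera, reconstruct, measure):
        self.first_camera = first_camera    # callable returning the first image
        self.second_camera = second_camera  # callable returning the second image
        self.reconstruct = reconstruct      # processing unit 520: 3D reconstruction
        self.measure = measure              # measuring unit 530: strain determination

    def acquire(self):
        # Acquisition unit 510: one image of the structure from each camera
        return self.first_camera(), self.second_camera()

    def run(self):
        first_image, second_image = self.acquire()
        params = self.reconstruct(first_image, second_image)
        return self.measure(params)
```

With stub cameras and routines, run() chains the three units in the order described above.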
Further, the present embodiment also provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processing device, the method for measuring a structural strain provided in any of the foregoing embodiments is performed.
The computer program product of the structural strain measurement method and apparatus provided in the embodiments of the present application includes a computer readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementation may refer to the method embodiments, and will not be described herein again.
It should be noted that the above embodiments may be implemented in whole or in part by software, hardware (e.g., a circuit), firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. The procedures or functions described in accordance with the embodiments of the present application are produced in whole or in part when the computer instructions or the computer program are loaded or executed on a computer. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship, but it may also indicate an "and/or" relationship, which can be understood with reference to the context.
In this application, "at least one" means one or more, and "a plurality" means two or more. "At least one of the following" or similar expressions refer to any combination of the listed items, including any combination of singular or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.

Claims (10)

1. A structural strain measurement method is applied to a measurement terminal, wherein the measurement terminal comprises a first camera and a second camera, and the method comprises the following steps:
acquiring a first image of a structure to be detected acquired by the first camera and a second image of the structure to be detected acquired by the second camera;
performing three-dimensional reconstruction on the first image and the second image to obtain three-dimensional reconstruction parameter data;
and determining the structural dependent variable of the structure to be detected according to the three-dimensional reconstruction parameter data.
2. The method of claim 1, wherein an included angle between the line of sight of the first camera and the line of sight of the second camera is less than 15 degrees.
3. The method of claim 1, wherein the structural strain comprises: displacement; and the determining the structural strain of the structure to be measured according to the three-dimensional reconstruction parameter data comprises:
acquiring a three-dimensional coordinate before movement and a three-dimensional coordinate after movement of each coordinate point in the three-dimensional reconstruction parameter data;
and determining the displacement of each coordinate point according to the three-dimensional coordinate before movement and the three-dimensional coordinate after movement.
4. The method of claim 1, wherein the structural strain comprises: an amount of rigid body motion; and the determining the structural strain of the structure to be measured according to the three-dimensional reconstruction parameter data comprises:
acquiring 3D-DIC data in the three-dimensional reconstruction parameter data;
acquiring point cloud data in the 3D-DIC data and real position coordinates of points;
determining point cloud mass centers and point mass centers respectively corresponding to the point cloud data and the real position coordinates of the points;
determining a point cloud difference value of the point cloud data and the point cloud centroid and a point difference value of the real position coordinates of the point and the point centroid;
forming a target matrix based on the point cloud difference value and the point difference value;
performing singular value decomposition on the target matrix to obtain singular values;
and processing the singular values to obtain a translation vector from the centroid of the reconstructed coordinates to the true coordinates, the translation vector serving as the amount of rigid body motion.
5. The method of claim 1, wherein the structural strain comprises: an amount of strain; and the determining the structural strain of the structure to be measured according to the three-dimensional reconstruction parameter data comprises:
acquiring a position vector of a point corresponding to a frame of each frame in the three-dimensional reconstruction parameter data;
and obtaining the non-zero minimum principal strain and maximum principal strain of each triangular face according to the position vector.
6. The method of claim 5, wherein the obtaining the non-zero minimum principal strain and maximum principal strain of each triangular face according to the position vector comprises:
acquiring the vertex coordinates of each triangular face;
calculating a reference orientation vector from the coordinates;
calculating a current orientation vector according to the reference orientation vector;
calculating a reciprocal vector according to the current orientation vector;
determining a deformation gradient tensor according to the reciprocal vector;
calculating an Euler-Almansi finite strain tensor according to the deformation gradient tensor to obtain an eigenvector and an eigenvalue of the Euler-Almansi finite strain tensor;
and calculating to obtain the minimum principal strain and the maximum principal strain according to the eigenvalue and the eigenvector.
7. The method of claim 6, wherein the calculating the minimum principal strain and the maximum principal strain according to the eigenvalue and the eigenvector comprises:
solving for the minimum value based on the eigenvalue and the eigenvector to obtain the minimum principal strain; and
solving for the maximum value based on the eigenvalue and the eigenvector to obtain the maximum principal strain.
8. A structural strain measuring device, applied to a measurement terminal, wherein the measurement terminal comprises a first camera and a second camera, and the device comprises:
the acquisition unit is used for acquiring a first image of the structure to be detected acquired by the first camera and a second image of the structure to be detected acquired by the second camera;
the processing unit is used for carrying out three-dimensional reconstruction on the first image and the second image to obtain three-dimensional reconstruction parameter data;
and the measuring unit is used for determining the structural strain of the structure to be measured according to the three-dimensional reconstruction parameter data.
9. A measurement terminal, comprising:
the first camera is used for acquiring a first image;
the second camera is used for acquiring a second image;
a memory for storing executable instructions;
a processor, configured to implement the structural strain measurement method according to any one of claims 1 to 7 when executing the executable instructions stored in the memory, so as to process the first image and the second image to obtain a structural strain of the structure to be measured.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processing device, performs the steps of the structural strain measurement method according to any one of claims 1-7.
CN202210849292.0A 2022-07-19 2022-07-19 Structural strain measuring method and device, measuring terminal and storage medium Pending CN115187730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210849292.0A CN115187730A (en) 2022-07-19 2022-07-19 Structural strain measuring method and device, measuring terminal and storage medium

Publications (1)

Publication Number Publication Date
CN115187730A true CN115187730A (en) 2022-10-14

Family

ID=83520221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210849292.0A Pending CN115187730A (en) 2022-07-19 2022-07-19 Structural strain measuring method and device, measuring terminal and storage medium

Country Status (1)

Country Link
CN (1) CN115187730A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116929311A (en) * 2023-09-19 2023-10-24 中铁第一勘察设计院集团有限公司 Section deformation monitoring method, device and system for zoom imaging and storage medium
CN116929311B (en) * 2023-09-19 2024-02-02 中铁第一勘察设计院集团有限公司 Section deformation monitoring method, device and system for zoom imaging and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination