CN114396877A - Intelligent three-dimensional displacement field and strain field measurement method oriented to material mechanical properties - Google Patents

Intelligent three-dimensional displacement field and strain field measurement method oriented to material mechanical properties

Info

Publication number
CN114396877A
Authority
CN
China
Prior art keywords
field
dimensional displacement
displacement field
convolution
strain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111400666.2A
Other languages
Chinese (zh)
Other versions
CN114396877B (en)
Inventor
冯明驰
李成南
刘景林
王鑫
孙博望
邓程木
岑明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202111400666.2A
Publication of CN114396877A
Application granted
Publication of CN114396877B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/16 Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01L MEASURING FORCE, STRESS, TORQUE, WORK, MECHANICAL POWER, MECHANICAL EFFICIENCY, OR FLUID PRESSURE
    • G01L 5/00 Apparatus for, or methods of, measuring force, work, mechanical power, or torque, specially adapted for specific purposes
    • G01L 5/16 Apparatus for, or methods of, measuring force, work, mechanical power, or torque, specially adapted for specific purposes for measuring several components of force
    • G01L 5/166 Apparatus for, or methods of, measuring force, work, mechanical power, or torque, specially adapted for specific purposes for measuring several components of force using photoelectric means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an intelligent three-dimensional displacement field and strain field measurement method oriented to the mechanical properties of materials, which comprises the following steps. Step 1: randomly spray speckles on the surface of the material and use two cameras to continuously and synchronously acquire images, recording the process in which the material deforms under an external force. Step 2: construct the binocular images recording the material deformation into a data set. Step 3: establish a neural network model of the three-dimensional displacement field and strain field by combining 2D convolution, 3D convolution, transposed convolution, convolutional LSTM and a multitask neural network. Step 4: train the three-dimensional displacement field and strain field calculation neural network model with the training set data. Step 5: calculate the three-dimensional displacement field and strain field of the material. The method extracts feature information from the images with 2D convolution, refines and encodes it with 3D convolution, and finally calculates the three-dimensional displacement field and strain field of the material by combining the spatio-temporal feature extraction capability of the convolutional LSTM network with transposed convolution.

Description

Intelligent three-dimensional displacement field and strain field measurement method oriented to material mechanical properties
Technical Field
The invention belongs to the field of artificial intelligence and optical measurement, and particularly relates to a three-dimensional displacement field and strain field calculation method for material mechanical property measurement.
Background
Digital Image Correlation (DIC) is a full-field displacement and strain measurement technique that has spread rapidly in the field of experimental mechanics. It is an optical measurement method that strikes a good balance among universality, usability and metrological performance. The method was proposed in the 1980s, and over the past decades numerous scholars have improved the performance, precision and stability of DIC algorithms, which has extended their range of application and usability.
2D-DIC uses only a single camera, which limits it to measuring in-plane deformation and makes it unable to handle complex shapes and deformations. To overcome this limitation, a three-dimensional digital image correlation method (3D-DIC) based on the principle of binocular stereo vision was developed. 3D-DIC can measure the shape, displacement and strain of complex objects; the optical axis of the camera does not need to be perpendicular to the measured surface, the preliminary adjustment of the equipment is simple, and the sensitivity to the environment is low. The continuing convergence of 3D-DIC and computer vision has made it widely used.
Traditional 3D-DIC can, to a certain extent, calculate the displacement field and strain field of material deformation, but the amount of computation is huge and real-time measurement is difficult to achieve. When the parallax between the left and right images is large or the material deforms greatly, the traditional 3D-DIC algorithm tends either to fail to produce a correct result or to produce a result of low precision. External conditions such as lighting also strongly influence the 3D-DIC results. In short, problems such as the heavy computational load, unstable results and strict requirements on measurement conditions greatly restrict the application of traditional 3D-DIC.
A deep-learning-based digital image correlation method has been proposed which obtains the displacement field between two frames by feeding two consecutively changing images into a convolutional neural network at the same time and performing a series of convolution and deconvolution operations. Deep-learning-based three-dimensional reconstruction has also achieved good results, but at present there is no practical deep learning model for calculating the three-dimensional displacement field and strain field. The deformation of a material under an external force is a continuous process, and the deformation at the current moment is related to the past deformation. Processing image data from multiple moments as a time series can therefore predict the displacement field more accurately than processing image data from a single moment. The convolutional LSTM neural network shows strong performance and theoretical advantages on spatio-temporal sequence prediction problems, because it combines the ability of convolutional neural networks to handle spatial structure with the ability of LSTMs to handle temporal sequences. By further combining 2D convolution and 3D convolution to extract and refine spatial features, and using transposed convolution for up-sampling and supplementing high-frequency detail, the intelligent three-dimensional displacement field and strain field calculation method for material mechanical property measurement can better solve the problems of traditional 3D-DIC.
Application publication No. CN112233104A, "Real-time displacement field and strain field detection method, system, device and storage medium", relates to machine vision technology and comprises the following steps: acquiring a first image and first configuration parameters; segmenting the first image according to the first configuration parameters to obtain a plurality of first sub-images; extracting a first feature of each first sub-image; acquiring a second image and second configuration parameters; segmenting the second image according to the second configuration parameters to obtain a plurality of second sub-images; performing a feature search with the first features of the first sub-images, determining the second positions of those features in the corresponding second sub-images, and obtaining second centre coordinates of the first sub-images from the second positions; and obtaining the strain field from the first and second centre coordinates of each first sub-image. That scheme greatly improves the detection efficiency of the strain field: image segmentation speeds up the strain field calculation so that it can run in real time. However, the technology still belongs to traditional two-dimensional displacement field and strain field measurement and cannot accurately detect displacement in three dimensions. The present invention uses deep learning to calculate the three-dimensional displacement field and strain field, accelerates the calculation with a high-performance GPU, and, given sufficient GPU computing power, can also calculate the displacement field and strain field in real time. Compared with two-dimensional displacement field and strain field calculation methods, it can calculate displacement and strain in the depth direction and can accurately calculate the three-dimensional displacement field and strain field on the surface of objects with complex shapes.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. An intelligent three-dimensional displacement field and strain field measurement method for material mechanical properties is provided. The technical scheme of the invention is as follows:
an intelligent three-dimensional displacement field and strain field measurement method for material mechanical properties comprises the following steps:
randomly spraying speckles on the surface of the material, and simultaneously and continuously acquiring images by using two cameras to record the process that the material deforms under the action of external force;
constructing a binocular image for recording material deformation as a data set, wherein the material deformation binocular image data set comprises a training set and a testing set;
establishing, by combining 2D convolution, 3D convolution, transposed convolution, convolutional LSTM and a multitask neural network, a neural network model that takes the image sequences of the left and right cameras as input and simultaneously calculates the three-dimensional displacement field and strain field of the material;
training a three-dimensional displacement field and a strain field calculation neural network model in a training mode of a multitask neural network by utilizing self-built training set data; and calculating errors through a self-built multitask loss function, and optimizing network parameters by adopting an Adam algorithm.
and using the trained three-dimensional displacement field and strain field calculation neural network model, inputting the synchronously acquired binocular image sequence and calculating the three-dimensional displacement field and strain field of the material.
Further, spraying speckle at random on the surface of the material, using two cameras to continuously collect images simultaneously to record the process that the material deforms under the action of external force, specifically includes:
speckles are randomly and uniformly sprayed on the surface of the material, the color of the speckles has higher contrast with the background of the material, and black and white are respectively adopted;
the optical axes of the left camera and the right camera are kept parallel, the distance between the cameras and the material satisfies the condition of clear imaging, and the imaging areas of the material in the left and right cameras are both in the middle of the image;
the left camera and the right camera adopt an external triggering mode to ensure that images are acquired at the same time, the image acquisition frequency of the cameras is kept constant, and the white balance and the exposure time of the cameras are kept constant in the acquisition process;
the left camera and the right camera continuously acquire images at the same time so as to record the deformation process of the material under the action of external force.
Further, constructing a binocular image for recording material deformation as a data set, wherein the material deformation binocular image data set comprises a training set and a testing set, and specifically comprises:
the material deformation three-dimensional image data set is obtained through computer simulation, before the data set is manufactured, the relative position between a left camera and a right camera and a camera imaging model need to be accurately calibrated, and the position, the size and the shape of a material are determined;
the computer simulation method establishes a simulation model through data including camera parameters and position relations obtained through calibration, wherein speckles on the surface of a material in the simulation model are obtained through simulation of an existing speckle generator, a public image 3D-DIC data set and experimental acquisition; carrying out random three-dimensional deformation on materials in the computer simulation model, and calculating imaging results of the simulation model with speckles in the left camera and the right camera according to the three-dimensional imaging model; obtaining real deformation process image data, a real three-dimensional deformation displacement field and a real three-dimensional surface model;
obtaining an image sequence of three-dimensional deformation of the material by simulating continuous deformation for more than 10 times;
image data acquired or simulated by a binocular camera is used as original data, a three-dimensional displacement field and a strain field generated by deformation are used as corresponding real results in the deformation process, and a material deformation data set is formed and comprises a training set and a testing set.
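As a rough illustration of the simulation idea, the Python sketch below generates a random speckle image and deforms it with a known smooth displacement field, giving one (reference image, deformed image, ground-truth field) pair. It is a simplified single-camera, in-plane example only; the actual data set additionally uses the calibrated binocular imaging model described above, and the spot density, smoothness and amplitude values used here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def make_speckle(h=256, w=256, n_spots=3000, seed=0):
    """Render a random white-on-black speckle pattern as a float image in [0, 1]."""
    rng = np.random.default_rng(seed)
    img = np.zeros((h, w))
    img[rng.integers(0, h, n_spots), rng.integers(0, w, n_spots)] = 1.0
    img = gaussian_filter(img, sigma=1.2)        # turn the impulses into round speckles
    return img / img.max()

def random_displacement(h, w, amplitude=3.0, smoothness=20.0, seed=1):
    """Smooth random in-plane displacement field (u, v), in pixels."""
    rng = np.random.default_rng(seed)
    u = gaussian_filter(rng.standard_normal((h, w)), smoothness)
    v = gaussian_filter(rng.standard_normal((h, w)), smoothness)
    u *= amplitude / (np.abs(u).max() + 1e-12)
    v *= amplitude / (np.abs(v).max() + 1e-12)
    return u, v

def warp(img, u, v):
    """Resample img at (y - v, x - u); for a smooth field this approximates a
    deformed image whose ground-truth displacement field is (u, v)."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    return map_coordinates(img, [yy - v, xx - u], order=3, mode="reflect")

reference = make_speckle()
u, v = random_displacement(*reference.shape)
deformed = warp(reference, u, v)   # (reference, deformed, u, v) forms one synthetic sample
```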
Furthermore, the neural network model of the three-dimensional displacement field and the strain field uses 4 2D convolutional layers and 3 3D convolutional layers for feature extraction and refinement, a convolutional LSTM layer for combining spatio-temporal features, and 2 branches, each composed of 4 transposed convolutional layers, for calculating the results of the different tasks; the material three-dimensional displacement field and strain field calculation model has two inputs, namely the image sequences acquired by the left and right cameras; the 2D convolutional layers of the model extract features from the two binocular images separately, the extracted features are combined and input into the 3D convolutional layers, the output of the 3D convolutional layers is used as the input of the convolutional LSTM layer, and the calculation of the three-dimensional displacement field and the strain field is finally realized by 2 branches composed of transposed convolutional layers with independent weights.
Furthermore, among the 4 2D convolutional layers, the convolution stride of the first 3 layers is 2, which is equivalent to performing 3 downsamplings while extracting features, so the output size is 1/8 of the input image, and each of these 3 convolutional layers is followed by a ReLU activation function; the convolution stride of the 4th 2D convolutional layer is 1 and it has no activation function, and the result output by this layer constitutes the coarse features of the image.
Furthermore, the coarse feature tensors of the left and right images are combined to form the input of the 3D convolutional layers; the convolution stride of the 1st 3D convolutional layer is 2, which is equivalent to downsampling the features once more; the convolution stride of the 2nd and 3rd 3D convolutional layers is 1; each 3D convolutional layer is followed by a ReLU activation function, and the result output by these layers constitutes the fine features of the image;
the fine feature tensor obtained from the 3D convolutional layers is input into the convolutional LSTM layer, and the output of the convolutional LSTM layer is sent into 2 branches formed by transposed convolutional layers; the weights of the transposed convolutional layers of the two branches are independent of each other, so the branches can be used to complete different tasks; each of the 2 branches consists of 4 transposed convolutional layers, which allows the fine feature tensor to be up-sampled 4 times and therefore produces a result of the same size as the original input image; branch 1 is used to measure the three-dimensional displacement field of the material, and branch 2 is used to measure the three-dimensional strain field of the material.
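The following PyTorch sketch illustrates one possible reading of this architecture. The channel widths, kernel sizes, the shared 2D encoder, and the choice to stack the left and right coarse features along a depth axis of size 2 before the 3D convolutions are assumptions made for illustration and are not fixed by the description; the minimal ConvLSTM cell stands in for the convolutional LSTM layer.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: the four gates share one convolution."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state=None):
        if state is None:                         # zero initial hidden/cell state
            b, _, h, w = x.shape
            state = (x.new_zeros(b, self.hid_ch, h, w),
                     x.new_zeros(b, self.hid_ch, h, w))
        h_prev, c_prev = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h_prev], 1)), 4, 1)
        c = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class DisplacementStrainNet(nn.Module):
    def __init__(self, feat=64):
        super().__init__()
        # Shared 2D encoder: three stride-2 layers (1/8 resolution) plus one stride-1 layer.
        self.enc2d = nn.Sequential(
            nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, feat, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, 1, 1))                 # coarse features, no activation
        # 3D refinement over the (left, right) pair stacked along a depth axis of size 2;
        # the first layer downsamples once more spatially (1/16 in total).
        self.enc3d = nn.Sequential(
            nn.Conv3d(feat, feat, 3, (1, 2, 2), 1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, 1, 1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, 1, 1), nn.ReLU())
        self.lstm = ConvLSTMCell(2 * feat, 2 * feat)
        # Two task branches, four stride-2 transposed convolutions each (x16 upsampling).
        def branch():
            return nn.Sequential(
                nn.ConvTranspose2d(2 * feat, feat, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(feat, 32, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 4, 2, 1))         # 3 channels: x, y and depth
        self.disp_head, self.strain_head = branch(), branch()

    def forward(self, left_seq, right_seq):
        """left_seq, right_seq: (B, T, 1, H, W), with H and W divisible by 16."""
        state, disp, strain = None, [], []
        for t in range(left_seq.shape[1]):
            fl = self.enc2d(left_seq[:, t])
            fr = self.enc2d(right_seq[:, t])
            f = self.enc3d(torch.stack([fl, fr], dim=2))    # (B, C, 2, h, w)
            h, state = self.lstm(f.flatten(1, 2), state)    # fold depth axis into channels
            disp.append(self.disp_head(h))
            strain.append(self.strain_head(h))
        return torch.stack(disp, 1), torch.stack(strain, 1)  # each (B, T, 3, H, W)

if __name__ == "__main__":
    net = DisplacementStrainNet()
    left = torch.rand(2, 5, 1, 64, 64)
    right = torch.rand(2, 5, 1, 64, 64)
    d, s = net(left, right)
    print(d.shape, s.shape)    # torch.Size([2, 5, 3, 64, 64]) for both outputs
```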
Further, the training of the three-dimensional displacement field and strain field computational neural network model by using the training set data specifically includes:
acquiring two input data which are respectively image sequences synchronously acquired by a left camera and a right camera and are used for training a three-dimensional displacement field and a strain field calculation neural network of a material; calculating output data of a neural network by using the three-dimensional displacement field and the strain field of the material, wherein the output data are respectively the three-dimensional displacement field and the strain field of the material deformation;
the calculation results of the three-dimensional displacement field and the strain field adopt an average error function to evaluate the error between the model estimation result and the real result;
calculating a total error by weighting the two errors respectively;
and (4) reversely propagating the total error through a chain rule, and training the network by using an Adam gradient descent optimization algorithm.
Further, the calculation results of the three-dimensional displacement field and the strain field both adopt an average error function for evaluating an error between a model estimation result and a real result, and specifically include:
Error1 = (1 / (K × L)) × Σ_{i=1..K} Σ_{j=1..L} sqrt[ (u_e1(i,j) - u_g1(i,j))^2 + (v_e1(i,j) - v_g1(i,j))^2 + (w_e1(i,j) - w_g1(i,j))^2 ]

Error2 = (1 / (K × L)) × Σ_{i=1..K} Σ_{j=1..L} sqrt[ (u_e2(i,j) - u_g2(i,j))^2 + (v_e2(i,j) - v_g2(i,j))^2 + (w_e2(i,j) - w_g2(i,j))^2 ]

wherein (u_e1, v_e1, w_e1) represents the calculated displacement in the horizontal, vertical and depth directions, (u_g1, v_g1, w_g1) represents the true displacement in the horizontal, vertical and depth directions, (u_e2, v_e2, w_e2) represents the calculated strain in the horizontal, vertical and depth directions, (u_g2, v_g2, w_g2) represents the true strain in the horizontal, vertical and depth directions, (i, j) represents pixel coordinates, and K and L define the region over which the AEE value is calculated;
by weighting the two errors separately, the total error is calculated as follows:
Error = k1 × Error1 + k2 × Error2, with k1 + k2 = 1.
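A minimal sketch of this multitask loss, assuming the average error function is the mean end-point error (AEE) over the K × L region and taking k1 = k2 = 0.5 as an illustrative weighting:

```python
import torch

def aee(pred, target):
    """Average end-point error: mean Euclidean distance between predicted and true
    3-component vectors (x, y, depth) over all pixels of the evaluation region."""
    return torch.sqrt(((pred - target) ** 2).sum(dim=1) + 1e-12).mean()

def multitask_loss(disp_pred, disp_true, strain_pred, strain_true, k1=0.5, k2=0.5):
    """Weighted sum of the displacement-field and strain-field errors, with k1 + k2 = 1."""
    return k1 * aee(disp_pred, disp_true) + k2 * aee(strain_pred, strain_true)
```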
further, the total error is reversely propagated through a chain rule, and the network is trained by using an Adam gradient descent optimization algorithm, which specifically includes:
First, the first-moment estimate p and the second-moment estimate v of the gradient are computed:

p_l = β1 × p_(l-1) + (1 - β1) × ∇E(θ_(l-1))

v_l = β2 × v_(l-1) + (1 - β2) × (∇E(θ_(l-1)))^2

where l is the iteration number, θ is the parameter vector, E(θ) is the loss function, and β1 and β2 are the gradient decay factors of the first- and second-order moment estimates, respectively.

From the computed p and v, combined with the learning rate α and the small constant ε, the updated value is obtained:

θ_l = θ_(l-1) - α × p_l / (sqrt(v_l) + ε)

The updated θ is used to optimize and learn the parameters of the neural network, thereby improving the accuracy of the network.
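For illustration, a single Adam update written out in NumPy. The bias-correction terms and the default hyper-parameter values follow the standard Adam algorithm and are assumptions here, since the description only names α, ε, β1 and β2:

```python
import numpy as np

def adam_step(theta, grad, p, v, l, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates p and v, bias correction, scaled gradient step."""
    p = beta1 * p + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    p_hat = p / (1 - beta1 ** l)                # bias-corrected estimates (standard Adam)
    v_hat = v / (1 - beta2 ** l)
    theta = theta - alpha * p_hat / (np.sqrt(v_hat) + eps)
    return theta, p, v
```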
Further, using the trained three-dimensional displacement field and strain field calculation neural network model, inputting the synchronously acquired binocular image sequence and calculating the three-dimensional displacement field and strain field of the material specifically includes:
adjusting the acquired image sequence into a format the same as that of a data set, and inputting the image sequence into a material three-dimensional displacement field and strain field calculation neural network model;
extracting coarse features from the input image through 4 2D convolutional layers, combining two input coarse features and inputting the combined two coarse features into three 3D convolutional layers to obtain refined features;
inputting the refined features into the convolutional LSTM layer and then realizing the calculation of the three-dimensional displacement field and strain field through the transposed convolutional layers; the convolutional LSTM layer can process image sequences, so the neural network model can continue to perform three-dimensional displacement field and strain field calculations in combination with historical data.
The invention has the following advantages and beneficial effects:
In this method, the deformation process of the material is recorded by a binocular camera, a binocular image data set of material deformation is constructed, and a neural network capable of simultaneously calculating the three-dimensional displacement field and strain field of the material is built by combining 2D convolution, 3D convolution, transposed convolution, convolutional LSTM and a multitask neural network; the neural network model is trained with the training set data; and finally, the trained three-dimensional displacement field and strain field calculation neural network model is applied to binocular images of real material deformation acquired by the cameras, yielding the global three-dimensional displacement field and strain field of the material. The method not only measures the global three-dimensional displacement field and strain field of the material accurately, but also, thanks to the strong spatio-temporal feature extraction and processing capability of the convolutional LSTM neural network, improves the calculation precision and efficiency of the three-dimensional displacement field and strain field compared with the traditional 3D-DIC model.
The innovation points of the invention are as follows:
1. compared with the traditional displacement field and strain field calculation method, the method has the advantages that the calculation of the three-dimensional displacement field and the three-dimensional strain field is realized by innovatively utilizing a deep learning mode, and a brand-new calculation idea and method are provided for the calculation of the three-dimensional displacement field and the three-dimensional strain field of the material. The method can effectively solve the problem that the traditional method cannot accurately calculate the three-dimensional displacement field under large parallax and large deformation.
2. The invention generates the data set for deep learning training by using a mode of generating speckles and deformation through computer simulation, ensures the accuracy and precision of the data set, and provides reliable data for training a high-precision neural network model.
3. The essence of the deformation process of the material is that the spatial characteristics change along with the change of time, and the convolution LSTM neural network used in the invention can well combine the time and the spatial characteristics to extract and process the characteristics. The 3D convolution layer used in the invention can well extract the characteristics in the depth direction, and plays an important role in the calculation of the three-dimensional displacement field and the strain field.
4. The invention uses a multitask neural network, defines 2 corresponding loss functions for calculating the error aiming at the displacement field calculation and the strain field calculation, and obtains the total error in a weighting mode. Compared with the traditional mode of calculating the displacement field first and then calculating the strain field, the method provided by the invention can be used for simultaneously calculating the displacement field and the strain field on the GPU, and the calculation speed is greatly increased.
Drawings
FIG. 1 is a flow chart illustrating an implementation of a method for intelligent three-dimensional displacement field and strain field calculation for material mechanical property measurement according to an exemplary embodiment of the present invention;
FIG. 2 is a flow chart for calculating three-dimensional displacement and strain fields using the present invention;
FIG. 3 is a schematic diagram illustrating speckle on a surface of a material, according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a process of deforming a material according to an exemplary embodiment;
FIG. 5 is a model schematic of a three-dimensional displacement field and strain field computational neural network shown in accordance with an exemplary embodiment;
FIG. 6 is a diagram illustrating the results of a three-dimensional displacement field and strain field calculation in accordance with an exemplary embodiment;
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
the flow chart of the method for calculating the intelligent three-dimensional displacement field and the strain field for measuring the mechanical properties of the material is shown in figure 1 and specifically comprises the following 5 steps:
step 1, randomly spraying speckles on the surface of the material, and simultaneously and continuously acquiring images by using two cameras to record the process that the material deforms under the action of external force.
And 2, constructing a binocular image for recording material deformation as a data set, wherein the material deformation binocular image data set comprises a training set and a testing set.
And 3, establishing a neural network model for simultaneously calculating the three-dimensional displacement field and strain field of the material from the input image sequences of the left and right cameras, by combining 2D convolution, 3D convolution, transposed convolution, convolutional LSTM and a multitask neural network.
And 4, training a three-dimensional displacement field and a strain field calculation neural network model by using the training set data.
And 5, calculating a neural network model by using the trained three-dimensional displacement field and strain field, inputting a binocular synchronously acquired image sequence, and calculating the three-dimensional displacement field and strain field of the material.
The flow of the specific steps for calculating the three-dimensional displacement field and strain field by means of the model is shown in fig. 2.
As a possible implementation manner of this embodiment, step 1 randomly sprays speckles on the surface of the material as shown in fig. 3, and includes the following steps:
and 11, randomly and uniformly spraying speckles on the surface of the material, wherein the color of the speckles has higher contrast with the background of the material, and the speckles are generally black and white respectively.
And 12, keeping the optical axes of the left and right cameras parallel, ensuring that the distance between the cameras and the material satisfies the requirement of clear imaging, and ensuring that the imaging areas of the material in the left and right cameras are both in the middle of the image.
And step 13, the left camera and the right camera adopt an external triggering mode to ensure that the images are acquired at the same time, and the image acquisition frequency of the cameras is kept constant. The white balance and exposure time of the camera should be kept constant during the acquisition process.
And step 14, continuously acquiring images by the left camera and the right camera at the same time so as to record the deformation process of the material under the action of external force.
As a possible implementation manner of this embodiment, step 2 generates the data-set images as shown in fig. 4, and includes the following steps:
and step 21, obtaining the material deformation stereo image data set through computer simulation. Before the data set is manufactured, the relative position between the left camera and the right camera and the camera imaging model need to be accurately calibrated, and the position, the size and the shape of the material need to be determined.
And step 22, the computer simulation method is used for establishing a simulation model through data such as camera parameters, position relations and the like obtained by calibration, and speckles on the surface of the material in the model can be obtained through simulation of an existing speckle generator, a public image 3D-DIC data set and experimental acquisition. And carrying out random three-dimensional deformation on the material in the computer simulation model, and calculating imaging results of the material in the left camera and the right camera according to the three-dimensional imaging model. Therefore, accurate deformation process image data, a real three-dimensional deformation displacement field and a real three-dimensional surface model can be obtained.
And step 23, obtaining an image sequence of three-dimensional deformation of the material by simulating continuous deformation for more than 10 times.
And 24, using image data acquired or simulated by the binocular camera as original data, and using a three-dimensional displacement field and a strain field generated by deformation as corresponding real results in the deformation process to form a material deformation data set, wherein the data set comprises a training set and a testing set.
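A small sketch of how the simulated sequences could be packaged for training; the .npz file layout and the field names left, right, disp and strain are hypothetical:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class DeformationDataset(Dataset):
    """One simulated deformation sequence per file: binocular image sequences plus
    the ground-truth displacement and strain fields of every frame."""
    def __init__(self, files):
        self.files = list(files)

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        d = np.load(self.files[idx])
        # Images are stored as (T, H, W, 1) grayscale and fields as (T, H, W, 3);
        # convert to the channels-first layout used by the network.
        to_tensor = lambda a: torch.from_numpy(a).float().permute(0, 3, 1, 2)
        return (to_tensor(d["left"]), to_tensor(d["right"]),
                to_tensor(d["disp"]), to_tensor(d["strain"]))
```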
As a possible implementation manner of this embodiment, the material three-dimensional displacement field and strain field calculation neural network model constructed in step 3 is shown in fig. 5, and includes the following steps:
and step 31, combining the 2D convolution, the 3D convolution, the transposition convolution, the convolution LSTM and the multitask neural network to construct a neural network model capable of simultaneously calculating the three-dimensional displacement field and the strain field of the material surface. The model is used for feature extraction and refinement through 4 2D convolutional layers and 3D convolutional layers, a convolutional LSTM layer is used for combining space-time features, and 2 branches respectively consisting of 4 transposed convolutional layers are used for result calculation of different tasks. Wherein the convolution operation step size for the 2D convolutional layer and the 3D convolutional layer may be 1 or 2.
Step 32, the material three-dimensional displacement field and the strain field calculation model have two inputs, which are respectively the image sequences acquired by the left and right cameras. The image sequence is synchronously acquired by a left camera and a right camera, and the deformation process of the material under the action of an external force is recorded.
And step 33, the 2D convolutional layers of the material three-dimensional displacement field and strain field calculation model extract features from the two binocular images separately, the extracted features are combined and input into the 3D convolutional layers, the output of the 3D convolutional layers is used as the input of the convolutional LSTM layer, and the calculation of the three-dimensional displacement field and strain field is finally realized by 2 independent branches composed of transposed convolutional layers with their own weights.
In step 34, the convolution operation of the first 3 2D convolutional layers has a step size of 2, which is equivalent to performing 3 times of downsampling while extracting features, the output size is 1/8 of the input image, and the ReLU function is used as the activation function after all 3 convolutional layers. The convolution operation step size for the 4 th 2D convolutional layer is 1 and there is no activation function thereafter. The result output by this layer is the coarse features of the image.
Step 35, the coarse feature tensors of the left and right images are combined to form the input of the 3D convolutional layer. The convolution step size of the 1 st 3D convolutional layer is 2, which corresponds to one downsampling of the feature. The convolution step size of the 2 nd and 3 rd 3D convolutional layers is 1. After 3D convolutional layers, there is a ReLU function as the activation function. The result of this layer output is a fine feature of the image.
Step 36, the fine feature tensor obtained by the 3D convolutional layers is input into the convolutional LSTM layer, and the output of the convolutional LSTM layer is sent to 2 branches consisting of transposed convolutional layers. The transposed convolutional layer weights of the two branches are independent of each other and can therefore be used to accomplish different tasks. Each of the 2 branches consists of 4 transposed convolutional layers, which allows the fine feature tensor to be up-sampled 4 times, producing a result of the same size as the original input image. Branch 1 is used for measuring the three-dimensional displacement field of the material, and branch 2 is used for measuring the three-dimensional strain field of the material.
As a possible implementation manner of this embodiment, the step 4 includes the following steps:
and step 41, two data inputs are used for training the material three-dimensional displacement field and the strain field calculation neural network, wherein the two data inputs are respectively image sequences synchronously acquired by a left camera and a right camera, and the input data formats are [ n, h, w, c ], wherein n is the number of data frames, h is the height of an input image, w is the width of the input image, and c is the number of channels. Since the input image is a grayscale map, c is 1.
And 42, calculating output data of the neural network by the material three-dimensional displacement field and the strain field, wherein the output data are the three-dimensional displacement field and the strain field of the material deformation respectively. The data formats are [ n, h, w, c ], where n is the number of data frames, h is the height of the output image, w is the width of the output image, and c is the number of channels. For three-dimensional displacement and strain fields, c is 3.
Step 43, the calculation results of the three-dimensional displacement field and the strain field both adopt the following average error function for evaluating the error between the model estimation result and the real result.
Error1 = (1 / (K × L)) × Σ_{i=1..K} Σ_{j=1..L} sqrt[ (u_e1(i,j) - u_g1(i,j))^2 + (v_e1(i,j) - v_g1(i,j))^2 + (w_e1(i,j) - w_g1(i,j))^2 ]

Error2 = (1 / (K × L)) × Σ_{i=1..K} Σ_{j=1..L} sqrt[ (u_e2(i,j) - u_g2(i,j))^2 + (v_e2(i,j) - v_g2(i,j))^2 + (w_e2(i,j) - w_g2(i,j))^2 ]

where (u_e1, v_e1, w_e1) represents the calculated displacement in the horizontal, vertical and depth directions, (u_g1, v_g1, w_g1) represents the true displacement in the horizontal, vertical and depth directions, (u_e2, v_e2, w_e2) represents the calculated strain in the horizontal, vertical and depth directions, (u_g2, v_g2, w_g2) represents the true strain in the horizontal, vertical and depth directions, (i, j) represents pixel coordinates, and K and L define the region over which the AEE value is calculated.
Step 44, the two errors are weighted respectively, and the total error is calculated as follows:
Error = k1 × Error1 + k2 × Error2, with k1 + k2 = 1.
step 45, reversely propagating the total error through a chain rule, and training the network by using an Adam gradient descent optimization algorithm:
First, the first-moment estimate p and the second-moment estimate v of the gradient are computed:

p_l = β1 × p_(l-1) + (1 - β1) × ∇E(θ_(l-1))

v_l = β2 × v_(l-1) + (1 - β2) × (∇E(θ_(l-1)))^2

where l is the iteration number, θ is the parameter vector, E(θ) is the loss function, and β1 and β2 are the gradient decay factors of the first- and second-order moment estimates, respectively.

From the computed p and v, combined with the learning rate α and the small constant ε, the updated value is obtained:

θ_l = θ_(l-1) - α × p_l / (sqrt(v_l) + ε)

The updated θ is used to optimize and learn the parameters of the neural network, thereby improving the accuracy of the network.
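A minimal, self-contained sketch of this training step using torch.optim.Adam; the one-layer stand-in network, the dummy data and the hyper-parameter values are placeholders so the snippet runs on its own:

```python
import torch
import torch.nn as nn

# Stand-in for the displacement/strain network sketched earlier, so this runs alone.
net = nn.Conv2d(1, 3, 3, padding=1)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3, betas=(0.9, 0.999), eps=1e-8)

for step in range(1, 101):
    x = torch.rand(4, 1, 64, 64)            # dummy input batch
    disp_true = torch.rand(4, 3, 64, 64)    # dummy ground-truth displacement field
    disp_pred = net(x)
    loss = torch.sqrt(((disp_pred - disp_true) ** 2).sum(1) + 1e-12).mean()  # AEE-style error
    optimizer.zero_grad()
    loss.backward()      # back-propagate the error by the chain rule
    optimizer.step()     # Adam update of the network parameters
```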
As a possible implementation manner of this embodiment, step 5, whose output result is shown in fig. 6, includes the following steps:
and 51, adjusting the image sequence acquired in the step 2 into the format same as that of the data set, and inputting the image sequence into the material three-dimensional displacement field and strain field calculation neural network model.
Step 52, the input image will extract the coarse features through 4 2D convolutional layers, and then combine the two input coarse features and input them into three 3D convolutional layers to obtain the refined features.
And 53, inputting the refined features into the convolutional LSTM layer, and then realizing the calculation of the three-dimensional displacement field and strain field through the transposed convolutional layers.
Step 54, the convolutional LSTM layer can process the image sequence, so the neural network model can continue to perform three-dimensional displacement field and strain field calculations in combination with historical data.
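A sketch of how the hidden state of the convolutional LSTM can be carried across incoming frames, so that each new frame pair is processed together with the accumulated deformation history; the cell is the same minimal one as in the architecture sketch, repeated so the snippet runs on its own, and the feature sizes are placeholders:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell (as in the architecture sketch)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h_prev, c_prev = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h_prev], 1)), 4, 1)
        c = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

cell = ConvLSTMCell(16, 16)
state = (torch.zeros(1, 16, 8, 8), torch.zeros(1, 16, 8, 8))
with torch.no_grad():
    for t in range(20):                       # frames arriving from the cameras
        feat = torch.rand(1, 16, 8, 8)        # encoder features of the new frame pair
        out, state = cell(feat, state)        # state carries the deformation history
        # "out" would then be decoded by the two transposed-convolution branches
```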
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (10)

1. An intelligent three-dimensional displacement field and strain field measurement method for material mechanical properties is characterized by comprising the following steps:
randomly spraying speckles on the surface of the material, and simultaneously and continuously acquiring images by using two cameras to record the process that the material deforms under the action of external force;
constructing a binocular image for recording material deformation as a data set, wherein the material deformation binocular image data set comprises a training set and a testing set;
establishing a neural network model for simultaneously calculating the three-dimensional displacement field and strain field of the material from the input image sequences of the left and right cameras, by combining 2D convolution, 3D convolution, transposed convolution, convolutional LSTM and a multitask neural network;
training a three-dimensional displacement field and a strain field calculation neural network model in a training mode of a multitask neural network by utilizing self-built training set data;
and calculating a neural network model by using the trained three-dimensional displacement field and strain field, inputting a binocular synchronously acquired image sequence, and calculating the three-dimensional displacement field and strain field of the material.
2. The intelligent three-dimensional displacement field and strain field measurement method oriented to material mechanical properties of claim 1, wherein speckles are sprayed randomly on the surface of the material, and two cameras are used to simultaneously and continuously acquire images to record the deformation process of the material under the action of external force, which specifically comprises:
speckles are randomly and uniformly sprayed on the surface of the material, the color of the speckles has higher contrast with the background of the material, and black and white are respectively adopted;
the optical axes of the left camera and the right camera are kept parallel, the distance between the cameras and the material satisfies the condition of clear imaging, and the imaging areas of the material in the left and right cameras are both in the middle of the image;
the left camera and the right camera adopt an external triggering mode to ensure that images are acquired at the same time, the image acquisition frequency of the cameras is kept constant, and the white balance and the exposure time of the cameras are kept constant in the acquisition process;
the left camera and the right camera continuously acquire images at the same time so as to record the deformation process of the material under the action of external force.
3. The method for measuring the three-dimensional displacement field and the strain field of the material mechanical property oriented intelligent system according to claim 1, wherein the binocular image for recording the material deformation is constructed as a data set, the binocular image data set for the material deformation comprises a training set and a testing set, and specifically comprises the following steps:
the material deformation three-dimensional image data set is obtained through computer simulation, before the data set is manufactured, the relative position between a left camera and a right camera and a camera imaging model need to be accurately calibrated, and the position, the size and the shape of a material are determined;
the computer simulation method establishes a simulation model through data including camera parameters and position relations obtained through calibration, wherein speckles on the surface of a material in the simulation model are obtained through simulation of an existing speckle generator, a public image 3D-DIC data set and experimental acquisition; carrying out random three-dimensional deformation on materials in the computer simulation model, and calculating imaging results of the simulation model with speckles in the left camera and the right camera according to the three-dimensional imaging model; obtaining real deformation process image data, a real three-dimensional deformation displacement field and a real three-dimensional surface model;
obtaining an image sequence of three-dimensional deformation of the material by simulating continuous deformation for more than 10 times;
image data acquired or simulated by a binocular camera is used as original data, a three-dimensional displacement field and a strain field generated by deformation are used as corresponding real results in the deformation process, and a material deformation data set is formed and comprises a training set and a testing set.
4. The method for measuring the intelligent three-dimensional displacement field and the strain field oriented to the mechanical properties of the material as claimed in claim 1, wherein the neural network model of the three-dimensional displacement field and the strain field uses 4 2D convolutional layers and 3 3D convolutional layers for feature extraction and refinement, a convolutional LSTM layer for combining spatio-temporal features, and 2 branches, each composed of 4 transposed convolutional layers, for calculating the results of the different tasks; the material three-dimensional displacement field and strain field calculation model has two inputs, namely the image sequences acquired by the left and right cameras; the 2D convolutional layers of the model extract features from the two binocular images separately, the extracted features are combined and input into the 3D convolutional layers, the output of the 3D convolutional layers is used as the input of the convolutional LSTM layer, and the calculation of the three-dimensional displacement field and the strain field is finally realized by 2 branches composed of transposed convolutional layers with independent weights.
5. The method for measuring the three-dimensional displacement field and the strain field of the material mechanics performance oriented intelligence of claim 4, wherein the step length of the convolution operation of the first 3 2D convolution layers in the 4 2D convolution layers is 2, which is equivalent to that 3 times of down-sampling is carried out while the features are extracted, the size of the output is 1/8 of the input image, and the ReLU function is used as an activation function after the 3 convolution layers; the convolution operation step size for the 4 th 2D convolutional layer is 1 and there is no activation function thereafter, the layer outputs the result as a coarse feature of the image.
6. The method for measuring the three-dimensional displacement field and the strain field based on the material mechanics performance of claim 4, wherein the coarse feature tensors of the left image and the right image are combined together to form the input of a 3D convolutional layer, the convolution step of the 1 st 3D convolutional layer is 2, which is equivalent to performing down-sampling on the features; the convolution step length of the 2 nd and 3 rd 3D convolution layers is 1; after 3D convolution layers, all have a ReLU function as an activation function, and the output result of the layer is the fine feature of the image;
inputting a fine feature tensor obtained by a 3D convolutional layer into a convolutional LSTM layer, wherein the output of the convolutional LSTM layer is respectively sent into 2 branches formed by transposed convolutional layers, and the weights of the transposed convolutional layers of the two branches are independent, so that the fine feature tensor can be used for completing different tasks, and the 2 branches are formed by 4 transposed convolutional layers, which is equivalent to the fine feature tensor which can be up-sampled for 4 times, so that the result with the same size as the original input image is obtained; the branch 1 is used for measuring a three-dimensional displacement field of the material, and the branch 2 is used for measuring a three-dimensional strain field of the material.
7. The method for measuring the three-dimensional displacement field and the strain field of the material mechanics performance oriented intelligence according to claim 1, wherein the training of the three-dimensional displacement field and the strain field by using the training set data to calculate the neural network model specifically comprises:
acquiring two input data which are respectively image sequences synchronously acquired by a left camera and a right camera and are used for training a three-dimensional displacement field and a strain field calculation neural network of a material; calculating output data of a neural network by using the three-dimensional displacement field and the strain field of the material, wherein the output data are respectively the three-dimensional displacement field and the strain field of the material deformation;
the calculation results of the three-dimensional displacement field and the strain field adopt an average error function to evaluate the error between the model estimation result and the real result;
calculating a total error by weighting the two errors respectively;
and (4) reversely propagating the total error through a chain rule, and training the network by using an Adam gradient descent optimization algorithm.
8. The method for measuring the three-dimensional displacement field and the strain field of the material mechanics performance oriented intelligence of claim 7, wherein the calculation results of the three-dimensional displacement field and the strain field both adopt an average error function for evaluating the error between the model estimation result and the real result, and specifically comprises:
Error1 = (1 / (K × L)) × Σ_{i=1..K} Σ_{j=1..L} sqrt[ (u_e1(i,j) - u_g1(i,j))^2 + (v_e1(i,j) - v_g1(i,j))^2 + (w_e1(i,j) - w_g1(i,j))^2 ]

Error2 = (1 / (K × L)) × Σ_{i=1..K} Σ_{j=1..L} sqrt[ (u_e2(i,j) - u_g2(i,j))^2 + (v_e2(i,j) - v_g2(i,j))^2 + (w_e2(i,j) - w_g2(i,j))^2 ]

wherein (u_e1, v_e1, w_e1) represents the calculated displacement in the horizontal, vertical and depth directions, (u_g1, v_g1, w_g1) represents the true displacement in the horizontal, vertical and depth directions, (u_e2, v_e2, w_e2) represents the calculated strain in the horizontal, vertical and depth directions, (u_g2, v_g2, w_g2) represents the true strain in the horizontal, vertical and depth directions, (i, j) represents pixel coordinates, and K and L define the region over which the AEE value is calculated;
by weighting the two errors separately, the total error is calculated as follows:
Error = k1 × Error1 + k2 × Error2, with k1 + k2 = 1.
9. the method for measuring the three-dimensional displacement field and the strain field based on the material mechanics performance of claim 8, wherein the total error is reversely propagated through a chain rule, and an Adam gradient descent optimization algorithm is used for training a network, and specifically comprises the following steps:
first by computing the first moment estimate p and the second moment estimate v of the gradient:
p_l = β1 × p_(l-1) + (1 - β1) × ∇E(θ_(l-1))

v_l = β2 × v_(l-1) + (1 - β2) × (∇E(θ_(l-1)))^2

where l is the iteration number, θ is the parameter vector, E(θ) is the loss function, and β1 and β2 are the gradient decay factors of the first- and second-order moment estimates, respectively;

from the computed p and v, combined with the learning rate α and the small constant ε, the updated value is obtained:

θ_l = θ_(l-1) - α × p_l / (sqrt(v_l) + ε)

and the updated θ is used to optimize and learn the parameters of the neural network, thereby improving the accuracy of the network.
10. The method for measuring the intelligent three-dimensional displacement field and the strain field oriented to the mechanical properties of the material as claimed in claim 9, wherein the method for calculating the neural network model by using the trained three-dimensional displacement field and the strain field, inputting the image sequence acquired by binocular synchronization, and calculating the three-dimensional displacement field and the strain field of the material specifically comprises:
adjusting the acquired image sequence into a format the same as that of a data set, and inputting the image sequence into a material three-dimensional displacement field and strain field calculation neural network model;
extracting coarse features from the input image through 4 2D convolutional layers, combining two input coarse features and inputting the combined two coarse features into three 3D convolutional layers to obtain refined features;
inputting the refined features into the convolutional LSTM layer and then realizing the calculation of the three-dimensional displacement field and strain field through the transposed convolutional layers; the convolutional LSTM layer can process image sequences, so the neural network model can continue to perform three-dimensional displacement field and strain field calculations in combination with historical data.
CN202111400666.2A 2021-11-19 2021-11-19 Intelligent three-dimensional displacement field and strain field measurement method for mechanical properties of materials Active CN114396877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111400666.2A CN114396877B (en) 2021-11-19 2021-11-19 Intelligent three-dimensional displacement field and strain field measurement method for mechanical properties of materials

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111400666.2A CN114396877B (en) 2021-11-19 2021-11-19 Intelligent three-dimensional displacement field and strain field measurement method for mechanical properties of materials

Publications (2)

Publication Number Publication Date
CN114396877A true CN114396877A (en) 2022-04-26
CN114396877B CN114396877B (en) 2023-09-26

Family

ID=81225848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111400666.2A Active CN114396877B (en) 2021-11-19 2021-11-19 Intelligent three-dimensional displacement field and strain field measurement method for mechanical properties of materials

Country Status (1)

Country Link
CN (1) CN114396877B (en)

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6101450A (en) * 1997-06-03 2000-08-08 The Trustees Of Columbia University In The City Of New York Stress analysis using a defect-free four-node finite element technique
DE102006060584A1 (en) * 2006-12-19 2008-06-26 Bundesrepublik Deutschland, vertr. d. d. Bundesministerium für Wirtschaft und Technologie, dieses vertr. d. d. Präsidenten der Physikalisch-Technischen Bundesanstalt Method for measuring displacement and geometry of microstructures, involves focusing incident light on examining structure and light coming from structure is detected
CN101813569A (en) * 2010-03-31 2010-08-25 东南大学 Health monitoring method for identifying damaged cable and support displacement based on strain monitoring
CN110260837A (en) * 2012-03-12 2019-09-20 波音公司 A kind of method and apparatus for determining malformation
CN103308027A (en) * 2012-03-12 2013-09-18 波音公司 A method and apparatus for identifying structural deformation
CN103558243A (en) * 2013-11-19 2014-02-05 北京航空航天大学 Optical method based high-speed aircraft hot surface full-field deformation measuring device
CN105545595A (en) * 2015-12-11 2016-05-04 重庆邮电大学 Wind turbine feedback linearization power control method based on radial basis function neural network
KR20170069138A (en) * 2016-10-12 2017-06-20 연세대학교 산학협력단 Ann-based sustainable strain sensing model system, structural health assessment system and method
US20200134366A1 (en) * 2017-06-16 2020-04-30 Hangzhou Hikvision Digital Technology Co., Ltd. Target recognition method and apparatus for a deformed image
CN108918271A (en) * 2018-09-11 2018-11-30 苏州大学 Young's modulus measurement method based on microoptic digital speckle method
KR102060169B1 (en) * 2018-09-28 2020-02-12 이영우 Apparatus for measuring displacement of structures using the plurality of cameras and method thereof
CN109919905A (en) * 2019-01-08 2019-06-21 浙江大学 A kind of Infrared Non-destructive Testing method based on deep learning
WO2020231683A1 (en) * 2019-05-15 2020-11-19 Corning Incorporated Edge strength testing methods and apparatus
CN110472637A (en) * 2019-07-29 2019-11-19 天津大学 Deep learning variable density low quality electronic speckle stripe direction extracting method
CN110738697A (en) * 2019-10-10 2020-01-31 福州大学 Monocular depth estimation method based on deep learning
CN111160270A (en) * 2019-12-31 2020-05-15 中铁大桥科学研究院有限公司 Bridge monitoring method based on intelligent video identification
US20210241106A1 (en) * 2020-01-30 2021-08-05 Dassault Systemes Deformations basis learning
CN111862201A (en) * 2020-07-17 2020-10-30 北京航空航天大学 Deep learning-based spatial non-cooperative target relative pose estimation method
CN112233104A (en) * 2020-10-27 2021-01-15 广州大学 Real-time displacement field and strain field detection method, system, device and storage medium
WO2022191301A1 (en) * 2021-03-08 2022-09-15 公立大学法人大阪 Calculation method for heating plan, program, recording medium, device, deformation method, plate deformation device, and production method for deformed plate
CN113188453A (en) * 2021-04-30 2021-07-30 东北电力大学 Speckle generating device for film structure non-contact displacement and strain measurement
CN113538473A (en) * 2021-07-08 2021-10-22 南京航空航天大学 Random grid and special-shaped subarea division method for cracks in digital image correlation

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
FARSHID FARNOOD AHMADI: "Integration of industrial videogrammetry and artificial neural networks for monitoring and modeling the deformation or displacement of structures", NEURAL COMPUT & APPLIC, pages 3709 - 3716 *
FENG MINGCHI et al.: "Research on the fusion method for vehicle shape-position based on binocular camera and lidar", 2021 6TH INTERNATIONAL SYMPOSIUM ON COMPUTER AND INFORMATION PROCESSING TECHNOLOGY, pages 419 - 423 *
JINGHUA XU et al.: "Thermal Deformation Defect Prediction for Layered Printing Using Convolutional Generative Adversarial Network", APPLIED SCIENCES, pages 1 - 20 *
FENG MINGCHI et al.: "Large deformation measurement method for speckle images based on deep learning", Acta Optica Sinica, pages 1412001 - 1 *
ZHANG DAN; ZHANG WEIHONG: "Artificial neural network and genetic algorithm optimization method based on thermal stress and deformation of castings", Acta Aeronautica et Astronautica Sinica, no. 04, pages 697 - 703 *
CHEN ZHONG et al.: "Full-field vibration measurement based on binocular stereo vision and digital speckle image correlation", Journal of Vibration and Shock, pages 121 - 126 *
HUANG JU et al.: "Displacement field measurement method for speckle images based on convolutional neural network", Acta Optica Sinica, pages 2012002 - 1 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310217A (en) * 2023-03-15 2023-06-23 精创石溪科技(成都)有限公司 Method for dynamically evaluating muscles in human body movement based on three-dimensional digital image correlation method
CN116310217B (en) * 2023-03-15 2024-01-30 精创石溪科技(成都)有限公司 Method for dynamically evaluating muscles in human body movement based on three-dimensional digital image correlation method
CN116518868A (en) * 2023-07-05 2023-08-01 深圳市海塞姆科技有限公司 Deformation measurement method, device, equipment and storage medium based on artificial intelligence
CN116518868B (en) * 2023-07-05 2023-08-25 深圳市海塞姆科技有限公司 Deformation measurement method, device, equipment and storage medium based on artificial intelligence

Also Published As

Publication number Publication date
CN114396877B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
US20200265597A1 (en) Method for estimating high-quality depth maps based on depth prediction and enhancement subnetworks
CN110009674B (en) Monocular image depth of field real-time calculation method based on unsupervised depth learning
CN109669049B (en) Particle image velocity measurement method based on convolutional neural network
CN104869387B (en) Method for acquiring binocular image maximum parallax based on optical flow method
CN104794737B (en) A kind of depth information Auxiliary Particle Filter tracking
CN109165660A (en) A kind of obvious object detection method based on convolutional neural networks
CN112001960A (en) Monocular image depth estimation method based on multi-scale residual error pyramid attention network model
CN114396877B (en) Intelligent three-dimensional displacement field and strain field measurement method for mechanical properties of materials
CN110910437B (en) Depth prediction method for complex indoor scene
CN109887021A (en) Based on the random walk solid matching method across scale
CN109461177B (en) Monocular image depth prediction method based on neural network
CN110176023A (en) A kind of light stream estimation method based on pyramid structure
CN115830406A (en) Rapid light field depth estimation method based on multiple parallax scales
CN116222577B (en) Closed loop detection method, training method, system, electronic equipment and storage medium
CN112184731B (en) Multi-view stereoscopic depth estimation method based on contrast training
CN114372523A (en) Binocular matching uncertainty estimation method based on evidence deep learning
CN116310219A (en) Three-dimensional foot shape generation method based on conditional diffusion model
CN112907557A (en) Road detection method, road detection device, computing equipment and storage medium
CN116468769A (en) Depth information estimation method based on image
CN114066959A (en) Single-stripe image depth estimation method based on Transformer
US20230177771A1 (en) Method for performing volumetric reconstruction
CN114065650A (en) Deep learning-based multi-scale strain field measurement tracking method for crack tip of material
CN116385520A (en) Wear surface topography luminosity three-dimensional reconstruction method and system integrating full light source images
CN114485417B (en) Structural vibration displacement identification method and system
CN115601423A (en) Edge enhancement-based round hole pose measurement method in binocular vision scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant