CN114065650A - Deep learning-based multi-scale strain field measurement tracking method for crack tip of material - Google Patents


Publication number
CN114065650A
Authority
CN
China
Prior art keywords
strain field
focus
dimensional strain
dimensional
scale
Prior art date
Legal status
Pending
Application number
CN202111420167.XA
Other languages
Chinese (zh)
Inventor
冯明驰
李成南
王鑫
孙博望
邓程木
刘景林
岑明
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202111420167.XA priority Critical patent/CN114065650A/en
Publication of CN114065650A publication Critical patent/CN114065650A/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/20: Design optimisation, verification or simulation
    • G06F 30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects


Abstract

The invention provides a deep learning-based multi-scale strain field measurement tracking method for a material crack tip, which comprises the following steps: random speckles are sprayed on the surface of the material, an external force is applied to the material to deform it and generate cracks, and multi-scale information of the material deformation is acquired using camera combinations with different focal lengths; a multi-scale material deformation image sequence is constructed as a data set; a neural network model for measuring the global three-dimensional strain field of the material is built by combining convolution, transposed convolution and convolutional LSTM neural networks; the model is trained with the training set data; the trained model then takes the multi-scale images collected by the cameras as input, measures the three-dimensional strain field of the material in real time, locates the crack region of the material from the strain field, and moves the binocular tele camera to track the crack tip in real time. The movable long-focus binocular camera provided by the invention tracks the crack region.

Description

Deep learning-based multi-scale strain field measurement tracking method for crack tip of material
Technical Field
The invention belongs to the field of image processing, and particularly relates to image-based strain field measurement technology.
Background
Digital Image Correlation (DIC) is a full-field strain measurement technique that has rapidly gained popularity in experimental mechanics. It is an optical measurement method that strikes a good balance between versatility, usability and metrological performance. The method was proposed in the 1980s, and over the past decades numerous scholars have improved the performance, precision and stability of DIC algorithms, thereby expanding their range of application and usability.
2D-DIC uses only a single camera, limiting it to in-plane deformation measurement and ruling out complex shapes and deformations. To overcome this limitation, a three-dimensional digital image correlation method (3D-DIC) based on the principle of binocular stereo vision was developed. 3D-DIC can measure the shape, deformation and strain of complex objects; the camera's optical axis does not need to be perpendicular to the object surface before measurement, the early-stage equipment setup is simple, and sensitivity to the environment is low. The continuing convergence of 3D-DIC and computer vision has made it widely available.
Traditional 3D-DIC can, to a certain extent, calculate the displacement field and strain field of a deforming material, but the computational cost is huge and real-time measurement is difficult to achieve. When the parallax between the left and right images is too large or too small, or when the material deforms greatly, traditional 3D-DIC algorithms often either fail to compute a result or produce results of low precision. External conditions such as lighting also strongly influence the calculation results of 3D-DIC. In short, the large computational burden, unstable results and strict experimental requirements greatly restrict the application of traditional 3D-DIC technology.
When the measured object is large, the lens focal length or the camera-to-object distance must be adjusted, which reduces the effective pixels covering the object, so that its details cannot be accurately captured and the measurement accuracy no longer meets requirements. In mechanical experiments, the crack propagation area generally keeps growing while the crack itself is relatively tiny, so a single short-focus camera can hardly acquire accurate information about the crack region. A dynamic multi-camera system with different focal lengths can obtain material information at different scales, with the long-focus camera tracking the crack region to capture more detailed information about it.
A deep learning-based digital image correlation method has been proposed in which two consecutive frames of a deforming sequence are fed simultaneously into a convolutional neural network, and a series of convolution and deconvolution operations yields the strain field between the two frames. Deep learning-based three-dimensional reconstruction has also achieved good results, but so far no feasible deep learning model exists for three-dimensional strain field calculation. The deformation of a material under external force is a continuous process, and the deformation at the current moment is related to past deformation. Therefore, processing image data from multiple moments as a time series yields better strain field prediction accuracy than processing image data from a single moment. The convolutional LSTM neural network, which combines the convolutional neural network's ability to handle spatial problems with the LSTM's ability to handle time series, has shown strong performance and theoretical advantages in spatio-temporal sequence prediction. 2D and 3D convolutions are further combined to extract and refine spatial features, and transposed convolutions supply high-frequency details and upsampling. A deep learning-based multi-scale strain field measurement tracking method for the material crack tip can therefore better solve the problems of traditional 3D-DIC.
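The convolutional LSTM mechanism referred to above can be sketched as a single-step cell. This is a minimal NumPy illustration of the general ConvLSTM update (the four LSTM gates computed with convolutions over 2D feature maps), not the patent's actual network; the 3x3 kernels, single channel and random weights are illustrative assumptions.

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 'same'-padded 2D convolution: x is (H, W), w is (k, k), k odd."""
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * w)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, Wx, Wh, b):
    """One ConvLSTM time step: the gates are convolutional, so the hidden
    state h and cell state c remain 2D spatial maps."""
    z = {g: conv2d_same(x, Wx[g]) + conv2d_same(h, Wh[g]) + b[g]
         for g in "ifog"}
    i_t, f_t, o_t = sigmoid(z["i"]), sigmoid(z["f"]), sigmoid(z["o"])
    g_t = np.tanh(z["g"])
    c_next = f_t * c + i_t * g_t        # memory update keeps spatial layout
    h_next = o_t * np.tanh(c_next)      # output gate modulates the new state
    return h_next, c_next

# Toy usage: run a 3-frame sequence of 6x6 single-channel "images".
rng = np.random.default_rng(0)
Wx = {g: rng.normal(scale=0.1, size=(3, 3)) for g in "ifog"}
Wh = {g: rng.normal(scale=0.1, size=(3, 3)) for g in "ifog"}
b = {g: 0.0 for g in "ifog"}
h = np.zeros((6, 6))
c = np.zeros((6, 6))
for _ in range(3):
    h, c = convlstm_step(rng.normal(size=(6, 6)), h, c, Wx, Wh, b)
```

Because the hidden state stays a spatial map, stacking such cells over an image sequence lets the network combine temporal memory with spatial features, which is the property the method relies on.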
Application publication No. CN112233104A, "A real-time displacement field and strain field detection method, system, device and storage medium", relates to machine vision technology and comprises the following steps: acquiring a first image and first configuration parameters; segmenting the first image according to the first configuration parameters to obtain a plurality of first sub-images; extracting a first feature of each first sub-image; acquiring a second image and second configuration parameters; segmenting the second image according to the second configuration parameters to obtain a plurality of second sub-images; performing a feature search using the first features, determining the position of each first feature within the corresponding second sub-image, and obtaining second center coordinates of each first sub-image from those positions; and obtaining the strain field from the first and second center coordinates of each first sub-image. This scheme can greatly improve the detection efficiency of the strain field: image segmentation speeds up the strain field calculation so that it can run in real time. However, this technique still belongs to traditional two-dimensional displacement and strain field measurement and cannot accurately detect displacement in three dimensions.
In contrast, the method of the invention extracts and refines image features through a series of 2D and 3D convolutional layers, calculates a three-dimensional strain field by upsampling with transposed convolutional layers, obtains a global three-dimensional strain field with higher precision in the crack region by fusing the three-dimensional strain fields of binocular cameras with different focal lengths, and finally tracks the crack region with the long-focus cameras and the convolutional LSTM neural network.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. A deep learning-based multi-scale strain field measurement tracking method for a material crack tip is provided. The technical scheme of the invention is as follows:
a deep learning-based multi-scale strain field measurement tracking method for a material crack tip comprises the following steps:
randomly spraying speckles on the surface of the material, applying an external force to the material to deform it and generate cracks, and collecting multi-scale information of the deformation using camera combinations with different focal lengths;
constructing a multi-scale material deformation image sequence as a data set, wherein the multi-scale material deformation image sequence data set comprises a training set and a testing set;
establishing a neural network model for measuring a material global three-dimensional strain field by inputting an image sequence of multi-scale material deformation by combining convolution, transposition convolution and convolution LSTM neural network;
training a material three-dimensional strain field measurement neural network model by using training set data;
using the trained material three-dimensional strain field measurement neural network model: inputting the multi-scale images collected by the cameras, measuring the three-dimensional strain field of the material in real time, calculating the crack region of the material from the strain field, and then moving the binocular tele camera to track the crack tip in real time.
Further, randomly spraying speckles on the surface of the material, applying an external force to deform it and generate cracks, and collecting multi-scale deformation information with camera combinations of different focal lengths specifically comprises:
randomly and uniformly spraying speckles on the surface of the material, wherein the color of the speckles and the background of the material are black and white respectively;
arranging a binocular long-focus camera and a binocular short-focus camera on an XYZ precision moving platform to acquire images simultaneously, wherein the binocular long-focus camera is positioned between the two short-focus cameras;
all cameras adopt an external triggering mode to ensure that images are acquired at the same time, the image acquisition frequency of the cameras is kept constant, and the white balance and exposure time of the cameras are kept constant in the acquisition process;
all cameras continuously acquire images at the same time to record the deformation process of the material under external force, wherein the binocular short-focus camera is fixed and collects the global image information of the material, while the binocular long-focus camera is controlled via the XYZ precision moving platform to track the tiny crack-tip region of the material and record the detail changes of the crack region.
Further, the constructing a multi-scale material deformation image sequence as a data set specifically includes:
accurately calibrating the relative position between two groups of binocular cameras and a camera imaging model respectively, and determining the position, size and shape of a material;
the computer simulation method is characterized in that camera parameters and position relation data obtained by calibration are used for establishing a simulation model, wherein speckles on the surface of a material in the model are obtained by simulation of an existing speckle generator, a public image 3D-DIC data set and experimental acquisition;
carrying out random three-dimensional deformation on a material in a computer simulation model and simulating the generation of cracks, calculating the imaging results of the material with simulated speckles in all cameras according to a three-dimensional imaging model, moving a binocular tele camera to track the tips of the cracks in the generation process of the cracks, and recording detailed information of crack extension;
acquiring accurate three-dimensional deformation strain field data of the material model and projections of the model in all cameras through multiple times of simulation;
adding certain noise into image data acquired or simulated by a camera as original data, wherein the noise is typical sensor noise simulating an actual linear camera, a three-dimensional strain field generated by deformation is used as a corresponding real result in the deformation process, and the image and the strain field form a material deformation data set which comprises a training set and a testing set.
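The noise-augmentation step above can be sketched as follows. The shot-plus-read noise model and the `gain`/`read_sigma` values are illustrative assumptions for a typical linear camera sensor; the patent does not specify the noise parameters.

```python
import numpy as np

def add_sensor_noise(img, gain=0.01, read_sigma=2.0, rng=None):
    """Add typical linear-camera sensor noise to an 8-bit grayscale image:
    signal-dependent (shot) noise plus additive Gaussian read noise."""
    rng = np.random.default_rng() if rng is None else rng
    f = img.astype(float)
    # Shot noise grows with the square root of the signal level.
    shot = rng.normal(size=f.shape) * np.sqrt(gain * np.maximum(f, 0.0))
    # Read noise is signal-independent additive Gaussian noise.
    read = rng.normal(scale=read_sigma, size=f.shape)
    return np.clip(f + shot + read, 0, 255).astype(np.uint8)

# Toy usage: add noise to a uniform mid-gray 16x16 patch.
clean = np.full((16, 16), 128, dtype=np.uint8)
noisy = add_sensor_noise(clean, rng=np.random.default_rng(1))
```

Applying such noise to the simulated projections narrows the gap between the synthetic training images and real camera data.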
Further, the establishing of the neural network model for measuring the material global three-dimensional strain field by inputting the image sequence of the multi-scale material deformation by combining convolution, transposed convolution and convolution LSTM neural network specifically includes:
combining 2D convolution, 3D convolution, transposed convolution and convolutional LSTM neural networks to construct a model that can simultaneously calculate the global and crack-region three-dimensional strain fields of the material, namely the multi-scale material three-dimensional strain field measurement neural network model; the model performs feature extraction and refinement through 2D and 3D convolutional layers, uses convolutional LSTM layers to fuse the spatio-temporal features of strain fields at different scales, and uses transposed convolutional layers to calculate the three-dimensional strain field;
the multi-scale material three-dimensional strain field measurement neural network model has four inputs, the four inputs are four groups of image sequences acquired by a long-focus binocular camera and a short-focus binocular camera respectively, the image sequences are acquired by the four cameras synchronously, and the deformation process of the material under the action of external force is recorded;
the 2D convolutional layers of the multi-scale material three-dimensional strain field measurement neural network model extract features from the long-focus and short-focus binocular images respectively; the feature extraction results are combined and input into 3D convolutional layers, after which a three-dimensional strain field is obtained through transposed convolutional layers; the long-focus image sequence yields the three-dimensional strain field of the crack region, and the short-focus image sequence yields the global three-dimensional strain field;
the two three-dimensional strain field results are input together into a convolutional LSTM layer and finally fused through a transposed convolutional layer to obtain a higher-precision three-dimensional strain field, which combines feature information across scales and time to provide the crack-region strain field measurement result;
the two three-dimensional strain field measurement networks respectively corresponding to the long-focus image sequence and the short-focus image sequence have the same structure, images acquired by the long-focus binocular camera comprise obvious cracks, and parameters of the two networks are mutually independent.
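The transposed-convolution upsampling used throughout the model can be illustrated in isolation. This is a minimal NumPy sketch, assuming a single channel, stride 2 and no padding; a real implementation would use a deep learning framework's transposed-convolution layer. The point is only that transposed convolution maps a coarse feature map to a finer strain-field grid.

```python
import numpy as np

def transposed_conv2d(x, w, stride=2):
    """Transposed 2D convolution (a.k.a. deconvolution), no padding.

    x: (H, W) input feature map, w: (k, k) kernel.
    Output height: (H - 1) * stride + k, likewise for width.
    """
    H, W = x.shape
    k = w.shape[0]
    out = np.zeros(((H - 1) * stride + k, (W - 1) * stride + k))
    for i in range(H):
        for j in range(W):
            # Each input value "stamps" a scaled copy of the kernel.
            out[i * stride:i * stride + k, j * stride:j * stride + k] += x[i, j] * w
    return out

# A 4x4 feature map upsampled to 9x9 with a 3x3 averaging kernel, stride 2.
feat = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0
up = transposed_conv2d(feat, kernel, stride=2)
```

With a normalized kernel the total "mass" of the feature map is preserved while its spatial resolution increases, which is why the layers can recover dense, high-frequency strain detail from compact features.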
Further, the training of the material three-dimensional strain field measurement neural network model by using the training set data specifically includes:
four data inputs for training a multi-scale material three-dimensional strain field measurement neural network model are provided, and the four data inputs are image sequences acquired by a long-focus binocular camera and a short-focus binocular camera respectively;
the output data of the multi-scale material three-dimensional strain field measurement neural network are a global three-dimensional strain field of material deformation, a three-dimensional strain field of a crack area and a fused three-dimensional strain field;
the calculation results of all three-dimensional strain fields adopt an average error function to evaluate the error between the model estimation result and the real result;
the training process of the network model adopts a multi-stage training method, which is mainly divided into three training stages: the method comprises a three-dimensional strain field measurement network training stage, a three-dimensional strain field fusion training stage and a network fine tuning training stage.
The network parameters are trained by back-propagating the total error through the chain rule and optimizing with the Adam gradient descent algorithm; the updated parameter vector θ is then used to optimize and learn the neural network parameters.
Further, the calculation results of all three-dimensional strain fields are evaluated with the following average endpoint error (AEE) function, which measures the error between the model estimate and the true result:

AEE = (1/(K·L)) Σᵢ₌₁..K Σⱼ₌₁..L √[(u_e(i,j) − u_g(i,j))² + (v_e(i,j) − v_g(i,j))² + (w_e(i,j) − w_g(i,j))²]

where (u_e, v_e, w_e) denote the calculated strain in the horizontal, vertical and depth directions, (u_g, v_g, w_g) denote the true strain in those directions, (i, j) are the pixel coordinates, and K and L define the region over which the AEE value is calculated.
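The average endpoint error can be computed directly. A minimal NumPy sketch, assuming the six strain components are stored as K x L arrays:

```python
import numpy as np

def aee(ue, ve, we, ug, vg, wg):
    """Average endpoint error between estimated (e) and ground-truth (g)
    three-dimensional strain fields, averaged over the K x L region."""
    err = np.sqrt((ue - ug) ** 2 + (ve - vg) ** 2 + (we - wg) ** 2)
    return err.mean()

# Toy check: a uniform offset of 3 in u and 4 in v gives an AEE of 5.
shape = (8, 8)
ug, vg, wg = np.zeros(shape), np.zeros(shape), np.zeros(shape)
ue, ve, we = ug + 3.0, vg + 4.0, wg.copy()
```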
Further, the three training phases are respectively:
a first training stage: network parameters of the fixed characteristic fusion part are unchanged, and a long-focus three-dimensional strain field measurement network and a short-focus three-dimensional strain field measurement network are respectively trained by using the three-dimensional strain field errors of the long focus and the short focus;
a second training stage: network parameters of the fixed long-focus and short-focus three-dimensional strain field measurement part are unchanged, and the network parameters of the feature fusion part are trained by utilizing the fused three-dimensional strain field errors;
a third training stage: continuing to train the network on the basis of the parameters trained in the first and second stages, and fine-tuning all network parameters to obtain a more accurate strain field measurement model; the total error Error_all of the third stage is composed of the strain field error Error_1 of the tele network, the strain field error Error_2 of the short-focus network, and the fused strain field error Error_3:

Error_all = Error_1 + Error_2 + Error_3
Further, the network is trained using the Adam gradient descent optimization algorithm:
first, the first moment estimate p and the second moment estimate v of the gradient are computed:

p_l = β₁·p_{l−1} + (1 − β₁)·∇E(θ_{l−1})
v_l = β₂·v_{l−1} + (1 − β₂)·(∇E(θ_{l−1}))²

where l is the iteration number, θ is the parameter vector, E(θ) is the loss function, and β₁ and β₂ are the gradient decay factors of the first- and second-order moment estimates, respectively;
the updated value of θ is then obtained from the computed p and v together with the learning rate α and the small constant ε, using the bias-corrected moments p̂_l = p_l/(1 − β₁ˡ) and v̂_l = v_l/(1 − β₂ˡ):

θ_l = θ_{l−1} − α·p̂_l/(√v̂_l + ε)

The updated θ is used to optimize and learn the neural network parameters, improving the accuracy of the network.
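The Adam update can be sketched as a standalone step. The quadratic toy loss, the hyperparameter values and the bias-corrected form are standard-Adam assumptions rather than values taken from the patent.

```python
import numpy as np

def adam_step(theta, grad, p, v, l, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; l is the 1-based iteration number."""
    p = beta1 * p + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    p_hat = p / (1 - beta1 ** l)                # bias correction
    v_hat = v / (1 - beta2 ** l)
    theta = theta - alpha * p_hat / (np.sqrt(v_hat) + eps)
    return theta, p, v

# Toy usage: minimize E(theta) = ||theta||^2, whose gradient is 2 * theta.
theta = np.array([1.0, -2.0])
p = np.zeros_like(theta)
v = np.zeros_like(theta)
for l in range(1, 501):
    theta, p, v = adam_step(theta, 2 * theta, p, v, l)
```

After a few hundred steps the parameters approach the minimum at the origin; in practice the same update is applied to every weight tensor of the strain-field network.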
Further, using the trained material three-dimensional strain field measurement neural network model, inputting the multi-scale images acquired by the cameras, measuring the three-dimensional strain field of the material in real time, calculating the crack region of the material from the strain field, and then moving the binocular tele camera to track the crack tip in real time specifically includes:
adjusting the acquired image into a format the same as that of the data set, and inputting the image into the constructed multi-scale material three-dimensional strain field measurement neural network model;
first calculating the three-dimensional strain fields [u_0(i,j), v_0(i,j), w_0(i,j)] of the long-focus and short-focus images with the multi-scale material three-dimensional strain field measurement neural network model, then obtaining the fused three-dimensional strain field [u_1(i,j), v_1(i,j), w_1(i,j)] through the convolutional LSTM and transposed convolutional layers, where u, v, w denote the strain in the horizontal, vertical and depth directions respectively, and i, j denote the pixel coordinates in the image;
finding the position of the crack tip by taking derivatives of the strain field:

f_x(x, y) = ∂f(x, y)/∂x
f_y(x, y) = ∂f(x, y)/∂y

where f_x(x, y) and f_y(x, y) are the first derivatives of the image in the x and y directions respectively, and f(x, y) is the pixel value of the image at coordinates (x, y); according to the actual experimental conditions, a specific threshold k is set, and the crack region in the image is found by searching for the coordinates where |f_x(x, y)| or |f_y(x, y)| exceeds k;
Moving an XYZ precision moving platform to track the extension of the crack in real time according to the calculated position of the crack tip, so that the tele binocular camera can always move along the extension direction of the crack;
and continuously repeating the above processes to calculate the three-dimensional strain field in the material deformation process in real time.
The invention has the following advantages and beneficial effects:
In the method, cameras with different focal lengths jointly capture and record the material deformation process, a multi-scale material deformation image data set is constructed, and a neural network that fuses image information at different scales to measure the three-dimensional strain field of the material is built by combining 2D convolution, 3D convolution, transposed convolution and convolutional LSTM; the neural network model is trained with the training set data; finally, the trained multi-scale material three-dimensional strain field measurement neural network model calculates the strain field of real material deformation images acquired by the cameras, and the crack region is tracked in real time via the XYZ precision moving platform. The method can accurately measure the global three-dimensional strain field of the material, and thanks to the strong spatio-temporal feature extraction and processing capability of the convolutional LSTM neural network, it improves the calculation precision and efficiency of the three-dimensional strain field compared with the traditional 3D-DIC model.
The innovation points of the invention are as follows:
1. according to the method, the multi-scale image information in the material deformation process is acquired through the camera combination with different focal lengths, so that more detailed information of the crack area can be acquired, and the three-dimensional strain field of the crack area can be calculated more accurately. And further fusing the three-dimensional strain field of the crack region and the global three-dimensional strain field, wherein the precision of the three-dimensional strain field obtained by measurement is higher than that of the three-dimensional strain field obtained by single-scale calculation.
2. The method comprises the steps of calculating the global three-dimensional strain field of the material in real time through a GPU, finding out the region of the crack tip, and driving a tele camera to track the position of the crack tip through an XYZ precision moving platform. This solution enables to handle the case of rapid propagation of the crack beyond the visible area during the experiment.
3. The traditional strain field calculation method is to construct an objective function by using a digital image correlation method to perform optimization iterative solution. The application of deep learning in the field of digital images mainly focuses on target detection and image segmentation, and in the field of strain field measurement, the difficulty in obtaining an accurate data set limits the application of deep learning. The invention innovatively utilizes a deep learning mode to realize the calculation of the three-dimensional strain field, adopts a simulation mode to construct a high-precision data set, and provides a brand-new calculation idea and method for the calculation of the three-dimensional strain field of the material. The method can effectively solve the problem that the three-dimensional strain field under large parallax and large deformation cannot be accurately calculated by the traditional method.
4. The invention generates the data set for deep learning training by using a mode of generating speckles and deformation through computer simulation, ensures the accuracy and precision of the data set, and provides reliable data for training a high-precision neural network model.
5. The essence of the deformation process of the material is that the spatial characteristics change along with the change of time, and the convolution LSTM neural network used in the invention can well combine the time and the spatial characteristics to extract and process the characteristics. The 3D convolution layer used in the method can well extract the characteristics in the depth direction, and plays an important role in the calculation of the three-dimensional strain field.
Drawings
FIG. 1 is a flow chart illustrating an implementation of a deep learning based multi-scale strain field measurement tracking method for a material crack tip in accordance with an exemplary embodiment of the present invention;
FIG. 2 is a flow chart illustrating the calculation of a three-dimensional strain field using the present invention according to an exemplary embodiment;
FIG. 3 is a multi-scale material deformation measurement system according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a process of deforming a material according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a multi-scale material three-dimensional strain field measurement neural network model according to an exemplary embodiment;
FIG. 6 is a flowchart illustrating an implementation of a method for computing a tracked crack region and computing a three-dimensional strain field using the present invention in accordance with an exemplary embodiment;
FIG. 7 is a graph illustrating the results of a three-dimensional strain field according to an exemplary embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
the flow chart of the implementation of the deep learning-based multi-scale strain field measurement tracking method for the tip of the crack of the material is shown in fig. 1 and specifically comprises the following 5 steps:
step 1, randomly spraying speckles on the surface of a material, applying an external force to the material to deform the material and generate cracks, and collecting multi-scale information of material deformation by using camera combinations with different focal lengths.
And 2, constructing a multi-scale material deformation image sequence as a data set, wherein the multi-scale material deformation image sequence data set comprises a training set and a testing set.
And 3, combining convolution, transposition convolution and convolution LSTM neural network to establish a neural network model for measuring the material global three-dimensional strain field by inputting the image sequence of multi-scale material deformation.
And 4, training the material three-dimensional strain field measurement neural network model by using the training set data.
And 5, using the trained material three-dimensional strain field measurement neural network model: multi-scale images acquired by the cameras are input, the three-dimensional strain field of the material is measured in real time, the crack region of the material is calculated from the strain field, and the binocular long-focus camera is then moved to track the crack tip in real time.
The flow of the specific steps for calculating the three-dimensional strain field by means of this model is shown in fig. 2.
As a possible implementation manner of this embodiment, the step 1 data acquisition system is shown in fig. 3, and includes the following steps:
and 11, randomly and uniformly spraying speckles on the surface of the material, wherein the speckle color has high contrast with the material background; typically the speckles and the background are black and white respectively.
And 12, arranging a binocular long-focus camera and a binocular short-focus camera on the XYZ precision moving platform to acquire images simultaneously, wherein the binocular long-focus camera is positioned between the two short-focus cameras.
And step 13, ensuring that images are acquired at the same time by all the cameras in an external triggering mode, wherein the image acquisition frequency of the cameras is kept constant. The white balance and exposure time of the camera should be kept constant during the acquisition process.
And step 14, continuously acquiring images by all the cameras at the same time so as to record the deformation process of the material under the action of external force. The binocular short-focus camera is fixed and used for collecting the global image information of the material, and the binocular long-focus camera controls and tracks a tiny region of a crack tip of the collected material by utilizing an XYZ precision moving platform so as to record the detail change of the crack region of the material.
As a possible implementation manner of this embodiment, the data set generation of step 2 is shown in fig. 4 and includes the following steps:
and step 21, the multi-scale material deformation data set is obtained through computer simulation. Before the data set is constructed, the relative positions between the two groups of binocular cameras and the camera imaging models need to be accurately calibrated, and the position, size and shape of the material determined.
And step 22, the computer simulation method establishes a simulation model from the calibrated camera parameters and position relations; the speckles on the surface of the material in the model can be obtained from an existing speckle generator, public 3D-DIC image data sets, or experimental acquisition.
And step 23, carrying out random three-dimensional deformation on the material in the computer simulation model, simulating the generation of cracks, and calculating the imaging results of the material with simulated speckles in all cameras according to the three-dimensional imaging model. And in the generation process of the crack, moving the binocular tele-camera to track the crack tip and recording the detailed information of the crack extension.
And step 24, acquiring accurate three-dimensional deformation strain field data of the material model and the projections of the model in all cameras through multiple simulations.
Step 25, adding a certain amount of noise to the image data acquired or simulated by the cameras as the raw data, where the noise is typical sensor noise simulating an actual linear camera. The three-dimensional strain field generated by the deformation serves as the corresponding ground truth of the deformation process. The images and strain fields constitute the material deformation data set, which includes a training set and a test set.
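The noise-injection step can be sketched as follows. This is a minimal NumPy sketch assuming a simple linear-camera model with signal-dependent shot noise plus Gaussian read noise; the function name and the gain/read-noise parameter values are illustrative, not taken from the patent.

```python
import numpy as np

def add_sensor_noise(img, gain=0.02, read_sigma=2.0, rng=None):
    """Add typical linear-sensor noise to an 8-bit image:
    signal-dependent shot noise plus Gaussian read noise.
    (gain and read_sigma are illustrative placeholder values.)"""
    rng = np.random.default_rng() if rng is None else rng
    f = img.astype(np.float64)
    # Shot noise scales with the square root of the signal level.
    shot = rng.normal(0.0, np.sqrt(np.maximum(f, 0.0) * gain))
    # Read noise is signal-independent Gaussian noise.
    read = rng.normal(0.0, read_sigma, f.shape)
    return np.clip(f + shot + read, 0, 255).astype(np.uint8)
```

Applying such a function to every simulated projection makes the training images closer to what the real cameras deliver.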
As a possible implementation manner of this embodiment, the three-dimensional multi-scale material three-dimensional strain field measurement neural network model constructed in step 3 is shown in fig. 5, and includes the following steps:
and step 31, combining 2D convolution, 3D convolution, transposed convolution and convolutional LSTM neural networks to construct a neural network model that can simultaneously calculate the global and crack-region three-dimensional strain fields of the material and fuse the two strain fields to obtain a finer strain field, hereinafter referred to as the multi-scale material three-dimensional strain field measurement neural network model. The model performs feature extraction and refinement through the 2D and 3D convolutional layers; the convolutional LSTM layer fuses the spatio-temporal features of the strain fields at different scales, and the transposed convolutional layer computes the three-dimensional strain field.
And step 32, the multi-scale material three-dimensional strain field measurement neural network model has four inputs, which are the four image sequences acquired by the long-focus and short-focus binocular cameras. The image sequences are acquired synchronously by the four cameras and record the deformation process of the material under the action of external force.
And step 33, the 2D convolutional layers of the multi-scale material three-dimensional strain field measurement neural network model respectively extract features from the long-focus and short-focus binocular images; the extracted features are combined and fed into the 3D convolutional layer, and the three-dimensional strain field is then obtained through the transposed convolutional layer. The long-focus image sequence yields the three-dimensional strain field of the crack region, and the short-focus image sequence yields the global three-dimensional strain field.
And step 34, inputting the results of the two three-dimensional strain fields into the convolution LSTM layer together, and finally fusing the two strain fields by transposing the convolution layer to obtain a three-dimensional strain field with higher precision. The three-dimensional strain field combines characteristic information of different scales and time, and can provide strain field measurement results with higher precision, particularly crack regions.
And step 35, the two three-dimensional strain field measurement networks respectively corresponding to the long-focus image sequence and the short-focus image sequence have the same structure, but the parameters of the two networks are independent of each other because the images acquired by the long-focus binocular camera have more details and include obvious cracks.
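The convolutional LSTM layer that fuses the two strain-field streams can be illustrated with a minimal single-channel cell. This is a generic ConvLSTM sketch in NumPy, not the patent's actual architecture: the kernel size, the random placeholder weights, and the absence of biases and multiple channels are all simplifying assumptions.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2-D 'same' convolution (correlation form)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros(x.shape, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Minimal single-channel convolutional LSTM cell: one kernel per gate
    for the input frame and one for the hidden state (illustrative only)."""
    def __init__(self, ksize=3, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = {g: rng.normal(0.0, 0.1, (ksize, ksize)) for g in "ifoc"}
        self.Wh = {g: rng.normal(0.0, 0.1, (ksize, ksize)) for g in "ifoc"}

    def step(self, x, h, c):
        gate = lambda g: conv2d_same(x, self.Wx[g]) + conv2d_same(h, self.Wh[g])
        i, f, o = sigmoid(gate("i")), sigmoid(gate("f")), sigmoid(gate("o"))
        g = np.tanh(gate("c"))
        c = f * c + i * g          # cell state accumulates spatio-temporal memory
        h = o * np.tanh(c)         # hidden state is the fused feature map
        return h, c
```

Feeding the global and crack-region strain maps through such a cell timestep by timestep is one way to fuse spatio-temporal features across scales, which is the role the patent assigns to its convolutional LSTM layer (with learned weights).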
As a possible implementation manner of this embodiment, the step 4 includes the following steps:
and step 41, there are four data inputs for training the multi-scale material three-dimensional strain field measurement neural network model, namely the image sequences acquired by the long-focus and short-focus binocular cameras. The input data format is [n, h, w, c], where n is the number of data frames, h is the input image height, w is the input image width, and c is the number of channels. Since the input images are grayscale, c = 1.
And step 42, the output data of the multi-scale material three-dimensional strain field measurement neural network are the global three-dimensional strain field of the material deformation, the crack-region three-dimensional strain field, and the fused three-dimensional strain field. The data format is [n, h, w, c], where n is the number of data frames, h is the output image height, w is the output image width, and c is the number of channels. Since the output strain field is three-dimensional, c = 3 here.
Step 43, the calculation results of all three-dimensional strain fields are evaluated with the following average endpoint error (AEE) function, which measures the error between the model estimate and the ground truth:

AEE = (1/(K·L)) · Σ_{i=1..K} Σ_{j=1..L} √[(u_e(i,j) − u_g(i,j))² + (v_e(i,j) − v_g(i,j))² + (w_e(i,j) − w_g(i,j))²]

where (u_e, v_e, w_e) represents the calculated strain in the horizontal, vertical and depth directions, (u_g, v_g, w_g) represents the true strain in the horizontal, vertical and depth directions, (i, j) represents the pixel coordinates, and K and L define the region over which the AEE value is calculated.
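This error function can be implemented directly. A short NumPy sketch, assuming the strain fields are stored as (K, L, 3) arrays with the last axis holding the (u, v, w) components:

```python
import numpy as np

def average_endpoint_error(est, gt):
    """Average endpoint error between an estimated and a ground-truth
    three-dimensional strain field, each shaped (K, L, 3) with the last
    axis holding the (u, v, w) strain components."""
    diff = est - gt                             # per-pixel component errors
    epe = np.sqrt(np.sum(diff ** 2, axis=-1))   # per-pixel endpoint error
    return float(np.mean(epe))                  # average over the K x L region
```

For example, an estimate offset from the ground truth by (3, 4, 0) at every pixel gives an AEE of exactly 5.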
Step 44, the training process of the network model adopts a multi-stage training method, which is mainly divided into three training stages: the method comprises a three-dimensional strain field measurement network training stage, a three-dimensional strain field fusion training stage and a network fine tuning training stage.
Step 45, training stage one: the network parameters of the feature-fusion part are kept fixed, and the long-focus and short-focus three-dimensional strain field measurement networks are trained separately using the long-focus and short-focus three-dimensional strain field errors respectively.
Training stage two: the network parameters of the long-focus and short-focus three-dimensional strain field measurement parts are kept fixed, and the network parameters of the feature-fusion part are trained using the fused three-dimensional strain field error.
Training stage three: the network continues to be trained on the basis of the parameters obtained in stages one and two, and all network parameters are fine-tuned to obtain a more accurate strain field measurement model. The stage-three error Error_all is composed of the long-focus network strain field error Error_1, the short-focus network strain field error Error_2 and the fused strain field error Error_3:
Error_all = Error_1 + Error_2 + Error_3
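The three-stage schedule with frozen parameter groups can be sketched abstractly. Here the "networks" are stand-in parameter groups updated by plain gradient descent on a toy quadratic loss; `run_stage`, `toy_grads` and all values are illustrative, showing only the freeze-then-fine-tune logic rather than the patent's actual losses or optimizer.

```python
import numpy as np

def run_stage(params, grad_fn, trainable, lr=0.1, steps=50):
    """Run one training stage of plain gradient descent, updating only the
    parameter groups named in `trainable`; all other groups stay frozen."""
    for _ in range(steps):
        grads = grad_fn(params)
        for name in trainable:
            params[name] = params[name] - lr * grads[name]
    return params

# Stand-in parameter groups for the long-focus network, the short-focus
# network and the feature-fusion part, each with a toy quadratic loss whose
# gradient is simply 2 * value (illustrative only).
def toy_grads(params):
    return {k: 2.0 * v for k, v in params.items()}

params = {"tele": np.array([1.0]), "short": np.array([1.0]), "fusion": np.array([1.0])}
params = run_stage(params, toy_grads, trainable=["tele", "short"])            # stage 1
fusion_after_stage1 = params["fusion"].copy()                                 # untouched so far
params = run_stage(params, toy_grads, trainable=["fusion"])                   # stage 2
params = run_stage(params, toy_grads, trainable=["tele", "short", "fusion"])  # stage 3: fine-tune all
```

After stage 1 the frozen fusion group is unchanged, while after stage 3 all three groups have converged toward their optima.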
Step 46, the network parameters are trained by back-propagating the total error via the chain rule and optimizing with the Adam gradient descent algorithm:
First, the first-moment estimate p and the second-moment estimate v of the gradient are computed:

p_l = β1·p_(l-1) + (1 − β1)·∇E(θ_l)
v_l = β2·v_(l-1) + (1 − β2)·[∇E(θ_l)]²

where l is the number of iterations, θ is the parameter vector, E(θ) is the loss function, and β1 and β2 are the gradient decay factors of the first- and second-moment estimates, respectively.
The updated value θ is then obtained from the computed p and v, combined with the learning rate α and the small constant ε:

θ_(l+1) = θ_l − α·p_l/(√v_l + ε)

The updated θ is used to optimize and learn the neural network parameters, improving the accuracy of the network.
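One Adam step in the patent's notation can be sketched as below. The bias-correction terms of full Adam are omitted to match the simplified update described here, and the demo loop minimizing the toy loss E(θ) = θ² is illustrative.

```python
import numpy as np

def adam_update(theta, grad, p, v, alpha=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of the simplified Adam update: p and v are the first- and
    second-moment estimates of the gradient. (Full Adam additionally
    bias-corrects p and v by the iteration count; omitted here.)"""
    p = beta1 * p + (1.0 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1.0 - beta2) * grad ** 2   # second-moment estimate
    theta = theta - alpha * p / (np.sqrt(v) + eps)
    return theta, p, v

# Demo: minimize E(theta) = theta**2, whose gradient is 2 * theta.
theta, p, v = 1.0, 0.0, 0.0
for l in range(1, 501):
    theta, p, v = adam_update(theta, 2.0 * theta, p, v)
```

The adaptive step α·p/(√v + ε) normalizes the update by the running gradient magnitude, which is what makes Adam robust to the very different error scales of the long-focus, short-focus and fused strain-field losses.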
As a possible implementation manner of this embodiment, the model implementation process and output results of step 5, shown in figs. 6 and 7, include the following steps:
and 51, adjusting the acquired images into the same format as the data set and inputting them into the constructed multi-scale material three-dimensional strain field measurement neural network model.
Step 52, the multi-scale material three-dimensional strain field measurement neural network model firstly calculates the three-dimensional strain fields of the long-focus and short-focus images, and then obtains a more accurate three-dimensional strain field through the fusion of the convolution LSTM and the transposed convolution layer.
In step 53, the three-dimensional strain field in the crack region may exhibit a large fluctuation compared to the gradual strain field variation trend in the normal region. The position of the crack tip can thus be found by means of taking the derivative of the strain field.
And step 54, according to the calculated position of the crack tip, the XYZ precision moving platform is moved to track the crack extension in real time, so that the long-focus binocular camera always moves along the extension direction of the crack.
And step 55, the above process is repeated continuously to calculate the three-dimensional strain field of the material deformation process in real time; owing to the fusion of multi-scale image information, the strain field calculation in the material crack region is more accurate.
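The derivative-and-threshold test used to locate the crack region can be sketched in NumPy. The forward-difference form of the derivative and the toy strain map are illustrative assumptions; the patent only specifies that the crack is found where the strain-field derivative exceeds a threshold k.

```python
import numpy as np

def crack_region(f, k):
    """Locate candidate crack pixels in a scalar strain map f by taking
    forward differences in x and y and thresholding their magnitude."""
    dfdx = np.zeros_like(f)
    dfdy = np.zeros_like(f)
    dfdx[:, :-1] = f[:, 1:] - f[:, :-1]   # first derivative along x
    dfdy[:-1, :] = f[1:, :] - f[:-1, :]   # first derivative along y
    return np.argwhere((np.abs(dfdx) > k) | (np.abs(dfdy) > k))

# Toy strain map: a smooth ramp plus a sharp jump along one column,
# standing in for the abrupt strain fluctuation at a crack.
f = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
f[:, 16:] += 5.0
coords = crack_region(f, k=1.0)   # (row, col) pairs flagged as crack
```

Only the column at the jump is flagged: the smooth ramp's derivatives stay far below the threshold, while the crack-like discontinuity exceeds it at every row. The flagged coordinates would then drive the XYZ platform toward the crack tip.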
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (9)

1. A deep learning-based multi-scale strain field measurement tracking method for a material crack tip is characterized by comprising the following steps:
randomly spraying speckles on the surface of the material, applying an external force to the material to deform the material and generate cracks, and collecting multi-scale information of the deformation of the material by using a camera combination with different focal lengths;
constructing a multi-scale material deformation image sequence as a data set, wherein the multi-scale material deformation image sequence data set comprises a training set and a testing set;
establishing a neural network model for measuring the material global three-dimensional strain field from an input image sequence of multi-scale material deformation, by combining convolution, transposed convolution and convolutional LSTM neural networks;
training a material three-dimensional strain field measurement neural network model by using training set data;
the trained material three-dimensional strain field measurement neural network model is used: multi-scale images collected by the cameras are input, the three-dimensional strain field of the material is measured in real time, the crack region of the material is calculated from the strain field, and the binocular long-focus camera is then moved to track the crack tip in real time.
2. The deep learning-based multi-scale strain field measurement and tracking method for the crack tip of the material as claimed in claim 1, wherein the method comprises the steps of randomly spraying speckles on the surface of the material, applying an external force to the material to deform the material and generate cracks, and collecting multi-scale information of material deformation by using a combination of cameras with different focal lengths, wherein the method specifically comprises the following conditions:
randomly and uniformly spraying speckles on the surface of the material, wherein the color of the speckles and the background of the material are black and white respectively;
arranging a binocular long-focus camera and a binocular short-focus camera on an XYZ precision moving platform to acquire images simultaneously, wherein the binocular long-focus camera is positioned between the two short-focus cameras;
all cameras adopt an external triggering mode to ensure that images are acquired at the same time, the image acquisition frequency of the cameras is kept constant, and the white balance and exposure time of the cameras are kept constant in the acquisition process;
all cameras continuously acquire images at the same time to record the deformation process of the material under the action of external force, wherein the binocular short-focus camera is fixed and used for acquiring the overall image information of the material, and the binocular long-focus camera controls and tracks the tiny area of the crack tip of the acquired material by utilizing an XYZ precision moving platform to record the detail change of the crack area of the material.
3. The deep learning-based material crack tip multi-scale strain field measurement tracking method according to claim 1, wherein the constructing a multi-scale material deformation image sequence as a data set specifically comprises:
accurately calibrating the relative position between two groups of binocular cameras and a camera imaging model respectively, and determining the position, size and shape of a material;
the computer simulation method is characterized in that camera parameters and position relation data obtained by calibration are used for establishing a simulation model, wherein speckles on the surface of a material in the model are obtained by simulation of an existing speckle generator, a public image 3D-DIC data set and experimental acquisition;
carrying out random three-dimensional deformation on a material in a computer simulation model and simulating the generation of cracks, calculating the imaging results of the material with simulated speckles in all cameras according to a three-dimensional imaging model, moving a binocular tele camera to track the tips of the cracks in the generation process of the cracks, and recording detailed information of crack extension;
acquiring accurate three-dimensional deformation strain field data of the material model and projections of the model in all cameras through multiple times of simulation;
adding certain noise into image data acquired or simulated by a camera as original data, wherein the noise is typical sensor noise simulating an actual linear camera, a three-dimensional strain field generated by deformation is used as a corresponding real result in the deformation process, and the image and the strain field form a material deformation data set which comprises a training set and a testing set.
4. The deep learning-based material crack tip multi-scale strain field measurement tracking method according to claim 3, wherein the building of a neural network model for measuring a material global three-dimensional strain field by inputting an image sequence of multi-scale material deformation by combining convolution, transposed convolution and convolution LSTM neural networks specifically comprises:
the method comprises the following steps of combining 2D convolution, 3D convolution, transposed convolution and convolutional LSTM neural networks to construct a neural network model that can simultaneously calculate the global and crack-region three-dimensional strain fields of the material, namely the multi-scale material three-dimensional strain field measurement neural network model, wherein the model performs feature extraction and refinement through the 2D and 3D convolutional layers, the convolutional LSTM layer fuses the spatio-temporal features of the strain fields at different scales, and the transposed convolutional layer computes the three-dimensional strain field;
the multi-scale material three-dimensional strain field measurement neural network model has four inputs, the four inputs are four groups of image sequences acquired by a long-focus binocular camera and a short-focus binocular camera respectively, the image sequences are acquired by the four cameras synchronously, and the deformation process of the material under the action of external force is recorded;
respectively extracting features of long-focus or short-focus binocular images by a 2D convolutional layer of a multi-scale material three-dimensional strain field measurement neural network model, combining the results of the feature extraction together, inputting the combined results into a 3D convolutional layer, and then obtaining a three-dimensional strain field by transposing the convolutional layer; the long-focus image sequence obtains a three-dimensional strain field of a crack region, and the short-focus image sequence obtains a global three-dimensional strain field;
inputting two three-dimensional strain field results into a convolution LSTM layer together, and finally fusing the two strain fields by transposing the convolution layer to obtain a three-dimensional strain field with higher precision, wherein the three-dimensional strain field combines characteristic information of different scales and time to provide a crack region strain field measurement result;
the two three-dimensional strain field measurement networks respectively corresponding to the long-focus image sequence and the short-focus image sequence have the same structure, images acquired by the long-focus binocular camera comprise obvious cracks, and parameters of the two networks are mutually independent.
5. The deep learning-based material crack tip multi-scale strain field measurement tracking method according to claim 4, wherein the training of the material three-dimensional strain field measurement neural network model by using the training set data specifically comprises:
four data inputs for training a multi-scale material three-dimensional strain field measurement neural network model are provided, and the four data inputs are image sequences acquired by a long-focus binocular camera and a short-focus binocular camera respectively;
the output data of the multi-scale material three-dimensional strain field measurement neural network are a global three-dimensional strain field of material deformation, a three-dimensional strain field of a crack area and a fused three-dimensional strain field;
the calculation results of all three-dimensional strain fields adopt an average error function to evaluate the error between the model estimation result and the real result;
the training process of the network model adopts a multi-stage training method, which is mainly divided into three training stages: the method comprises a three-dimensional strain field measurement network training stage, a three-dimensional strain field fusion training stage and a network fine tuning training stage.
The training process of the network parameters back-propagates the total error via the chain rule and trains the network using the Adam gradient descent optimization algorithm, optimizing and learning the neural network parameters with the updated θ.
6. The deep learning-based multi-scale strain field measurement tracking method for the material crack tip as claimed in claim 5, wherein the calculation results of all three-dimensional strain fields are evaluated with the following average endpoint error (AEE) function measuring the error between the model estimate and the ground truth, the specific formula being

AEE = (1/(K·L)) · Σ_{i=1..K} Σ_{j=1..L} √[(u_e(i,j) − u_g(i,j))² + (v_e(i,j) − v_g(i,j))² + (w_e(i,j) − w_g(i,j))²]

where (u_e, v_e, w_e) represents the calculated strain in the horizontal, vertical and depth directions, (u_g, v_g, w_g) represents the true strain in the horizontal, vertical and depth directions, (i, j) represents the pixel coordinates, and K and L define the region over which the AEE value is calculated.
7. The deep learning-based material crack tip multi-scale strain field measurement tracking method according to claim 5, wherein the three training phases are respectively:
a first training stage: network parameters of the fixed characteristic fusion part are unchanged, and a long-focus three-dimensional strain field measurement network and a short-focus three-dimensional strain field measurement network are respectively trained by using the three-dimensional strain field errors of the long focus and the short focus;
a second training stage: network parameters of the fixed long-focus and short-focus three-dimensional strain field measurement part are unchanged, and the network parameters of the feature fusion part are trained by utilizing the fused three-dimensional strain field errors;
a third training stage: continuing to train the network on the basis of the network parameters trained in the first and second stages, and fine-tuning all network parameters to obtain a more accurate strain field measurement model, wherein the stage-three error Error_all is composed of the long-focus network strain field error Error_1, the short-focus network strain field error Error_2 and the fused strain field error Error_3:
Error_all = Error_1 + Error_2 + Error_3
8. the deep learning-based material crack tip multi-scale strain field measurement tracking method of claim 7, wherein the network is trained using an Adam gradient descent optimization algorithm:
first by computing the first moment estimate p and the second moment estimate v of the gradient:
p_l = β1·p_(l-1) + (1 − β1)·∇E(θ_l)
v_l = β2·v_(l-1) + (1 − β2)·[∇E(θ_l)]²
where l is the number of iterations, θ is the parameter vector, E(θ) is the loss function, and β1 and β2 are the gradient decay factors of the first- and second-moment estimates, respectively;
and the updated value θ is obtained from the computed p and v, combined with the learning rate α and the small constant ε:

θ_(l+1) = θ_l − α·p_l/(√v_l + ε)

The updated θ is used to optimize and learn the neural network parameters, improving the accuracy of the network.
9. The deep learning-based multi-scale strain field measurement tracking method for the crack tip of the material as claimed in claim 8, wherein the method for measuring the neural network model by using the trained three-dimensional strain field of the material, inputting a multi-scale image acquired by a camera, measuring the three-dimensional strain field of the material in real time, calculating the crack region of the material through the strain field, and then moving a binocular tele-camera to track the crack tip in real time specifically comprises:
adjusting the acquired image into a format the same as that of the data set, and inputting the image into the constructed multi-scale material three-dimensional strain field measurement neural network model;
firstly, the multi-scale material three-dimensional strain field measurement neural network model calculates the three-dimensional strain fields [u0(i,j), v0(i,j), w0(i,j)] of the long-focus and short-focus images, and then the fused three-dimensional strain field [u1(i,j), v1(i,j), w1(i,j)] is obtained through the fusion of the convolutional LSTM and the transposed convolutional layer, where u, v, w represent the strain in the horizontal, vertical and depth directions respectively, and (i, j) represents the pixel coordinates in the image;
finding out the position of the crack tip by taking the derivative of the strain field:

∂f(x, y)/∂x = f(x + 1, y) − f(x, y)
∂f(x, y)/∂y = f(x, y + 1) − f(x, y)

where ∂f/∂x and ∂f/∂y are the first derivatives of the image in the x-direction and the y-direction respectively, and f(x, y) is the pixel value of the image at the coordinates (x, y); according to the actual experimental conditions, a specific threshold k is set, and the coordinates at which ∂f/∂x or ∂f/∂y is greater than k are searched to find the crack region in the image;
Moving an XYZ precision moving platform to track the extension of the crack in real time according to the calculated position of the crack tip, so that the tele binocular camera can always move along the extension direction of the crack;
and continuously repeating the above processes to calculate the three-dimensional strain field in the material deformation process in real time.
CN202111420167.XA 2021-11-26 2021-11-26 Deep learning-based multi-scale strain field measurement tracking method for crack tip of material Pending CN114065650A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111420167.XA CN114065650A (en) 2021-11-26 2021-11-26 Deep learning-based multi-scale strain field measurement tracking method for crack tip of material

Publications (1)

Publication Number Publication Date
CN114065650A true CN114065650A (en) 2022-02-18

Family

ID=80276613

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310217A (en) * 2023-03-15 2023-06-23 精创石溪科技(成都)有限公司 Method for dynamically evaluating muscles in human body movement based on three-dimensional digital image correlation method
CN116310217B (en) * 2023-03-15 2024-01-30 精创石溪科技(成都)有限公司 Method for dynamically evaluating muscles in human body movement based on three-dimensional digital image correlation method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination