CN115953638A - Full-field displacement online identification method based on multi-scale deep convolutional neural network - Google Patents

Full-field displacement online identification method based on multi-scale deep convolutional neural network

Info

Publication number
CN115953638A
Authority
CN
China
Prior art keywords
scale
displacement
full
convolution module
module
Prior art date
Legal status
Pending
Application number
CN202211391034.9A
Other languages
Chinese (zh)
Inventor
龙湘云 (Long Xiangyun)
姜潮 (Jiang Chao)
熊伟 (Xiong Wei)
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date: 2022-11-07
Filing date: 2022-11-07
Publication date: 2023-04-11
Application filed by Hunan University
Priority to CN202211391034.9A
Publication of CN115953638A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The invention provides a full-field displacement online identification method based on a multi-scale deep convolutional neural network, comprising: step 1, establishing a speckle image-displacement data set and identifying the displacement corresponding to each speckle image; step 2, dividing the speckle images into data sets of different scales according to their displacements and cropping the speckle images to the corresponding sizes; step 3, constructing a multi-scale convolutional neural network based on deep learning, which automatically judges the scale of an input speckle pattern and outputs the corresponding displacement. The multi-scale neural network achieves accurate prediction of the displacement field by fusing displacement information learned at multiple single scales. The effectiveness of the method is verified through a designed experiment, and the results show that the proposed full-field displacement online identification method provides accurate displacement prediction results.

Description

Full-field displacement online identification method based on multi-scale deep convolutional neural network
Technical Field
The invention relates to the technical field of multi-scale deep convolutional neural networks, in particular to a full-field displacement online identification method based on a multi-scale deep convolutional neural network.
Background
The displacement field is one of the physical quantities that most directly reflects the current loaded state of a physical entity, and can be used to characterize material stiffness, evaluate the damage state of key structural components, and so on. The main existing displacement field measurement method is Digital Image Correlation (DIC), which is widely applied to surface displacement measurement because of its simple operation and low environmental requirements. The method involves tracking (or matching) the same points between two images, and each tracking step involves an optimization calculation; when the speckle image region to be identified is large, computing the displacement field is therefore very time consuming, making it difficult for the DIC method to achieve online real-time prediction of the displacement field. In recent years, artificial intelligence techniques represented by deep learning have developed rapidly and have been successfully applied in fields such as face recognition, semantic segmentation and automatic driving. To realize online prediction of the displacement field, a small number of studies have constructed 3D deep convolutional neural networks (CNNs) based on deep learning to predict displacement from speckle images; however, because only a single-scale convolutional neural network trained on single-scale subset images is used, these methods have insufficient prediction accuracy and are suitable only for small deformations.
Disclosure of Invention
In order to solve this problem, the invention provides a full-field displacement online identification method using a multi-scale neural network model, which achieves accurate prediction of the displacement field by fusing displacement information learned at multiple single scales.
The full-field displacement online identification method based on a multi-scale deep convolutional neural network is characterized by comprising the following steps:
step 1, establishing a speckle image-displacement data set, and identifying the displacement corresponding to the speckle image;
step 2, dividing the speckle image into data sets with different scales according to the displacement of the speckle image, and cutting the speckle image to a corresponding size;
step 3, constructing a multi-scale convolution neural network based on deep learning, automatically judging the scale of the input speckle pattern, and outputting corresponding displacement;
the multi-scale convolutional neural network comprises a feature extraction module and a multi-scale decision fusion module;
the feature extraction module comprises a small-scale CNN network, a medium-scale CNN network, a large-scale CNN network and a full-scale CNN network, and respectively outputs a small-scale feature vector, a medium-scale feature vector, a large-scale feature vector and a full-scale feature vector;
the multi-scale decision fusion module generates a coefficient matrix from the feature vectors of four different scales, and generates an output vector by fusing the coefficient matrix with the small-scale feature vector, the medium-scale feature vector and the large-scale feature vector.
Further, the small-scale CNN network includes a first small-scale convolution module, an average pooling layer, a second small-scale convolution module, a third small-scale convolution module, a fourth small-scale convolution module, a fifth small-scale convolution module, and a sixth small-scale convolution module, which are connected in sequence;
the mesoscale CNN network comprises a first mesoscale convolution module, an average pooling layer, a second mesoscale convolution module, a third mesoscale convolution module, a fourth mesoscale convolution module, a fifth mesoscale convolution module and a sixth mesoscale convolution module which are connected in sequence;
the large-scale CNN network comprises a first large-scale convolution module, an average pooling layer, a second large-scale convolution module, a third large-scale convolution module, a fourth large-scale convolution module, a fifth large-scale convolution module, a sixth large-scale convolution module and a seventh large-scale convolution module which are connected in sequence;
the full-scale CNN network comprises a first full-scale convolution module, an average pooling layer, a second full-scale convolution module, a third full-scale convolution module, a fourth full-scale convolution module, a fifth full-scale convolution module, a sixth full-scale convolution module and a seventh full-scale convolution module which are connected in sequence;
each convolution module comprises convolution kernels, a normalization unit and a ReLU function unit connected in sequence, with the number and size of the convolution kernels differing between modules.
Further, in step 3, in the multi-scale fusion module, the output data of the four different scales are stacked together and then passed through a fully connected layer to generate a 6 × 1 vector. This vector is reshaped and then passed through a SoftMax layer to generate a 3 × 2 coefficient matrix:
$$\begin{bmatrix} c_1 & d_1 \\ c_2 & d_2 \\ c_3 & d_3 \end{bmatrix}$$
In the coefficient matrix, the following equation is satisfied:
$$\sum_{j=1}^{3} c_j = 1, \qquad \sum_{j=1}^{3} d_j = 1$$
wherein c_j and d_j denote the elements of the coefficient matrix for each scale, all lying within the range [0,1]; finally, the displacement vector corresponding to the input subset images can be obtained as:
$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} c_1 u_1 + c_2 u_2 + c_3 u_3 \\ d_1 v_1 + d_2 v_2 + d_3 v_3 \end{bmatrix}$$
wherein u_1 is the abscissa and v_1 the ordinate of the small-scale feature vector, u_2 the abscissa and v_2 the ordinate of the mesoscale feature vector, and u_3 the abscissa and v_3 the ordinate of the large-scale feature vector.
Further, step 3 also comprises a staged training method for the displacement field multi-scale convolutional neural network:
training each single-scale network independently, and freezing parameters after training and verification; and then training the multi-scale fusion module, and only updating the weight parameters of the multi-scale fusion module.
Further, in step 3, the loss function is defined as the mean square error (MSE) of the deviation between the model's predicted and actual values:
$$\mathrm{Loss}(W, b) = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2$$
where n is the amount of data used in each iteration step, ŷ_i is the predicted displacement of the i-th input speckle image, and y_i is the actual displacement of the i-th input speckle image, i = 1, 2, …, n;
based on the training data set, the optimal values of the model parameters, including the weights W and biases b, are learned by minimizing the loss value:
$$(W^{*}, b^{*}) = \arg\min_{W, b} \mathrm{Loss}(W, b)$$
once the weights W and biases b are obtained, the fusion coefficients c_j and d_j can be calculated by the decision-level fusion module.
The invention achieves the following beneficial effects:
the patent provides a full-field displacement automatic identification method based on speckle images and deep learning and an intelligent agent, and online identification of displacement fields can be realized. In the new method, a multi-scale neural network model suitable for the displacement field is developed, and accurate prediction of the displacement field is realized by fusing the learned displacement information on a plurality of single scales. The effectiveness of the method is verified through a design experiment, and the result shows that the full-field displacement online identification method can provide an accurate displacement prediction result. In addition, the intelligent agent capable of automatically identifying the displacement field through the image is developed by integrating the algorithm into the mobile device.
The multi-scale convolutional neural network can adaptively extract features for deformations of different scales, so the model can identify the displacement of small deformations as well as large deformations. Moreover, displacement identification at each scale achieves extremely high precision: the relative errors of all data in the test set are less than 2%.
Drawings
FIG. 1 is a flow chart of the full-field displacement online identification method based on a multi-scale deep convolutional neural network;
FIG. 2 is a framework diagram of the displacement field identification multi-scale convolutional neural network model;
FIG. 3 is a schematic diagram of the displacement field speckle image multi-scale feature extraction module;
FIG. 4 is a schematic diagram of the multi-scale decision-level fusion module for displacement field information;
FIG. 5 is a schematic diagram of the displacement field multi-scale CNN training experiment scheme;
FIG. 6 is a schematic diagram of displacement field identification results based on single-scale CNNs;
FIG. 7 is a schematic diagram comparing displacement field identification accuracy based on single-scale CNNs and the multi-scale CNN;
FIG. 8 is a diagram illustrating displacement field identification results based on the multi-scale CNN;
FIG. 9 is a schematic view of the assembly of the displacement field identification intelligent device.
Detailed Description
The technical solution of the present invention will be described in more detail with reference to the accompanying drawings, and the present invention includes, but is not limited to, the following embodiments.
As shown in FIG. 1, the invention provides a full-field displacement online identification method based on a multi-scale deep convolutional neural network, which comprises the following steps:
step 1, establishing a speckle image-displacement data set, and identifying the displacement corresponding to the speckle image;
in step 1, a plurality of sample pictures with speckles sprayed on the upper surface of an axial/torsion test system are collected through a camera, wherein the sample pictures comprise a reference image and a deformation image, the reference image is a highlight of the sample before the test, and the deformation image is an image of the sample in the test.
A rectangular area is selected from the acquired images as the analysis area, and each deformation image is superimposed with the reference image to form stacked speckle images. These images are used for training and validating the CNN model, and as test data to verify its validity, with the training, validation and test data in a ratio of 7:2:1.
Step 2, dividing the speckle image into data sets with different scales according to the displacement of the speckle image, and cutting the speckle image to a corresponding size;
in step 2, key points are selected from the stacked speckle images based on the superimposed image using a sliding sampling strategy, with a step size of the sliding window of 5 pixels. Then, with the key point as the center, the subset images with different sizes are cut out. And generating a small-scale data set, a medium-scale data set and a large-scale data set according to the size of the subset image. For different scale subset images, the central points are consistent, so that the data volumes of the three data sets are the same, and the image centers and displacement labels corresponding to different data sets are the same.
And 3, constructing a multi-scale convolution neural network based on deep learning, automatically judging the scale of the input speckle pattern, and outputting corresponding displacement.
As shown in FIG. 2, in step 3 the model consists of three parts: a feature extraction module, a single-scale output module and a multi-scale fusion output module. Step 3 comprises the following sub-steps:
step 31, displacement field multi-scale convolution neural network feature extraction:
different from the traditional displacement field identification method based on the single-scale neural network, the method divides the speckle pattern into large-scale, medium-scale and small-scale subset images with different pixels, and each scale carries out feature extraction through a depth convolution layer.
In the multi-scale neural network training, 2 × 48 × 48 pixel subset images with target values in [0,8] pixels are used as input data for the small-scale CNN, 2 × 68 × 68 pixel subset images with target values in [4,22] pixels as input data for the medium-scale CNN, and 2 × 132 × 132 pixel subset images with target values in [20,54] pixels as input data for the large-scale CNN. The training subsets of adjacent scales have overlapping target-value regions, so as to improve the prediction accuracy of the model for target values near the cross-scale boundaries. 2 × 132 × 132 pixel subset images with target values in the range [0,54] pixels are used as input data for the full-scale CNN.
As shown in FIG. 3, the network structure of the feature extraction module includes a small-scale CNN network, a mesoscale CNN network, a large-scale CNN network and a full-scale CNN network. The small-scale CNN network comprises a first small-scale convolution module, an average pooling layer, and second to sixth small-scale convolution modules, connected in sequence; the mesoscale CNN network comprises a first mesoscale convolution module, an average pooling layer, and second to sixth mesoscale convolution modules, connected in sequence; the large-scale CNN network comprises a first large-scale convolution module, an average pooling layer, and second to seventh large-scale convolution modules, connected in sequence; the full-scale CNN network comprises a first full-scale convolution module, an average pooling layer, and second to seventh full-scale convolution modules, connected in sequence. Each convolution module comprises convolution kernels, a normalization unit and a ReLU function unit connected in sequence, with the number and size of the convolution kernels differing between modules.
The small-scale CNN network outputs a small-scale feature vector, the medium-scale CNN network outputs a medium-scale feature vector, the large-scale CNN network outputs a large-scale feature vector, and the full-scale CNN network outputs a full-scale feature vector.
Step 32, performing decision-level fusion in the displacement field multi-scale convolutional neural network:
the decision fusion module automatically fuses displacement information results of different single scales by designing a neural network, and compared with the traditional empirical fusion, the decision fusion module learns in a fusion mode through the neural network, and is more accurate and intelligent. After the feature extraction module, the extracted features further pass through the fully connected layer and generate displacement results on each individual CNN scale. Then, a multi-scale decision fusion module is proposed to fuse these learned displacements at different scales.
As shown in FIG. 4, in the multi-scale fusion module, the output data of the four different scales are stacked together and then passed through a fully connected layer to generate a 6 × 1 vector. This vector is reshaped and then passed through a SoftMax layer to generate a 3 × 2 coefficient matrix:
$$\begin{bmatrix} c_1 & d_1 \\ c_2 & d_2 \\ c_3 & d_3 \end{bmatrix}$$
In the coefficient matrix, the following equation is satisfied:
$$\sum_{j=1}^{3} c_j = 1, \qquad \sum_{j=1}^{3} d_j = 1$$
wherein c_j and d_j denote the elements of the coefficient matrix for each scale, all lying within the range [0,1]. Finally, the displacement vector corresponding to the input subset images can be obtained as:
$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} c_1 u_1 + c_2 u_2 + c_3 u_3 \\ d_1 v_1 + d_2 v_2 + d_3 v_3 \end{bmatrix}$$
wherein u_1 is the abscissa and v_1 the ordinate of the small-scale feature vector, u_2 the abscissa and v_2 the ordinate of the mesoscale feature vector, and u_3 the abscissa and v_3 the ordinate of the large-scale feature vector.
The invention uses four CNNs of different scales: a small-scale CNN, a medium-scale CNN, a large-scale CNN and a full-scale CNN. The outputs of the small-scale, medium-scale and large-scale networks are accurate within their respective ranges; the output of the full-scale network is less accurate over the whole range, but accurate enough to judge which scale a result belongs to. By fusing the results of the four networks, an appropriate coefficient matrix can therefore be computed to determine the correct result. Thus the result of the full-scale network, although not participating in the fusion sum, plays an important decision role.
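For illustration only, a minimal PyTorch sketch of such a decision-level fusion module is given below; the class name DecisionFusion, the single fully connected layer and the tensor shapes are assumptions consistent with the description above, not a definitive implementation:

```python
import torch
import torch.nn as nn

class DecisionFusion(nn.Module):
    # Fuses the (u, v) outputs of the small-, medium-, large- and full-scale
    # CNNs into one displacement vector via a learned coefficient matrix.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 6)          # 4 scales x 2 components -> 6 x 1 vector
        self.softmax = nn.Softmax(dim=1)   # normalizes over the 3 scales per column

    def forward(self, small, medium, large, full):
        # Each argument is a (batch, 2) tensor holding (u_k, v_k).
        stacked = torch.cat([small, medium, large, full], dim=1)  # (batch, 8)
        coeffs = self.softmax(self.fc(stacked).view(-1, 3, 2))    # 3 x 2 matrix of c_j, d_j
        # Weighted sum over the three single-scale outputs; the full-scale
        # output influences the coefficients but not the sum itself.
        single = torch.stack([small, medium, large], dim=1)       # (batch, 3, 2)
        return (coeffs * single).sum(dim=1)                       # (batch, 2) = (u, v)
```

Applying SoftMax over the scale dimension automatically enforces the column constraints stated above, i.e. the c_j sum to 1 and the d_j sum to 1.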
Step 33: staged training of the displacement field multi-scale convolutional neural network. To train the proposed multi-scale convolutional neural network model, this patent proposes a step-by-step training mode. First, the feature extraction module and the single-scale output modules in FIG. 2 are trained; in this process each single-scale network is trained independently, and its parameters are frozen after training and validation. The multi-scale fusion module is then trained, during which only the weight parameters of the fusion module are updated. To train the model, a loss function is constructed to quantify model performance; in this patent it is defined as the mean square error (MSE) of the deviation between the model's predicted and actual values:
$$\mathrm{Loss}(W, b) = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2$$
where n is the amount of data used in each iteration step, ŷ_i is the predicted displacement of the i-th input speckle image, and y_i is the actual displacement of the i-th input speckle image, i = 1, 2, …, n. Based on the training data set, the optimal values of the model parameters, including the weights W and biases b, are learned by minimizing the loss value:
$$(W^{*}, b^{*}) = \arg\min_{W, b} \mathrm{Loss}(W, b)$$
Once the weights W and biases b are obtained, the fusion coefficients c_j and d_j can be calculated by the decision-level fusion module. To minimize the value of the loss function, an algorithm that updates the model parameters is needed; this process is known as CNN training. This patent proposes a new multi-scale CNN training strategy, using the Adam optimization algorithm during model training. The initial learning rate is 0.01 and is divided by 10 every 20 epochs in the optimizer.
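A minimal sketch of this staged training procedure, assuming a PyTorch implementation (the attribute name model.fusion and the data-loader variables are hypothetical; the Adam settings match those stated above):

```python
import torch

def train_branch(branch, loader, epochs=30):
    # Stage 1: train one single-scale CNN independently with Adam,
    # lr = 0.01 divided by 10 every 20 epochs, then freeze its parameters.
    loss_fn = torch.nn.MSELoss()
    opt = torch.optim.Adam(branch.parameters(), lr=0.01)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=20, gamma=0.1)
    for _ in range(epochs):
        for subsets, target in loader:       # mini-batches of subset images
            opt.zero_grad()
            loss = loss_fn(branch(subsets), target)
            loss.backward()
            opt.step()
        sched.step()
    for p in branch.parameters():            # freeze after training/validation
        p.requires_grad = False

def train_fusion(model, loader, epochs=30):
    # Stage 2: only the fusion module's weight parameters are updated.
    loss_fn = torch.nn.MSELoss()
    opt = torch.optim.Adam(model.fusion.parameters(), lr=0.01)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=20, gamma=0.1)
    for _ in range(epochs):
        for (s, m, l, f), target in loader:  # co-centred subsets at all four scales
            opt.zero_grad()
            loss = loss_fn(model(s, m, l, f), target)
            loss.backward()
            opt.step()
        sched.step()
```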
In one embodiment, step 1 is the acquisition of the data set for the full-field displacement online identification multi-scale convolutional neural network. The data set came from the experiment shown in FIG. 5. The experiment was conducted on an MTS809 axial/torsion test system at a tensile rate of 1 mm/min. The specimen material was 316L steel. The specimen surface was sprayed with speckle and recorded with a camera (acA2440-75um, Basler) at a frame rate of 5 Hz and a resolution of 4032 × 3024 pixels.
In total, 351 speckle images were collected, including 1 reference image and 350 deformation images, with the specimen strain ranging from 0% to 13%. A rectangular area of 916 × 496 pixels was selected from the acquired original images as the analysis area, and each deformation image was superimposed with the reference image to form 350 stacked speckle images. Of these, 315 images were used to train and validate the CNN model, and 35 images were selected as test data to verify the CNN model.
In step 2, based on the collected images, the single-scale CNN data sets are formed as follows: subset images are cropped from the stacked speckle images using the sliding sampling strategy. According to the size of the sliding window, a small-scale data set, a medium-scale data set and a large-scale data set are generated, with sizes of 2 × 48 × 48, 2 × 68 × 68 and 2 × 132 × 132, respectively. It should be noted that the data volumes of the three data sets are the same, and the subset centers and target values corresponding to the different data sets are the same. Through the sliding sampling strategy, 703 subset images are formed from each stacked image; thus each of the three data sets contains 315 × 703 = 221445 sets of input subset images and target values, with target values in the [0,54] pixel range. These data sets were divided into training and validation sets in a ratio of 7:2.
In step 3, each average pooling layer is a 2 × 2 average pooling. The small-scale CNN modules are: first, 15 convolution kernels of 7 × 7 with stride 1; second, 15 of 3 × 3 with stride 1; third, 20 of 3 × 3 with stride 2; fourth, 60 of 3 × 3 with stride 1; fifth, 60 of 3 × 3 with stride 1; sixth, 120 of 3 × 3 with stride 2. The mesoscale CNN modules are: first, 15 kernels of 7 × 7 with stride 1; second, 30 of 3 × 3 with stride 2; third, 30 of 3 × 3 with stride 1; fourth, 60 of 3 × 3 with stride 2; fifth, 120 of 3 × 3 with stride 2; sixth, 120 of 3 × 3 with stride 1. The large-scale CNN modules are: first, 15 kernels of 7 × 7 with stride 1; second, 15 of 3 × 3 with stride 1; third, 30 of 3 × 3 with stride 2; fourth, 30 of 3 × 3 with stride 1; fifth, 60 of 3 × 3 with stride 2; sixth, 120 of 3 × 3 with stride 2; seventh, 240 of 3 × 3 with stride 2. The full-scale CNN modules are identical to the large-scale modules: 15 kernels of 7 × 7 with stride 1; 15 of 3 × 3 with stride 1; 30 of 3 × 3 with stride 2; 30 of 3 × 3 with stride 1; 60 of 3 × 3 with stride 2; 120 of 3 × 3 with stride 2; 240 of 3 × 3 with stride 2.
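As an illustration, the small-scale branch with the parameters listed above could be sketched in PyTorch as follows; batch normalization is assumed for the normalization unit, and the output head producing the two-component displacement is an assumption, since the patent specifies only the convolution modules:

```python
import torch.nn as nn

def conv_module(in_ch, out_ch, kernel, stride):
    # One convolution module: convolution kernels + normalization + ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, stride=stride, padding=kernel // 2),
        nn.BatchNorm2d(out_ch),   # normalization unit (type assumed)
        nn.ReLU(inplace=True),
    )

class SmallScaleCNN(nn.Module):
    # Small-scale branch: input is a 2 x 48 x 48 stacked subset image.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_module(2, 15, 7, 1),    # first module: 15 kernels of 7 x 7, stride 1
            nn.AvgPool2d(2),             # 2 x 2 average pooling
            conv_module(15, 15, 3, 1),   # second module: 15 of 3 x 3, stride 1
            conv_module(15, 20, 3, 2),   # third module: 20 of 3 x 3, stride 2
            conv_module(20, 60, 3, 1),   # fourth module: 60 of 3 x 3, stride 1
            conv_module(60, 60, 3, 1),   # fifth module: 60 of 3 x 3, stride 1
            conv_module(60, 120, 3, 2),  # sixth module: 120 of 3 x 3, stride 2
        )
        # Assumed output head producing the (u, v) feature vector.
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(120, 2))

    def forward(self, x):
        return self.head(self.features(x))
```

The other three branches would follow the same pattern with their respective kernel counts and strides.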
When the multi-scale CNN model is trained, each single-scale CNN is trained independently first, and then the multi-scale fusion module is trained. To improve prediction accuracy, only part of the data in the single-scale data sets described above is used when training each single-scale CNN in the multi-scale model. In the multi-scale model, the small-scale sub-data set consists of the images in the single small-scale data set with target values in the range [0,8]; the mesoscale sub-data set consists of the images in the single mesoscale data set with target values in the range [4,22]; and the large-scale sub-data set consists of the images in the single large-scale data set with target values in the range [20,54]. As a result, a total of 50039 small-scale images, 146236 mesoscale images and 174189 large-scale images were used to train the small-scale, mesoscale and large-scale CNNs in the multi-scale model, respectively. In addition, the full-scale CNN was trained on the single-scale large-scale data set described above.
Full-field displacement online identification multi-scale convolutional neural network training is then performed. For the single-scale CNNs, a total of 172235 images were used in the model training process, with a mini-batch size of 8 per training iteration. During training, the training loss of the three single-scale CNNs converged to a very small value after 30 epochs. The validation data set (49210 images) was used to validate the trained networks, and the final validation loss was very similar to the training loss at 30 epochs. The networks trained for 30 epochs were therefore taken as the benchmark for predicting displacements in the test images.
For the multi-scale CNN proposed in this patent, the feature extraction module of the multi-scale model is first trained using images from the sub-data sets. Subsequently, the three single-scale image sets are used to train the parameters of the multi-scale fusion module. The training loss of the multi-scale network also converged to a small value after 30 epochs, so in subsequent tests the displacements in the test set were predicted with the 30-epoch model.
The displacement field identification multi-scale convolutional neural network model was then verified using the test data set. For the single-scale CNNs, to evaluate model performance, the small-, medium- and large-scale single-scale CNN models predicted speckle images at 1%, 8% and 13% strain, respectively. In the prediction process, the image is first cropped into small-, medium- and large-scale subset images by the sliding sampling strategy. The subset images are then input into the corresponding single-scale CNN, which predicts the displacement at the center of each subset image. Finally, by assembling the displacements of all centers, the displacement fields predicted by the three different CNN scales are obtained. FIG. 6 shows the prediction results of the three single-scale CNNs: no single-scale CNN can effectively predict the displacement field of the specimen over the whole strain range.
Using the three single-scale CNNs described above, 15 speckle images with strains of 1%-13% were predicted. Prediction accuracy is defined as the proportion of predicted values falling within 2% of the true value. FIG. 7 shows the accuracy of the three single-scale CNN models at different strains: the small-scale CNN performs best at small strains, the medium-scale CNN at intermediate strains, and the large-scale CNN at large strains. Fusing multi-scale information is therefore necessary.
To evaluate the performance of the proposed multi-scale CNN model, the displacement fields of speckle images at 1%, 8% and 13% strain were predicted using the multi-scale CNN model. FIG. 8 shows the visualization results of the proposed multi-scale CNN model. In the multi-scale CNN prediction process, the input image is first cropped into subset images of three different sizes: 2 × 48 × 48, 2 × 68 × 68 and 2 × 132 × 132. Every point in the image is thus associated with three subset images: a small-scale, a medium-scale and a large-scale subset. These subset images are input together into the multi-scale CNN model to predict the displacement of each point, and through the sliding sampling technique the entire displacement field is obtained.
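A minimal sketch of this prediction loop, reusing the hypothetical keypoint_grid and crop_subsets helpers from the data set sketch above, and assuming the large subset also serves as the full-scale input (consistent with the 2 × 132 × 132 full-scale input size):

```python
import torch

def predict_full_field(model, stacked, step=5):
    # stacked: 2 x H x W stacked reference/deformed speckle image (NumPy array).
    # Returns the key points and the (u, v) displacement predicted at each one.
    pts = keypoint_grid(stacked.shape[1], stacked.shape[2], max_size=132, step=step)
    small  = torch.as_tensor(crop_subsets(stacked, pts, 48), dtype=torch.float32)
    medium = torch.as_tensor(crop_subsets(stacked, pts, 68), dtype=torch.float32)
    large  = torch.as_tensor(crop_subsets(stacked, pts, 132), dtype=torch.float32)
    with torch.no_grad():
        # The multi-scale model consumes the three co-centred subsets
        # (plus the large subset as the assumed full-scale input).
        disp = model(small, medium, large, large)
    return pts, disp.numpy()
```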
As shown in FIG. 8, the values and distribution of the displacement field predicted by the proposed multi-scale CNN model agree closely with the actual results, in both the small-strain and large-strain cases. At 1%, 8% and 13% strain, the mean absolute error in the y-direction is 0.039, 0.121 and 0.209 pixels respectively, and in the x-direction 0.013, 0.031 and 0.02 pixels respectively; these error values are very small. The prediction accuracy of the multi-scale CNN model at different strains is shown in FIG. 7: the predicted values fall within 2% of the true values over all strain ranges. Compared with the single-scale models, the multi-scale model has significant advantages in both prediction accuracy and prediction range.
The trained model is embedded into the multi-scale convolutional neural network displacement field online identification agent shown in FIG. 9. The device consists of a Raspberry Pi 4B development board, an HQ Camera with a 35 mm telephoto lens, a 7-inch capacitive touch screen, a housing, a Raspberry Pi power cable, a Type-C cable, and screws and nuts. The development board is placed inside the housing; the HQ Camera is mounted at the front of the housing, with the telephoto lens assisting the camera in focusing and shooting when the device is placed at different distances from the test piece. The 7-inch capacitive touch screen is mounted at the rear of the housing for convenient touch operation and viewing of the identification results. The Raspberry Pi power interface and the Type-C interface are on the left and right sides of the housing; the device can be started once the power is connected, and it can be configured to connect to a wired or wireless network and to a cloud server for computation, increasing computing speed.
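As an illustration of how the trained model could be embedded in such a device, a minimal sketch follows; the use of TorchScript and the file name are assumptions made for the sketch, and the camera-capture step is omitted:

```python
import torch

# On the workstation: export the trained multi-scale model for the device.
# scripted = torch.jit.script(trained_model)
# scripted.save("multiscale_cnn.pt")

# On the Raspberry Pi 4B: load the exported model and run it on images
# captured by the HQ Camera (any camera API producing a stacked
# reference/deformed image pair would work here).
model = torch.jit.load("multiscale_cnn.pt")
model.eval()
# pts, disp = predict_full_field(model, stacked_image)
```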
The present invention is not limited to the above embodiments; those skilled in the art can implement the present invention in various other embodiments according to the disclosed embodiments and drawings. Therefore, all designs that can be readily changed or modified using the design structure and ideas of the present invention fall within the protection scope of the present invention.

Claims (5)

1. A full-field displacement online identification method based on a multi-scale deep convolutional neural network, characterized by comprising the following steps:
step 1, establishing a speckle image-displacement data set, and identifying the displacement corresponding to the speckle image;
step 2, dividing the speckle image into data sets with different scales according to the displacement of the speckle image, and cutting the speckle image to a corresponding size;
step 3, constructing a multi-scale convolution neural network based on deep learning, automatically judging the scale of the input speckle pattern, and outputting corresponding displacement;
the multi-scale convolution neural network comprises a feature extraction module and a multi-scale decision fusion module;
the feature extraction module comprises a small-scale CNN network, a medium-scale CNN network, a large-scale CNN network and a full-scale CNN network, which respectively output a small-scale feature vector, a medium-scale feature vector, a large-scale feature vector and a full-scale feature vector;
the multi-scale decision fusion module generates a coefficient matrix from the feature vectors of four different scales, and generates an output vector by fusing the coefficient matrix with the small-scale feature vector, the medium-scale feature vector and the large-scale feature vector.
2. The full-field displacement online identification method according to claim 1, wherein the small-scale CNN network comprises a first small-scale convolution module, an average pooling layer, a second small-scale convolution module, a third small-scale convolution module, a fourth small-scale convolution module, a fifth small-scale convolution module, and a sixth small-scale convolution module, which are connected in sequence;
the mesoscale CNN network comprises a first mesoscale convolution module, an average pooling layer, a second mesoscale convolution module, a third mesoscale convolution module, a fourth mesoscale convolution module, a fifth mesoscale convolution module and a sixth mesoscale convolution module which are connected in sequence;
the large-scale CNN network comprises a first large-scale convolution module, an average pooling layer, a second large-scale convolution module, a third large-scale convolution module, a fourth large-scale convolution module, a fifth large-scale convolution module, a sixth large-scale convolution module and a seventh large-scale convolution module which are connected in sequence;
the full-scale CNN network comprises a first full-scale convolution module, an average pooling layer, a second full-scale convolution module, a third full-scale convolution module, a fourth full-scale convolution module, a fifth full-scale convolution module, a sixth full-scale convolution module and a seventh full-scale convolution module which are connected in sequence;
each convolution module comprises convolution kernels, a normalization unit and a ReLU function unit connected in sequence, with the number and size of the convolution kernels differing between modules.
3. The full-field displacement online identification method as claimed in claim 1, wherein in step 3, in the multi-scale fusion module, the output data of the four different scales are stacked together and then passed through a fully connected layer to generate a 6 × 1 vector; this vector is reshaped and then passed through a SoftMax layer to generate a 3 × 2 coefficient matrix:
$$\begin{bmatrix} c_1 & d_1 \\ c_2 & d_2 \\ c_3 & d_3 \end{bmatrix}$$
In the coefficient matrix, the following equation is satisfied:
$$\sum_{j=1}^{3} c_j = 1, \qquad \sum_{j=1}^{3} d_j = 1$$
wherein c_j and d_j denote the elements of the coefficient matrix for each scale, all lying within the range [0,1]; finally, the displacement vector corresponding to the input subset images can be obtained as:
$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} c_1 u_1 + c_2 u_2 + c_3 u_3 \\ d_1 v_1 + d_2 v_2 + d_3 v_3 \end{bmatrix}$$
wherein u_1 is the abscissa and v_1 the ordinate of the small-scale feature vector, u_2 the abscissa and v_2 the ordinate of the mesoscale feature vector, and u_3 the abscissa and v_3 the ordinate of the large-scale feature vector.
4. The full-field displacement online identification method according to claim 3, further comprising a displacement field multi-scale convolution neural network staged training method in step 3:
training each single-scale network independently, and freezing parameters after training and verification; and then training the multi-scale fusion module, and only updating the weight parameters of the multi-scale fusion module.
5. The full-field displacement online identification method according to claim 4, characterized in that, in step 3, the loss function is defined as the mean square error (MSE) of the deviation between the model's predicted and actual values:
$$\mathrm{Loss}(W, b) = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2$$
where n is the amount of data used in each iteration step, ŷ_i is the predicted displacement of the i-th input speckle image, and y_i is the actual displacement of the i-th input speckle image, i = 1, 2, …, n;
based on the training data set, the optimal values of the model parameters, including the weights W and biases b, are learned by minimizing the loss value:
$$(W^{*}, b^{*}) = \arg\min_{W, b} \mathrm{Loss}(W, b)$$
once the weights W and biases b are obtained, the fusion coefficients c_j and d_j can be calculated by the decision-level fusion module.
CN202211391034.9A 2022-11-07 2022-11-07 Full-field displacement online identification method based on multi-scale deep convolutional neural network Pending CN115953638A (en)

Priority Applications (1)

CN202211391034.9A, priority date 2022-11-07, filed 2022-11-07: Full-field displacement online identification method based on multi-scale deep convolutional neural network

Publications (1)

CN115953638A, published 2023-04-11 (Pending)

Family ID: 87286502

Country Status: CN (1) CN115953638A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination