CN110738168B - Distributed strain micro crack detection system and method based on stacked convolution self-encoder - Google Patents
Legal status: Active
Classifications
- G06V20/80 — Recognising image objects characterised by unique random patterns
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Combinations of networks
- G06N3/063 — Physical realisation of neural networks using electronic means
- G06N3/08 — Learning methods
Abstract
The invention discloses a distributed strain crack detection system and method based on a stacked convolution self-encoder. In laboratory tests, the method accurately and reliably detected micro cracks with an opening width of 23 μm on the surface of a steel structure, providing an efficient solution for distributed strain crack detection on structural bodies.
Description
Technical Field
The invention belongs to the field of pattern recognition, and particularly relates to a distributed strain micro crack detection system and method based on a stacked convolution self-encoder.
Background
Crack detection has long been an important issue in the field of structural health monitoring. Crack detection methods comprise manual observation and nondestructive detection. Manual observation requires dedicated maintenance personnel to perform periodic inspection with professional tools, and is inefficient and highly subjective. Nondestructive detection mainly detects structural cracks through data obtained by ultrasonic waves, X-rays, ground penetrating radar, cameras and the like. These sensors are all point-type sensors that cannot measure data over the entire structure, so cracks are easily missed.
Disclosure of Invention
The invention aims to provide a distributed strain micro crack detection system and method based on a stacked convolution self-encoder, so as to overcome the defects in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
A stacked convolution self-encoder based distributed strain crack detection system, comprising:
a strain sequence acquisition module, used for acquiring the distributed strain of the structure surface;
a strain sequence preprocessing module, used for performing z-score standardization on the acquired distributed strain and intercepting it into strain subsequences;
a feature self-learning and characterization module based on the stacked convolution self-encoder, composed of 3 convolutional self-encoder modules and used for extracting the features of the intercepted strain subsequences;
and a Softmax classification and identification module, used for performing binary classification on the extracted subsequence features and judging the probability that each subsequence belongs to the crack or non-crack class.
Further, the strain sequence acquisition module: the optical fiber sensor is laid on the surface of the structure, and a distributed optical fiber sensing system based on BOTDA is used for collecting distributed strain on the surface of the structure.
Further, the strain sequence preprocessing module comprises a z-score standardization module and a sliding window module;
the z-score standardization module standardizes the strain sequence into data with 0 mean and 1 standard deviation;
the sliding window module intercepts the standardized strain sequence into a group of strain subsequences of length 24 through a sliding window with length 24 and step size 1.
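As a minimal sketch of the preprocessing just described (the function name and the placeholder strain data are illustrative, not from the patent), the z-score standardization and the length-24, step-1 sliding window can be implemented as:

```python
import numpy as np

def preprocess(strain, window=24, step=1):
    """z-score standardize a 1-D strain sequence, then cut it into
    overlapping subsequences with a sliding window.
    (Illustrative sketch of the preprocessing module.)"""
    z = (strain - strain.mean()) / strain.std()          # 0 mean, 1 std
    subs = np.stack([z[i:i + window]
                     for i in range(0, len(z) - window + 1, step)])
    return subs

strain = np.arange(100, dtype=float)   # placeholder strain sequence
subs = preprocess(strain)
print(subs.shape)  # (77, 24): 100 - 24 + 1 windows of length 24
```

With step size 1, consecutive windows overlap by 23 samples, so every sequence position (and hence every potential crack location) appears in many subsequences.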
Further, the feature self-learning and characterization module based on the stacked convolution self-encoder is composed of 3 convolutional self-encoder modules for extracting the features of the intercepted strain subsequences. The relationship between the input data x and the feature h_2 can be expressed as two formulas, specifically as follows:

h_1 = s_f(W * x + b)
h_2 = pool(h_1)

wherein h_1 is the feature after convolution; * denotes convolution; W is the convolution kernel; b is the bias vector; h_2 is the feature output by the encoder; pool denotes the pooling operation; s_f is the activation function in the encoder. The relationship between the feature h_2 and the reconstructed output x̂ can likewise be expressed as three formulas, specifically as follows:

h_3 = s_g(W'_1 * h_2 + b'_1)
h_4 = upsample(h_3)
x̂ = s_g(W'_2 * h_4 + b'_2)

wherein W'_1 and b'_1 are the convolution kernel and bias vector of the first convolution in the decoding process, and h_3 is the feature after the first convolution in the decoder; upsample denotes the upsampling process, and h_4 is the feature after upsampling; W'_2 and b'_2 are the convolution kernel and bias vector of the second convolution in the decoding process; s_g is the activation function in the decoder.
The process of the feature self-learning and characterization module based on the stacked self-encoder is specifically as follows:

h_{i,1} = s_f(W_i * h_{i-1,2} + b_i)
h_{i,2} = pool(h_{i,1})

wherein h_{i,1} is the convolved data in the i-th encoder; h_{i,2} is the pooled data in the i-th encoder, which is also the feature output by the i-th encoder (with h_{0,2} = x); s_f is the activation function; W_i and b_i are the convolution kernel and bias vector in the i-th encoder.
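The encoder and decoder relations described above (convolution, pooling, upsampling) can be sketched in plain NumPy. The kernels and biases below are random stand-ins rather than trained weights, and the helper names are illustrative:

```python
import numpy as np

def conv1d(x, w, b):
    # 'same' 1-D convolution: output length equals input length
    return np.convolve(x, w, mode="same") + b

def pool(h, k=2):
    # non-overlapping max pooling with window k
    return h[:len(h) // k * k].reshape(-1, k).max(axis=1)

def upsample(h, k=2):
    # nearest-neighbour upsampling, the size inverse of pooling
    return np.repeat(h, k)

relu = lambda v: np.maximum(v, 0)   # stand-in for s_f / s_g

# One convolutional self-encoder: encoder = conv + pool,
# decoder = conv + upsample + conv.
rng = np.random.default_rng(0)
x = rng.standard_normal(24)                    # strain subsequence, length 24
W, b = rng.standard_normal(3), 0.1
h1 = relu(conv1d(x, W, b))                     # h_1 = s_f(W * x + b)
h2 = pool(h1)                                  # h_2 = pool(h_1), length 12
h3 = relu(conv1d(h2, rng.standard_normal(3), 0.0))
h4 = upsample(h3)                              # back to length 24
x_hat = conv1d(h4, rng.standard_normal(3), 0.0)
print(h2.shape, x_hat.shape)  # (12,) (24,)
```

Stacking three such encoders simply feeds each encoder's pooled output h_{i,2} as the input of encoder i+1.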
A distributed strain crack detection method based on a stacked convolution self-encoder comprises the following steps:
step 1: strain sequence collection, namely laying an optical fiber sensor on the surface of a structure, and collecting distributed strain on the surface of the structure by using a distributed optical fiber sensing system based on BOTDA;
step 2: standardizing the acquired strain sequence by using z-score standardization, intercepting the strain sequence by using a sliding window with the length of 24 and the step length of 1 to obtain a strain subsequence, and marking the strain subsequence according to an intercepted position;
and step 3: automatically learning features characterizing the strain subsequence using a neural network based on a stacked convolution auto-encoder;
and 4, step 4: performing secondary classification on the extracted characteristics of the strain subsequence by adopting a Softmax classifier to complete crack detection;
further, the strain sequence acquisition in step 1 specifically comprises the following steps: the optical fiber sensor is adhered to the surface of a structure body through epoxy resin, two ends of the optical fiber are connected to a BOTDA-based distributed optical fiber sensing system, the BOTDA-based distributed optical fiber sensing system measures Brillouin frequency shift of the optical fiber through two light sources, namely pumping light and detection light, and distributed strain of the surface of the structure body is obtained through the linear relation between the Brillouin frequency shift and strain.
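The linear Brillouin-shift-to-strain conversion that this acquisition step relies on can be sketched as follows. The unstrained Brillouin frequency (about 10.85 GHz) and the strain coefficient (about 0.05 MHz per microstrain) are typical literature values assumed for illustration, not values given in the patent:

```python
def strain_from_brillouin(nu_b_mhz, nu_b0_mhz=10_850.0, c_eps=0.05):
    """Strain (in microstrain) from the measured Brillouin frequency shift,
    via the linear relation strain = (nu_B - nu_B0) / C_eps.
    nu_b0_mhz: unstrained Brillouin frequency (MHz); c_eps: strain
    coefficient (MHz per microstrain). Both defaults are assumptions."""
    return (nu_b_mhz - nu_b0_mhz) / c_eps

print(strain_from_brillouin(10_900.0))  # about 1000 microstrain for a 50 MHz shift
```

Applying this conversion at every sampling point along the fiber yields the one-dimensional distributed strain sequence fed to the preprocessing module.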
Further, the specific process of the strain sequence processing in the step 2 is as follows:
step 2.1: the mean of the collected strain sequences was subtracted by their standard deviations, and the data were obtained as 0 mean 1 standard deviation.
Step 2.2: the strain sequence is truncated into a set of strain subsequences of length 24 by sliding a sliding window of length 24 and step size 1 along the acquired strain sequence.
Step 2.3: marking the obtained strain subsequences as cut position mark labels, marking the strain subsequences which are cut by taking the crack as the center as crack subsequences, marking the strain subsequences at the left side and the right side of the strain subsequences as crack subsequences, and marking the rest strain subsequences as non-crack subsequences.
Further, the specific process of automatically learning and characterizing the strain subsequence features by using the neural network based on the stacked convolution self-encoder in the step 3 is as follows:
step 3.1: and initializing the stacked convolutional self-encoder, and determining the number of layers and the number of neurons of the stacked convolutional self-encoder. Randomly initializing a connection weight matrix and a bias vector in the stacked convolutional auto-encoder. The number of neurons in the input layer is equal to 24, which is the length of the strained subsequence.
Step 3.2: the stacked convolutional autocoder is pre-trained, the stacked convolutional autocoder is composed of 3 convolutional autocoders, and each convolutional autocoder is pre-trained by using the obtained strain subsequence. Loss function of pre-trained convolutional autocoder is the mean square error L between input and output 1 The method comprises the following steps:
wherein x is an input strain subsequence,for convolution of the reconstructed data from the encoder output, M is the number of all incoming strain subsequences, X m 、The mth strain sub-sequence of the input model and the corresponding reconstructed output mth sub-sequence are respectively.
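A sketch of the pre-training objective L_1, assuming the subsequences and their reconstructions are stacked row-wise into arrays:

```python
import numpy as np

def pretrain_loss(X, X_hat):
    """Mean-squared reconstruction error over M subsequences:
    L_1 = (1/M) * sum_m ||x_m - x_hat_m||^2."""
    M = X.shape[0]
    return np.sum((X - X_hat) ** 2) / M

X = np.ones((4, 24))       # 4 placeholder subsequences of length 24
X_hat = np.zeros((4, 24))  # a (deliberately bad) reconstruction
print(pretrain_loss(X, X_hat))  # 24.0: each subsequence contributes ||1||^2 = 24
```

Minimizing L_1 drives each autoencoder to reconstruct its input, so the pooled code h_2 must retain the information in the strain subsequence.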
Further, in step 4, a Softmax classifier is adopted to classify the strain subsequences, and the specific method is as follows:
step 4.1: constructing a Softmax classifier using a hypothesis function h for a given input z δ (z) for each class l, a probability value p (y = l | z) is estimated, l ∈ {0,1}, assuming a function h δ (z) outputting a vector of dimensions t representing the probability values of the t estimates, t =2, assuming the function h δ (z) is as follows:
wherein, delta 1 ,δ 2 Are all parameters of the Softmax classifier,z (i) to input, y (i) For output, the probability of the Softmax classifier classifying z into class l is:
wherein z is (i) To input, y (i) Is an output; t denotes the transpose of the matrix.
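For the two-class case, the hypothesis function is the usual softmax over the two scores δ_1^T z and δ_2^T z. A sketch, with made-up feature and parameter values:

```python
import numpy as np

def softmax_probs(z, delta):
    """p(y = l | z; delta) = exp(delta_l^T z) / sum_j exp(delta_j^T z)
    for the two-class (t = 2) Softmax classifier."""
    logits = delta @ z                      # delta: (2, d), z: (d,)
    e = np.exp(logits - logits.max())       # subtract max for numerical stability
    return e / e.sum()

z = np.array([1.0, -0.5])                   # placeholder encoder feature
delta = np.array([[0.2, 0.1],               # delta_1 (class 0: non-crack)
                  [0.4, -0.3]])             # delta_2 (class 1: crack)
p = softmax_probs(z, delta)
print(p)  # two probabilities summing to 1 (up to float rounding)
```

Subtracting the maximum logit before exponentiating leaves the probabilities unchanged but avoids overflow for large scores.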
Step 4.2: pre-training Sofmax, inputting the strain subsequence into a pre-trained stacked convolution self-encoder to obtain output characteristic z (i) In z is (i) And its label category y (i) Pre-training Softmax, wherein a loss function is a cross entropy function, and the method comprises the following specific steps:
wherein the content of the first and second substances,for all the parameters of the Softmax classifier,is the class probability of the output, λ 1 The weight coefficient is a weight coefficient connecting a weight matrix and a bias vector regular term in Softmax, M is the total number of input strain subsequences, and K is the category number and is 2.
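The regularized cross-entropy of step 4.2 can be sketched as follows; the value of the regularization weight λ_1 is an arbitrary placeholder:

```python
import numpy as np

def softmax_probs(Z, delta):
    # batch softmax: Z is (M, d), delta is (2, d) -> probabilities (M, 2)
    logits = Z @ delta.T
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pretrain_softmax_loss(Z, y, delta, lam1=1e-3):
    """Cross-entropy with an L2 penalty on the Softmax parameters:
    L_2 = -(1/M) * sum_m log p(y_m | z_m; delta) + lam1 * ||delta||^2."""
    M = Z.shape[0]
    P = softmax_probs(Z, delta)
    nll = -np.log(P[np.arange(M), y]).mean()
    return nll + lam1 * np.sum(delta ** 2)

Z = np.array([[1.0, 0.0], [0.0, 1.0]])      # two placeholder features
y = np.array([0, 1])                        # their labels
delta = np.zeros((2, 2))                    # zero-initialized classifier
loss = pretrain_softmax_loss(Z, y, delta)
print(loss)  # log(2) ≈ 0.6931: chance-level loss before any training
```

The same loss, with the penalty taken over the autoencoder weights ω instead (weight λ_2), is reused for the fine-tuning stage of step 4.3.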
Step 4.3: and fine adjustment, namely stacking the coding part of the convolutional self-encoder and then connecting a Softmax classifier, so that the convolutional self-encoder has a classification function. And utilizing the pre-trained strain subsequence to finely adjust a connection weight matrix and an offset vector of the coding part of the stacked convolutional self-encoder and the overall structure of the Softmax classifier. The loss function during trimming is a cross loss function, and specifically, the loss function is as follows:
where ω is the connected weight matrix and offset vector in the stacked convolutional auto-encoder, and Θ is ω and δ, λ 2 The method is characterized in that weight coefficients for connecting a weight matrix and a bias vector regular term in a stacked convolution self-encoder are used.
Step 4.4: the Softmax classifier receives the features output by the stacked convolutional self-encoder as its input, and outputs a strain subsequence of class 0 or 1,0 representing non-crack, and 1 representing crack; for feature z of stacked self-encoder output (i) Selecting the probability p (y) (i) =l|z (i) (ii) a δ) the largest class i as the class to which the feature corresponds.
Compared with the prior art, the invention has the following beneficial technical effects:
the invention realizes data acquisition by the distributed optical fiber sensor and changes the traditional point-to-point sensing mode. Differences between different data are reduced by normalization. Meanwhile, the contradiction between high spatial resolution and low signal-to-noise ratio of the distributed optical fiber sensor is overcome by a method based on a stacked convolution self-encoder. Stacked convolutional autocoder can extract highly robust, distinguishable features for classification in data with low signal-to-noise ratio. The crack detection device has the advantages that the crack detection device is remarkable in crack detection, can detect the micro cracks, and is improved in the detection effect of the micro cracks.
Drawings
FIG. 1 is a schematic flow diagram of the system of the present invention;
FIG. 2 is a schematic diagram of a convolutional auto-encoder in the present invention;
FIG. 3 is a schematic diagram of a stacked convolutional auto-encoder of the present invention;
FIG. 4 is a process schematic of the method of the present invention;
FIG. 5 is a diagram illustrating pre-training and fine-tuning in accordance with the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
referring to fig. 1 to 5, a distributed strain crack detection system based on a stacked convolution self-encoder includes a strain sequence acquisition module; a strain sequence preprocessing module; a feature self-learning and characterization module based on the stacked convolution self-encoder; and (3) a Softmax classification identification module (the specific flow is shown in figure 1).
The strain sequence acquisition module is used for acquiring the distributed strain of the structural body, and the acquired distributed strain of the structural body is a one-dimensional sequence;
the strain sequence preprocessing module comprises: a z-score normalization module that normalizes strain sequences to 0-mean 1 standard deviation data and a sliding window module. The sliding window module cuts the normalized strain sequence through a sliding window with a length of 24 and a step size of 1 into a set of strain subsequences with lengths of 24. Marking the obtained strain subsequences as cut position mark labels, marking the strain subsequences which are cut by taking the crack as the center as crack subsequences, marking the strain subsequences at the left side and the right side of the strain subsequences as crack subsequences, and marking the rest strain subsequences as non-crack subsequences.
The feature self-learning and characterization module based on the stacked convolution self-encoder is composed of 3 convolutional self-encoder modules for extracting the features of the intercepted strain subsequences. In a convolutional self-encoder module, the relationship between the input data x and the feature h_2 can be expressed as two formulas, specifically as follows:

h_1 = s_f(W * x + b)
h_2 = pool(h_1)

wherein h_1 is the feature after convolution; * denotes convolution; W is the convolution kernel; b is the bias vector; h_2 is the feature output by the encoder; pool denotes the pooling operation; s_f is the activation function in the encoder. The relationship between the feature h_2 and the reconstructed output x̂ can likewise be expressed as three formulas, specifically as follows:

h_3 = s_g(W'_1 * h_2 + b'_1)
h_4 = upsample(h_3)
x̂ = s_g(W'_2 * h_4 + b'_2)

wherein W'_1 and b'_1 are the convolution kernel and bias vector of the first convolution in the decoding process, and h_3 is the feature after the first convolution in the decoder; upsample denotes the upsampling process, and h_4 is the feature after upsampling; W'_2 and b'_2 are the convolution kernel and bias vector of the second convolution in the decoding process; s_g is the activation function in the decoder.
The process of the feature self-learning and characterization module based on the stacked self-encoder is specifically as follows:

h_{i,1} = s_f(W_i * h_{i-1,2} + b_i)
h_{i,2} = pool(h_{i,1})

wherein h_{i,1} is the convolved data in the i-th encoder; h_{i,2} is the pooled data in the i-th encoder, which is also the feature output by the i-th encoder (with h_{0,2} = x); s_f is the activation function; W_i and b_i are the convolution kernel and bias vector in the i-th encoder.
The Softmax classification module adopts a Softmax classifier to classify the strain subsequences, and the specific method is as follows:
a Softmax classifier is constructed. For a given input z, a hypothesis function h_δ(z) is used to estimate, for each class l, a probability value p(y = l | z), l ∈ {0, 1}. The hypothesis function h_δ(z) outputs a t-dimensional vector representing the t estimated probability values, with t = 2:

h_δ(z^{(i)}) = [p(y^{(i)} = 0 | z^{(i)}; δ), p(y^{(i)} = 1 | z^{(i)}; δ)]^T = (1 / Σ_{j=1}^{2} e^{δ_j^T z^{(i)}}) · [e^{δ_1^T z^{(i)}}, e^{δ_2^T z^{(i)}}]^T

wherein δ_1 and δ_2 are the parameters of the Softmax classifier, z^{(i)} is the input, and y^{(i)} is the output. The probability that the Softmax classifier classifies z into class l is:

p(y^{(i)} = l | z^{(i)}; δ) = e^{δ_l^T z^{(i)}} / Σ_{j=1}^{2} e^{δ_j^T z^{(i)}}

wherein z^{(i)} is the input and y^{(i)} is the output.
The Softmax classifier receives the features output by the stacked convolutional self-encoder as its input and outputs the class, 0 or 1, of each strain subsequence, with 0 representing non-crack and 1 representing crack. For a feature z^{(i)} output by the stacked convolutional self-encoder, the class l with the largest probability p(y^{(i)} = l | z^{(i)}; δ) is selected as the class corresponding to the feature.
A distributed strain crack detection method based on a stacked convolution self-encoder comprises the following specific steps as shown in FIG. 5:
1) Acquiring a strain sequence;
2) Standardizing the collected strain sequence by using z-score, intercepting the strain sequence by using a sliding window with the length of 24 and the step length of 1 to obtain a strain subsequence, and marking the strain subsequence according to the intercepted position;
2.1: the mean of the collected strain sequences was subtracted by their standard deviations, and the data were obtained as 0 mean 1 standard deviation.
2.2: the strain sequence is truncated into a set of strain subsequences of length 24 by sliding a sliding window of length 24 and step size 1 along the acquired strain sequence.
2.3: marking the obtained strain subsequences as cut position mark labels, marking the strain subsequences which are cut by taking the crack as the center as crack subsequences, and marking the strain subsequences at the left side and the right side of the strain subsequences as the crack subsequences.
3) Automatically learning features characterizing the strain subsequence with a neural network based on a stacked self-encoder;
3.1: initializing a stacked convolutional self-encoder, and determining the layer number and the neuron number of the stacked convolutional self-encoder. Randomly initializing a connection weight matrix and a bias vector in the stacked convolutional auto-encoder. The number of neurons in the input layer is equal to 24, which is the length of the strained subsequence.
3.2: pre-training a stacked convolutional auto-encoder, the stacked auto-encoder consisting of 3 convolutional auto-encoders, each convolutional auto-encoder being pre-trained with the obtained strain subsequence. The loss function of the pre-training convolutional self-encoder is the mean square error between the input and the output, and is specifically as follows:
wherein x is an input strain subsequence,for convolution of the reconstructed data from the encoder output, M is the number of all incoming strain subsequences, X m 、Respectively the mth strain subsequence of the input model and the corresponding mth subsequence of the reconstructed output.
4) Performing binary classification on the extracted strain subsequence features with a Softmax classifier to complete crack detection;
4.1: constructing a Softmax classifier using a hypothesis function h for a given input z δ (z) for each class l, a probability value p (y = l | z) is estimated, l ∈ {0,1}, assuming a function h δ (z) outputting a t-dimensional vector representing the probability values of the t estimates, t =2, assuming the function h δ (z) is as follows:
wherein, delta 1 ,δ 2 Are all parameters of the Softmax classifier,z (i) to input, y (i) For output, the probability that the Softmax classifier classifies z into class l is:
wherein z is (i) To input, y (i) Is an output;
4.2: pre-training Sofmax, inputting the strain subsequence into the pre-trained stacked convolution self-encoder to obtain the output characteristic z (i) In z is (i) And its label category y (i) Pre-training Softmax, wherein a loss function is a cross entropy function, and the method comprises the following specific steps:
wherein the content of the first and second substances,for all the parameters of the Softmax classifier,is the class probability of the output, λ 1 The weight coefficient is a weight coefficient connecting a weight matrix and a bias vector regular term in Softmax, M is the total number of input strain subsequences, and K is the category number and is 2.
4.3: and fine adjustment, namely stacking the coding part of the convolutional self-encoder and then connecting a Softmax classifier, so that the convolutional self-encoder has a classification function. And utilizing the pre-trained strain subsequence to finely adjust a connection weight matrix and an offset vector of the coding part of the stacked convolutional self-encoder and the overall structure of the Softmax classifier. The loss function during trimming is a cross loss function, and specifically, the loss function is as follows:
where ω is the connected weight matrix and offset vector in the stacked convolutional auto-encoder, and Θ is ω and δ, λ 2 The method is characterized in that weight coefficients for connecting a weight matrix and a bias vector regular term in a stacked convolution self-encoder are used.
4.4: the Softmax classifier receives the features output by the stacked convolutional self-encoder as its input, and outputs a strain subsequence of class 0 or 1,0 representing non-crack, and 1 representing crack; for stacked convolution from the feature z of the encoder output (i) Selecting the probability p (y) (i) =l|z (i) (ii) a δ) the largest class i as the class to which the feature corresponds.
Effects of the implementation
The optical fiber sensor was first pre-tensioned and then adhered to the surface of a laboratory steel structure with epoxy resin. The two ends of the optical fiber sensor were connected to a BOTDA-based distributed optical fiber sensing system, and distributed strain data of the structure surface, distributed along the optical fiber sensor, were obtained. With the micro-crack detection method based on the stacked convolution self-encoder, micro cracks with an opening width of 23 μm were accurately detected from the acquired distributed strain data. The method has good robustness to environmental noise and is an effective method for detecting micro cracks on the surface of steel structures.
Claims (9)
1. A distributed strain microcrack detection system based on stacked convolutional auto-encoders, comprising:
strain sequence acquisition module: the system is used for acquiring distributed strain of the surface of the structure;
a strain sequence preprocessing module: the strain acquisition device is used for performing z-score standardization on the acquired distributed strain and intercepting the distributed strain into a strain subsequence;
the self-learning and characterization module of the characteristics based on the stacked convolution self-encoder comprises: the system consists of 3 convolutional automatic encoder modules and is used for extracting the characteristics of the divided strain subsequences;
and a Softmax classification and identification module, used for performing binary classification on the extracted subsequence features and judging the probability that each subsequence belongs to the crack or non-crack class.
2. The distributed strain micro-crack detection system based on the stacked convolutional self-encoder as claimed in claim 1, wherein the strain sequence acquisition module is specifically: the optical fiber sensor is laid on the surface of the structure, and a distributed optical fiber sensing system based on BOTDA is used for collecting distributed strain on the surface of the structure.
3. The distributed strain microcrack detection system based on stacked convolutional self-encoder of claim 1, wherein the strain sequence preprocessing module comprises a z-score normalization module and a sliding window module;
the z-score normalization module normalizes the distributed strain to data of 0 mean 1 standard deviation;
the sliding window module cuts the normalized strain sequence into a set of strain subsequences of length 24 through a sliding window of length 24 and step size 1.
4. The distributed strain micro-crack detection system based on the stacked convolutional self-encoder as claimed in claim 1, wherein the feature self-learning and characterization module based on the stacked convolutional self-encoder is composed of 3 convolutional self-encoder modules for extracting the features of the intercepted strain subsequences; in a convolutional self-encoder module, the relationship between the input data x and the feature h_2 is specifically as follows:

h_1 = s_f(W * x + b)
h_2 = pool(h_1)

wherein h_1 is the feature after convolution; * denotes convolution; W is the convolution kernel; b is the bias vector; h_2 is the feature output by the encoder; pool denotes the pooling operation; s_f is the activation function in the encoder; the relationship between the feature h_2 and the reconstructed output x̂ is specifically as follows:

h_3 = s_g(W'_1 * h_2 + b'_1)
h_4 = upsample(h_3)
x̂ = s_g(W'_2 * h_4 + b'_2)

wherein W'_1 and b'_1 are the convolution kernel and bias vector of the first convolution in the decoding process, and h_3 is the feature after the first convolution in the decoder; upsample denotes the upsampling process, and h_4 is the feature after upsampling; W'_2 and b'_2 are the convolution kernel and bias vector of the second convolution in the decoding process; s_g is the activation function in the decoder;
the process of the feature self-learning and characterization module based on the stacked convolutional self-encoder is specifically as follows:

h_{i,1} = s_f(W_i * h_{i-1,2} + b_i)
h_{i,2} = pool(h_{i,1})

wherein h_{i,1} is the convolved data in the i-th encoder; h_{i,2} is the pooled data in the i-th encoder, which is also the feature output by the i-th encoder; s_f is the activation function; W_i and b_i are the convolution kernel and bias vector in the i-th encoder.
5. A distributed strain crack detection method based on a stacked convolution self-encoder is characterized by comprising the following steps:
step 1: collecting distributed strain on the surface of the structure;
step 2: carrying out z-score standardization on the acquired distributed strain and intercepting the distributed strain into a strain subsequence;
and step 3: automatically learning features characterizing the strain subsequence using a neural network based on a stacked convolution auto-encoder;
step 4: performing binary classification on the extracted strain subsequence features with a Softmax classifier to complete crack detection.
6. The distributed strain crack detection method based on the stacked convolution self-encoder as claimed in claim 5, wherein the distributed strain acquisition in step 1 is specifically: adhering the optical fiber sensor to the surface of the structure with epoxy resin, and connecting the two ends of the optical fiber to a BOTDA-based distributed optical fiber sensing system; the BOTDA-based distributed optical fiber sensing system measures the Brillouin frequency shift of the optical fiber through two light sources, pump light and probe light, and the distributed strain of the structure surface is obtained through the linear relation between Brillouin frequency shift and strain.
7. The distributed strain crack detection method based on the stacked convolution self-encoder as claimed in claim 5, wherein the step 2 specifically includes:
step 2.1: subtracting the mean value of the acquired distributed strain, and dividing the mean value by the standard deviation to obtain data of 0 mean value 1 standard deviation;
step 2.2: intercepting the normalized strain sequence into a group of strain subsequences with the length of 24 by using a sliding window with the length of 24 and the step size of 1;
step 2.3: labeling the obtained strain subsequences according to their intercepted positions: the strain subsequence intercepted with the crack as its center is labeled as a crack subsequence, the strain subsequences immediately to its left and right are also labeled as crack subsequences, and the remaining strain subsequences are labeled as non-crack subsequences.
8. The distributed strain fracture detection method based on the stacked convolutional self-encoder as claimed in claim 5, wherein the specific process of using the neural network based on the stacked convolutional self-encoder to automatically learn and characterize the strain subsequence feature in step 3 is as follows:
step 3.1: initializing a convolution stacking self-encoder, determining the number of layers and the number of neurons of the convolution stacking self-encoder, randomly initializing a connection weight matrix and a bias vector in the convolution stacking self-encoder, and enabling the number of the neurons of an input layer to be equal to the length of a strain subsequence;
step 3.2: pre-training the stacked convolution self-encoder: the stacked convolution self-encoder is composed of 3 convolutional self-encoders, each of which is pre-trained on the obtained strain subsequences; the loss function for pre-training each convolutional self-encoder is the mean square error L_1 between its input and output:

L_1 = (1/M) Σ_{i=1}^{M} ||x^(i) - x̂^(i)||^2

where x^(i) is the i-th input strain subsequence, x̂^(i) is its reconstruction, and M is the total number of input strain subsequences.
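A minimal sketch of greedily pre-training one such convolutional self-encoder with the MSE reconstruction loss, in PyTorch; the kernel size, channel count, optimizer settings, and random input data are illustrative, not taken from the patent:

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """One convolutional auto-encoder: a conv encoder and a conv decoder
    that reconstructs the input at the same length."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(in_ch, hid_ch, k, padding=k // 2), nn.ReLU())
        self.dec = nn.Conv1d(hid_ch, in_ch, k, padding=k // 2)

    def forward(self, x):
        return self.dec(self.enc(x))

torch.manual_seed(0)
x = torch.randn(16, 1, 24)            # 16 stand-in strain subsequences of length 24
ae = ConvAE(1, 8)
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
losses = []
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(x), x)   # L_1: MSE between input and output
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

In the stacked setting, each subsequent auto-encoder would be pre-trained on the encoded output of the previous one.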
9. The distributed strain crack detection method based on the stacked convolution self-encoder as claimed in claim 5, wherein a Softmax classifier is adopted in step 4 to classify the strain subsequences, and the specific method is as follows:
step 4.1: constructing a Softmax classifier: for a given input z, a hypothesis function h_δ(z) estimates a probability value p(y = l | z) for each class l, l ∈ {0, 1}; h_δ(z) outputs a t-dimensional vector of the t estimated probabilities, with t = 2:

h_δ(z^(i)) = [p(y^(i) = 0 | z^(i); δ), p(y^(i) = 1 | z^(i); δ)]^T = (1 / Σ_{j=1}^{2} e^{δ_j^T z^(i)}) · [e^{δ_1^T z^(i)}, e^{δ_2^T z^(i)}]^T
where δ_1, δ_2 are the parameters of the Softmax classifier, z^(i) is the input, y^(i) is the output, and T denotes the matrix transpose; the probability that the Softmax classifier assigns z^(i) to class l is:

p(y^(i) = l | z^(i); δ) = e^{δ_l^T z^(i)} / Σ_{j=1}^{t} e^{δ_j^T z^(i)}
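The hypothesis function of step 4.1 can be computed numerically as follows; the parameter and feature values are illustrative:

```python
import numpy as np

def h_delta(z, delta):
    """Softmax hypothesis: delta is a (t, d) matrix with one parameter
    row per class; returns the t class probabilities for feature z."""
    scores = delta @ z
    scores -= scores.max()        # shift by the max for numerical stability
    e = np.exp(scores)
    return e / e.sum()

# Illustrative parameters delta_1, delta_2 for t = 2 classes, d = 2 features.
delta = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
p = h_delta(np.array([2.0, 0.0]), delta)
```

The two outputs sum to one, and the class whose parameter vector best aligns with z receives the larger probability.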
step 4.2: pre-training Softmax: the strain subsequences are input into the pre-trained stacked convolution self-encoder to obtain the output features z^(i), and Softmax is pre-trained on z^(i) and the corresponding class labels y^(i); the loss function is the cross-entropy:

L_2 = -(1/M) Σ_{i=1}^{M} Σ_{k=1}^{K} 1{y^(i) = k} log p(y^(i) = k | z^(i); δ) + λ_1 ||δ||^2

where p(y^(i) = k | z^(i); δ) is the output class probability, λ_1 is the weight coefficient of the regularization term on the connection weight matrix and bias vector in Softmax, M is the total number of input strain subsequences, and K is the number of classes, K = 2;
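A toy sketch of the Softmax pre-training of step 4.2, minimizing the cross-entropy plus an L2 regularization term by gradient descent; the random stand-in features (in place of the self-encoder output), labels, λ_1, and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, K = 200, 8, 2                    # subsequences, feature dim, classes
z = rng.normal(size=(M, d))            # stand-in features z^(i)
y = (z[:, 0] > 0).astype(int)          # toy crack/non-crack labels y^(i)
Y = np.eye(K)[y]                       # one-hot encoding
delta = np.zeros((K, d))               # Softmax parameters
lam1, lr = 1e-3, 0.5                   # illustrative lambda_1 and step size
losses = []
for _ in range(100):
    s = z @ delta.T
    s -= s.max(axis=1, keepdims=True)                       # numerical stability
    p = np.exp(s) / np.exp(s).sum(axis=1, keepdims=True)    # class probabilities
    loss = -np.mean(np.sum(Y * np.log(p), axis=1)) + lam1 * np.sum(delta ** 2)
    grad = (p - Y).T @ z / M + 2 * lam1 * delta             # cross-entropy gradient
    delta -= lr * grad
    losses.append(loss)
acc = np.mean((z @ delta.T).argmax(axis=1) == y)            # training accuracy
```

On this linearly separable toy data the loss falls from log 2 and the classifier fits the labels almost perfectly.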
step 4.3: fine-tuning the connection weight matrices and bias vectors of the encoding part of the stacked convolution self-encoder together with the Softmax classifier as a whole, using the pre-training strain subsequences; the loss function during fine-tuning is again the cross-entropy:

L_3 = -(1/M) Σ_{i=1}^{M} Σ_{k=1}^{K} 1{y^(i) = k} log p(y^(i) = k | x^(i); Θ) + λ_2 ||ω||^2

where ω denotes the connection weight matrices and bias vectors in the stacked convolution self-encoder, Θ denotes the full parameter set comprising ω and δ, and λ_2 is the weight coefficient of the regularization term on the connection weight matrices and bias vectors in the stacked convolution self-encoder;
step 4.4: the Softmax classifier receives the features output by the stacked convolution self-encoder as its input and outputs, for each strain subsequence, class 0 or class 1, where 0 represents non-crack and 1 represents crack; for a feature z^(i) output by the stacked self-encoder, the class l with the largest probability p(y^(i) = l | z^(i); δ) is selected as the class of that feature.
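The decision rule of step 4.4 is an argmax over the Softmax output; the probability vectors below are illustrative:

```python
import numpy as np

def predict(prob):
    """prob: (n, 2) array of Softmax outputs per strain subsequence;
    returns 0 (non-crack) or 1 (crack) for each row, i.e. the class l
    with the largest probability p(y = l | z; delta)."""
    return np.argmax(prob, axis=1)

# Two subsequences: one confidently non-crack, one confidently crack.
preds = predict(np.array([[0.9, 0.1],
                          [0.2, 0.8]]))
```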
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910974481.9A CN110738168B (en) | 2019-10-14 | 2019-10-14 | Distributed strain micro crack detection system and method based on stacked convolution self-encoder |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110738168A CN110738168A (en) | 2020-01-31 |
CN110738168B true CN110738168B (en) | 2023-02-14 |
Family
ID=69268869
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910974481.9A Active CN110738168B (en) | 2019-10-14 | 2019-10-14 | Distributed strain micro crack detection system and method based on stacked convolution self-encoder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110738168B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111510205B (en) * | 2020-04-21 | 2022-07-12 | 北京邮电大学 | Optical cable fault positioning method, device and equipment based on deep learning |
CN111754445B (en) * | 2020-06-02 | 2022-03-18 | 国网湖北省电力有限公司宜昌供电公司 | Coding and decoding method and system for optical fiber label with hidden information |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104021373B (en) * | 2014-05-27 | 2017-02-15 | 江苏大学 | Semi-supervised speech feature variable factor decomposition method |
CN108932480B (en) * | 2018-06-08 | 2022-03-15 | 电子科技大学 | Distributed optical fiber sensing signal feature learning and classifying method based on 1D-CNN |
CN108876796A (en) * | 2018-06-08 | 2018-11-23 | 长安大学 | A kind of lane segmentation system and method based on full convolutional neural networks and condition random field |
CN109389171B (en) * | 2018-10-11 | 2021-06-25 | 云南大学 | Medical image classification method based on multi-granularity convolution noise reduction automatic encoder technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112233091B (en) | Wind turbine blade image damage detection and positioning method | |
CN108074231B (en) | Magnetic sheet surface defect detection method based on convolutional neural network | |
CN112884747B (en) | Automatic bridge crack detection system integrating cyclic residual convolution and context extractor network | |
CN108918527A (en) | A kind of printed matter defect inspection method based on deep learning | |
CN110738168B (en) | Distributed strain micro crack detection system and method based on stacked convolution self-encoder | |
CN110060251A (en) | A kind of building surface crack detecting method based on U-Net | |
CN109583295B (en) | Automatic detection method for switch machine notch based on convolutional neural network | |
CN112949817A (en) | Water supply pipe leakage edge equipment detection method based on time convolution neural network | |
CN113866455A (en) | Bridge acceleration monitoring data anomaly detection method, system and device based on deep learning | |
CN103745238A (en) | Pantograph identification method based on AdaBoost and active shape model | |
CN110715929B (en) | Distributed strain micro crack detection system and method based on stacking self-encoder | |
CN114782753A (en) | Lung cancer histopathology full-section classification method based on weak supervision learning and converter | |
CN114714145A (en) | Method for enhancing, comparing, learning and monitoring tool wear state by using Gelam angular field | |
CN114660180A (en) | Sound emission and 1D CNNs-based light-weight health monitoring method and system for medium and small bridges | |
CN114519293A (en) | Cable body fault identification method based on hand sample machine learning model | |
CN117252459A (en) | Fruit quality evaluation system based on deep learning | |
CN106990066B (en) | Method and device for identifying coal types | |
Liu et al. | Automatic terahertz recognition of hidden defects in layered polymer composites based on a deep residual network with transfer learning | |
CN115452957B (en) | Small sample metal damage identification method based on attention prototype network | |
CN112446612B (en) | Assessment method of damage assessment system of soft rigid arm mooring system connection structure | |
CN117830588A (en) | System and method for detecting bamboo chopstick production based on artificial intelligence and for automatic assembly line | |
KR102499986B1 (en) | Method for detecting micro plastic | |
CN116718382A (en) | Bearing early fault online detection method based on contrast learning | |
CN117030722A (en) | Method and system for detecting damage of carbon fiber composite core overhead conductor | |
CN117893812A (en) | Method for rapidly identifying degradable plastic and recyclable device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||