CN110738168B - Distributed strain micro crack detection system and method based on stacked convolution self-encoder - Google Patents

Distributed strain micro crack detection system and method based on stacked convolution self-encoder Download PDF

Info

Publication number
CN110738168B
Authority
CN
China
Prior art keywords
strain
encoder
self
stacked
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910974481.9A
Other languages
Chinese (zh)
Other versions
CN110738168A (en)
Inventor
宋青松
王浩林
陈禹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN201910974481.9A priority Critical patent/CN110738168B/en
Publication of CN110738168A publication Critical patent/CN110738168A/en
Application granted granted Critical
Publication of CN110738168B publication Critical patent/CN110738168B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/80 Recognising image objects characterised by unique random patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Neurology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a distributed strain micro crack detection system and method based on a stacked convolutional self-encoder. The method can accurately detect micro cracks with an opening width of 23 μm on the surface of a laboratory steel structure, and provides an efficient solution for detecting cracks in structural bodies from distributed strain.

Description

Distributed strain micro crack detection system and method based on stacked convolution self-encoder
Technical Field
The invention belongs to the field of pattern recognition, and particularly relates to a distributed strain micro crack detection system and method based on a stacked convolution self-encoder.
Background
Crack detection has long been an important issue in the field of structural health monitoring. Existing crack detection methods include manual observation and nondestructive testing. Manual observation requires dedicated maintenance personnel to perform periodic inspections with professional tools, and is inefficient and highly subjective. Nondestructive testing methods mainly detect structural cracks from data obtained by ultrasound, X-rays, ground penetrating radar, cameras and the like. These sensors are all point-type sensors: they cannot measure the structure as a whole, so cracks are easily missed.
Disclosure of Invention
The invention aims to provide a distributed strain micro crack detection system and method based on a stacked convolution self-encoder, so as to overcome the defects in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
a stacked convolution auto-encoder based distributed strain fracture detection system, comprising:
strain sequence acquisition module: the system is used for acquiring distributed strain of the surface of the structure;
a strain sequence preprocessing module: the strain acquisition device is used for performing z-score standardization on the acquired distributed strain and intercepting the distributed strain into a strain subsequence;
the self-learning and characterization module of the characteristics based on the stacked convolution self-encoder comprises: the system comprises 3 convolution automatic encoder modules and a plurality of strain sub-sequences, wherein the convolution automatic encoder modules are used for extracting the characteristics of the divided strain sub-sequences;
and the Softmax classification and identification module is used for carrying out secondary classification on the extracted subsequence characteristics and judging the probability that each subsequence belongs to a crack subsequence and a non-crack subsequence.
Further, in the strain sequence acquisition module, an optical fiber sensor is laid on the surface of the structure, and a BOTDA-based distributed optical fiber sensing system is used to collect the distributed strain of the structure surface.
Further, the strain sequence preprocessing module comprises a z-score standardization module and a sliding window module;
the z-score standardization module standardizes the strain sequence to zero-mean, unit-standard-deviation data;
the sliding window module cuts the standardized strain sequence into a set of strain subsequences of length 24 using a sliding window of length 24 and step size 1.
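As an illustration only (not part of the patent text), the preprocessing just described can be sketched in a few lines of Python; the function names and the use of NumPy are assumptions made for this sketch.

```python
import numpy as np

def zscore(strain):
    """Standardize a 1-D distributed strain sequence to zero mean and unit standard deviation."""
    strain = np.asarray(strain, dtype=float)
    return (strain - strain.mean()) / strain.std()

def sliding_windows(sequence, length=24, step=1):
    """Cut a standardized strain sequence into overlapping subsequences of the given length."""
    return np.stack([sequence[i:i + length]
                     for i in range(0, len(sequence) - length + 1, step)])

# Example: a 1000-point strain sequence becomes 977 subsequences of length 24.
raw = np.random.randn(1000)          # placeholder for BOTDA-measured strain
subsequences = sliding_windows(zscore(raw))
print(subsequences.shape)            # (977, 24)
```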
Further, the feature self-learning and characterization module based on the stacked convolutional self-encoder is composed of 3 convolutional self-encoder modules and is used for extracting features from the cut strain subsequences. In each convolutional self-encoder, the relationship between the input data x and the feature h_2 can be expressed by two formulas:

h_1 = s_f(x \ast W + b)

h_2 = \mathrm{pool}(h_1)

where h_1 is the feature after convolution; \ast denotes convolution; W is the convolution kernel; b is the bias vector; h_2 is the feature output by the encoder; pool denotes the pooling operation; and s_f is the activation function in the encoder. The relationship between the feature h_2 and the output \hat{x} can be expressed by three formulas:

\hat{h}_1 = s_f(h_2 \ast \tilde{W}_1 + \tilde{b}_1)

\hat{h}_2 = \mathrm{upsample}(\hat{h}_1)

\hat{x} = s_g(\hat{h}_2 \ast \tilde{W}_2 + \tilde{b}_2)

where \tilde{W}_1 and \tilde{b}_1 are the convolution kernel and bias vector of the first convolution in the decoding process; \hat{h}_1 is the feature after the first convolution in the decoder; upsample denotes the upsampling operation; \hat{h}_2 is the feature after upsampling; \tilde{W}_2 and \tilde{b}_2 are the convolution kernel and bias vector of the second convolution in the decoding process; and s_g is the activation function in the decoder.

The process of the feature self-learning and characterization module based on the stacked convolutional self-encoder is specifically as follows:

h_{i,1} = s_f(h_{i-1,2} \ast W_i + b_i)

h_{i,2} = \mathrm{pool}(h_{i,1})

where h_{i,1} is the convolved data in the i-th encoder, h_{i,2} is the pooled data in the i-th encoder (which is also the feature output by the i-th encoder, with h_{0,2} taken as the input subsequence x), s_f is the activation function, and W_i and b_i are the convolution kernel and bias vector of the i-th encoder.
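For readers who prefer code to notation, a minimal sketch of one 1-D convolutional self-encoder following the two-formula encoder and three-formula decoder above is given here; the PyTorch framework, layer widths, kernel size, padding, and activation choices are assumptions not specified in the patent.

```python
import torch
import torch.nn as nn

class ConvAutoEncoder1D(nn.Module):
    """One convolutional self-encoder: conv + pool encoder, conv + upsample + conv decoder."""
    def __init__(self, in_channels=1, feat_channels=16, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Encoder: h1 = s_f(x * W + b), h2 = pool(h1)
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, feat_channels, kernel_size, padding=pad),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Decoder: convolution, upsampling, second convolution back to the input shape
        self.decoder = nn.Sequential(
            nn.Conv1d(feat_channels, feat_channels, kernel_size, padding=pad),
            nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv1d(feat_channels, in_channels, kernel_size, padding=pad),
        )

    def forward(self, x):
        h2 = self.encoder(x)          # feature output by the encoder
        x_hat = self.decoder(h2)      # reconstruction of the input subsequence
        return h2, x_hat

# A batch of 8 strain subsequences of length 24 (1 channel each).
x = torch.randn(8, 1, 24)
features, reconstruction = ConvAutoEncoder1D()(x)
print(features.shape, reconstruction.shape)   # torch.Size([8, 16, 12]) torch.Size([8, 1, 24])
```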
A distributed strain crack detection method based on a stacked convolutional self-encoder comprises the following steps:
Step 1: strain sequence acquisition: lay an optical fiber sensor on the surface of the structure and collect the distributed strain of the structure surface with a BOTDA-based distributed optical fiber sensing system;
Step 2: standardize the acquired strain sequence with z-score standardization, cut it into strain subsequences with a sliding window of length 24 and step size 1, and label each strain subsequence according to its cutting position;
Step 3: automatically learn features characterizing the strain subsequences using a neural network based on the stacked convolutional self-encoder;
Step 4: perform binary classification on the extracted strain subsequence features with a Softmax classifier to complete crack detection.
further, the strain sequence acquisition in step 1 specifically comprises the following steps: the optical fiber sensor is adhered to the surface of a structure body through epoxy resin, two ends of the optical fiber are connected to a BOTDA-based distributed optical fiber sensing system, the BOTDA-based distributed optical fiber sensing system measures Brillouin frequency shift of the optical fiber through two light sources, namely pumping light and detection light, and distributed strain of the surface of the structure body is obtained through the linear relation between the Brillouin frequency shift and strain.
Further, the strain sequence processing in step 2 is specifically as follows:
Step 2.1: subtract the mean of the acquired strain sequence and divide by its standard deviation, obtaining zero-mean, unit-standard-deviation data.
Step 2.2: cut the standardized strain sequence into a set of strain subsequences of length 24 by sliding a window of length 24 with step size 1 along the sequence.
Step 2.3: label the obtained strain subsequences according to their cutting positions: the subsequence cut with a crack at its center is labeled as a crack subsequence, the subsequences immediately to its left and right are also labeled as crack subsequences, and the remaining strain subsequences are labeled as non-crack subsequences.
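A minimal sketch of the labelling rule in step 2.3, assuming crack positions are known as sample indices into the original strain sequence; the helper name and the notion of window centre used here are illustrative, not from the patent.

```python
def label_windows(num_windows, crack_positions, length=24, step=1):
    """Label each window 1 (crack) or 0 (non-crack).

    A window is a crack window if a crack falls at its centre; its immediate
    left and right neighbours are also labelled as crack windows.
    """
    labels = [0] * num_windows
    for crack in crack_positions:
        centre_window = (crack - length // 2) // step   # window whose centre lies at the crack
        for j in (centre_window - 1, centre_window, centre_window + 1):
            if 0 <= j < num_windows:
                labels[j] = 1
    return labels

# Example: 977 windows from a 1000-point sequence, cracks at samples 300 and 700.
labels = label_windows(977, [300, 700])
print(sum(labels))   # 6 crack-labelled windows
```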
Further, the specific process of automatically learning features characterizing the strain subsequences with the stacked-convolutional-self-encoder-based neural network in step 3 is as follows:
Step 3.1: initialize the stacked convolutional self-encoder, determining its number of layers and number of neurons. The connection weight matrices and bias vectors in the stacked convolutional self-encoder are randomly initialized. The number of neurons in the input layer equals 24, the length of a strain subsequence.
Step 3.2: pre-train the stacked convolutional self-encoder. The stacked convolutional self-encoder is composed of 3 convolutional self-encoders, and each convolutional self-encoder is pre-trained with the obtained strain subsequences. The loss function for pre-training a convolutional self-encoder is the mean square error L_1 between its input and output:

L_1 = \frac{1}{M} \sum_{m=1}^{M} \left\| x_m - \hat{x}_m \right\|^2

where x is an input strain subsequence, \hat{x} is the reconstruction output by the convolutional self-encoder, M is the number of input strain subsequences, and x_m and \hat{x}_m are the m-th strain subsequence fed to the model and the corresponding m-th reconstructed output, respectively.
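A sketch of the greedy layer-wise pre-training described in step 3.2, reusing the illustrative ConvAutoEncoder1D class above; the optimizer, learning rate, and epoch count are assumptions, not values given in the patent.

```python
import torch
import torch.nn as nn

def pretrain_stacked_cae(subsequences, num_layers=3, epochs=50, lr=1e-3):
    """Pre-train 3 convolutional self-encoders one after another with an MSE loss.

    Each encoder is trained to reconstruct its own input; its pooled features
    then become the input of the next encoder.
    """
    x = torch.as_tensor(subsequences, dtype=torch.float32).unsqueeze(1)  # (M, 1, 24)
    encoders, mse = [], nn.MSELoss()
    for _ in range(num_layers):
        cae = ConvAutoEncoder1D(in_channels=x.shape[1])
        opt = torch.optim.Adam(cae.parameters(), lr=lr)
        for _ in range(epochs):
            _, x_hat = cae(x)
            loss = mse(x_hat, x)           # L1: mean square error between input and output
            opt.zero_grad()
            loss.backward()
            opt.step()
        encoders.append(cae.encoder)       # keep only the encoding part for stacking
        with torch.no_grad():
            x, _ = cae(x)                  # features of this layer feed the next layer
    return encoders
```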
Further, the Softmax classifier is used in step 4 to classify the strain subsequences as follows:
Step 4.1: construct a Softmax classifier. For a given input z, a hypothesis function h_\delta(z) estimates a probability value p(y = l \mid z) for each class l, l \in \{0, 1\}. The hypothesis function h_\delta(z) outputs a t-dimensional vector holding the t estimated probability values, with t = 2:

h_\delta(z^{(i)}) = \begin{bmatrix} p(y^{(i)} = 0 \mid z^{(i)}; \delta) \\ p(y^{(i)} = 1 \mid z^{(i)}; \delta) \end{bmatrix} = \frac{1}{\sum_{j} e^{\delta_j^{T} z^{(i)}}} \begin{bmatrix} e^{\delta_1^{T} z^{(i)}} \\ e^{\delta_2^{T} z^{(i)}} \end{bmatrix}

where \delta_1 and \delta_2 (one parameter vector per class) are the parameters of the Softmax classifier, z^{(i)} is the input, and y^{(i)} is the output. The probability that the Softmax classifier assigns z to class l, with \delta_l denoting the parameter vector of class l, is:

p(y^{(i)} = l \mid z^{(i)}; \delta) = \frac{e^{\delta_l^{T} z^{(i)}}}{\sum_{j} e^{\delta_j^{T} z^{(i)}}}

where z^{(i)} is the input, y^{(i)} is the output, and T denotes the matrix transpose.

Step 4.2: pre-train Softmax. The strain subsequences are fed into the pre-trained stacked convolutional self-encoder to obtain the output features z^{(i)}, and Softmax is pre-trained with z^{(i)} and the corresponding labels y^{(i)}. The loss function is a cross-entropy function:

L_2 = -\frac{1}{M} \sum_{i=1}^{M} \sum_{l=1}^{K} 1\{y^{(i)} = l\} \log p(y^{(i)} = l \mid z^{(i)}; \delta) + \lambda_1 \|\delta\|^2

where \delta denotes all parameters of the Softmax classifier, p(y^{(i)} = l \mid z^{(i)}; \delta) is the output class probability, \lambda_1 is the weight coefficient of the regularization term on the Softmax connection weight matrix and bias vector, M is the total number of input strain subsequences, and K is the number of classes, K = 2.

Step 4.3: fine-tuning. The encoding part of the stacked convolutional self-encoder is connected to the Softmax classifier, giving the network a classification function. The labeled strain subsequences are then used to fine-tune the connection weight matrices and bias vectors of the whole structure formed by the encoding part of the stacked convolutional self-encoder and the Softmax classifier. The loss function during fine-tuning is again a cross-entropy function:

L_3 = -\frac{1}{M} \sum_{i=1}^{M} \sum_{l=1}^{K} 1\{y^{(i)} = l\} \log p(y^{(i)} = l \mid x^{(i)}; \Theta) + \lambda_2 \|\omega\|^2

where \omega denotes the connection weight matrices and bias vectors in the stacked convolutional self-encoder, \Theta comprises \omega and \delta, and \lambda_2 is the weight coefficient of the regularization term on the connection weight matrices and bias vectors of the stacked convolutional self-encoder.

Step 4.4: the Softmax classifier takes the features output by the stacked convolutional self-encoder as its input and outputs class 0 or 1 for each strain subsequence, 0 representing non-crack and 1 representing crack; for a feature z^{(i)} output by the stacked convolutional self-encoder, the class l with the largest probability p(y^{(i)} = l \mid z^{(i)}; \delta) is selected as the class of that feature.
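The classification head and fine-tuning step can be sketched as follows, again in PyTorch and reusing the illustrative pre-trained encoders from the earlier sketches; the Adam optimizer, the use of weight decay for the regularization terms, and the flattening of the encoder output are assumptions.

```python
import torch
import torch.nn as nn

def build_classifier(encoders, subsequence_length=24, num_classes=2):
    """Stack the pre-trained encoding parts and attach a Softmax classification head."""
    backbone = nn.Sequential(*encoders, nn.Flatten())
    with torch.no_grad():
        feat_dim = backbone(torch.zeros(1, 1, subsequence_length)).shape[1]
    # A linear layer realises h_delta(z); CrossEntropyLoss applies log-softmax
    # internally, so the linear outputs are used directly during training.
    return nn.Sequential(backbone, nn.Linear(feat_dim, num_classes))

def fine_tune(model, x, y, epochs=30, lr=1e-4, weight_decay=1e-4):
    """Fine-tune encoder plus Softmax head with a cross-entropy loss (L3)."""
    criterion = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    for _ in range(epochs):
        loss = criterion(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# x: (M, 1, 24) standardized subsequences, y: (M,) labels with 0 = non-crack, 1 = crack.
# predictions = model(x).argmax(dim=1) selects the class with the largest probability.
```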
Compared with the prior art, the invention has the following beneficial technical effects:
the invention realizes data acquisition by the distributed optical fiber sensor and changes the traditional point-to-point sensing mode. Differences between different data are reduced by normalization. Meanwhile, the contradiction between high spatial resolution and low signal-to-noise ratio of the distributed optical fiber sensor is overcome by a method based on a stacked convolution self-encoder. Stacked convolutional autocoder can extract highly robust, distinguishable features for classification in data with low signal-to-noise ratio. The crack detection device has the advantages that the crack detection device is remarkable in crack detection, can detect the micro cracks, and is improved in the detection effect of the micro cracks.
Drawings
FIG. 1 is a schematic flow diagram of the system of the present invention;
FIG. 2 is a schematic diagram of a convolutional self-encoder in the present invention;
FIG. 3 is a schematic diagram of a stacked convolutional auto-encoder of the present invention;
FIG. 4 is a process schematic of the method of the present invention;
FIG. 5 is a diagram illustrating pre-training and fine-tuning in accordance with the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
Referring to FIGS. 1 to 5, a distributed strain crack detection system based on a stacked convolutional self-encoder comprises a strain sequence acquisition module, a strain sequence preprocessing module, a feature self-learning and characterization module based on the stacked convolutional self-encoder, and a Softmax classification and identification module (the overall flow is shown in FIG. 1).
The strain sequence acquisition module is used for acquiring the distributed strain of the structure; the acquired distributed strain is a one-dimensional sequence.
The strain sequence preprocessing module comprises a z-score standardization module, which standardizes the strain sequence to zero-mean, unit-standard-deviation data, and a sliding window module. The sliding window module cuts the standardized strain sequence into a set of strain subsequences of length 24 with a sliding window of length 24 and step size 1. The obtained strain subsequences are labeled according to their cutting positions: the subsequence cut with a crack at its center is labeled as a crack subsequence, the subsequences immediately to its left and right are also labeled as crack subsequences, and the remaining strain subsequences are labeled as non-crack subsequences.
The feature self-learning and characterization module based on the stacked convolutional self-encoder is composed of 3 convolutional self-encoder modules and is used for extracting features from the cut strain subsequences. In each convolutional self-encoder module, the relationship between the input data x and the feature h_2 can be expressed by two formulas:

h_1 = s_f(x \ast W + b)

h_2 = \mathrm{pool}(h_1)

where h_1 is the feature after convolution; \ast denotes convolution; W is the convolution kernel; b is the bias vector; h_2 is the feature output by the encoder; pool denotes the pooling operation; and s_f is the activation function in the encoder. The relationship between the feature h_2 and the output \hat{x} can be expressed by three formulas:

\hat{h}_1 = s_f(h_2 \ast \tilde{W}_1 + \tilde{b}_1)

\hat{h}_2 = \mathrm{upsample}(\hat{h}_1)

\hat{x} = s_g(\hat{h}_2 \ast \tilde{W}_2 + \tilde{b}_2)

where \tilde{W}_1 and \tilde{b}_1 are the convolution kernel and bias vector of the first convolution in the decoding process; \hat{h}_1 is the feature after the first convolution in the decoder; upsample denotes the upsampling operation; \hat{h}_2 is the feature after upsampling; \tilde{W}_2 and \tilde{b}_2 are the convolution kernel and bias vector of the second convolution in the decoding process; and s_g is the activation function in the decoder.

The process of the feature self-learning and characterization module based on the stacked convolutional self-encoder is specifically as follows:

h_{i,1} = s_f(h_{i-1,2} \ast W_i + b_i)

h_{i,2} = \mathrm{pool}(h_{i,1})

where h_{i,1} is the convolved data in the i-th encoder, h_{i,2} is the pooled data in the i-th encoder (which is also the feature output by the i-th encoder, with h_{0,2} taken as the input subsequence x), s_f is the activation function, and W_i and b_i are the convolution kernel and bias vector of the i-th encoder.
The Softmax classification module uses a Softmax classifier to classify the strain subsequences, as follows:
construct a Softmax classifier: for a given input z, a hypothesis function h_\delta(z) estimates a probability value p(y = l \mid z) for each class l, l \in \{0, 1\}. The hypothesis function h_\delta(z) outputs a t-dimensional vector holding the t estimated probability values, with t = 2:

h_\delta(z^{(i)}) = \begin{bmatrix} p(y^{(i)} = 0 \mid z^{(i)}; \delta) \\ p(y^{(i)} = 1 \mid z^{(i)}; \delta) \end{bmatrix} = \frac{1}{\sum_{j} e^{\delta_j^{T} z^{(i)}}} \begin{bmatrix} e^{\delta_1^{T} z^{(i)}} \\ e^{\delta_2^{T} z^{(i)}} \end{bmatrix}

where \delta_1 and \delta_2 (one parameter vector per class) are the parameters of the Softmax classifier, z^{(i)} is the input, and y^{(i)} is the output. The probability that the Softmax classifier assigns z to class l, with \delta_l denoting the parameter vector of class l, is:

p(y^{(i)} = l \mid z^{(i)}; \delta) = \frac{e^{\delta_l^{T} z^{(i)}}}{\sum_{j} e^{\delta_j^{T} z^{(i)}}}

where z^{(i)} is the input and y^{(i)} is the output.
The Softmax classifier takes the features output by the stacked convolutional self-encoder as its input and outputs class 0 or 1 for each strain subsequence, 0 representing non-crack and 1 representing crack; for a feature z^{(i)} output by the stacked convolutional self-encoder, the class l with the largest probability p(y^{(i)} = l \mid z^{(i)}; \delta) is selected as the class of that feature.
A distributed strain crack detection method based on a stacked convolutional self-encoder comprises the following specific steps, as shown in FIG. 5:
1) acquire the strain sequence;
2) standardize the acquired strain sequence with z-score standardization, cut it into strain subsequences with a sliding window of length 24 and step size 1, and label each strain subsequence according to its cutting position:
2.1: subtract the mean of the acquired strain sequence and divide by its standard deviation, obtaining zero-mean, unit-standard-deviation data;
2.2: cut the standardized strain sequence into a set of strain subsequences of length 24 by sliding a window of length 24 with step size 1 along the sequence;
2.3: label the obtained strain subsequences according to their cutting positions: the subsequence cut with a crack at its center is labeled as a crack subsequence, and the subsequences immediately to its left and right are also labeled as crack subsequences;
3) automatically learn features characterizing the strain subsequences with a neural network based on the stacked convolutional self-encoder:
3.1: initialize the stacked convolutional self-encoder, determining its number of layers and number of neurons; randomly initialize the connection weight matrices and bias vectors in the stacked convolutional self-encoder; the number of neurons in the input layer equals 24, the length of a strain subsequence;
3.2: pre-train the stacked convolutional self-encoder, which is composed of 3 convolutional self-encoders; each convolutional self-encoder is pre-trained with the obtained strain subsequences, and the loss function for pre-training a convolutional self-encoder is the mean square error between its input and output:

L_1 = \frac{1}{M} \sum_{m=1}^{M} \left\| x_m - \hat{x}_m \right\|^2

where x is an input strain subsequence, \hat{x} is the reconstruction output by the convolutional self-encoder, M is the number of input strain subsequences, and x_m and \hat{x}_m are the m-th strain subsequence fed to the model and the corresponding m-th reconstructed output, respectively.
4) perform binary classification on the extracted strain subsequence features with a Softmax classifier to complete crack detection:
4.1: construct a Softmax classifier: for a given input z, a hypothesis function h_\delta(z) estimates a probability value p(y = l \mid z) for each class l, l \in \{0, 1\}; the hypothesis function h_\delta(z) outputs a t-dimensional vector holding the t estimated probability values, with t = 2:

h_\delta(z^{(i)}) = \begin{bmatrix} p(y^{(i)} = 0 \mid z^{(i)}; \delta) \\ p(y^{(i)} = 1 \mid z^{(i)}; \delta) \end{bmatrix} = \frac{1}{\sum_{j} e^{\delta_j^{T} z^{(i)}}} \begin{bmatrix} e^{\delta_1^{T} z^{(i)}} \\ e^{\delta_2^{T} z^{(i)}} \end{bmatrix}

where \delta_1 and \delta_2 (one parameter vector per class) are the parameters of the Softmax classifier, z^{(i)} is the input, and y^{(i)} is the output; the probability that the Softmax classifier assigns z to class l, with \delta_l denoting the parameter vector of class l, is:

p(y^{(i)} = l \mid z^{(i)}; \delta) = \frac{e^{\delta_l^{T} z^{(i)}}}{\sum_{j} e^{\delta_j^{T} z^{(i)}}}

where z^{(i)} is the input and y^{(i)} is the output;
4.2: pre-train Softmax: feed the strain subsequences into the pre-trained stacked convolutional self-encoder to obtain the output features z^{(i)}, and pre-train Softmax with z^{(i)} and the corresponding labels y^{(i)}; the loss function is a cross-entropy function:

L_2 = -\frac{1}{M} \sum_{i=1}^{M} \sum_{l=1}^{K} 1\{y^{(i)} = l\} \log p(y^{(i)} = l \mid z^{(i)}; \delta) + \lambda_1 \|\delta\|^2

where \delta denotes all parameters of the Softmax classifier, p(y^{(i)} = l \mid z^{(i)}; \delta) is the output class probability, \lambda_1 is the weight coefficient of the regularization term on the Softmax connection weight matrix and bias vector, M is the total number of input strain subsequences, and K is the number of classes, K = 2;
4.3: fine-tuning: the encoding part of the stacked convolutional self-encoder is connected to the Softmax classifier, giving the network a classification function; the labeled strain subsequences are used to fine-tune the connection weight matrices and bias vectors of the whole structure formed by the encoding part of the stacked convolutional self-encoder and the Softmax classifier; the loss function during fine-tuning is again a cross-entropy function:

L_3 = -\frac{1}{M} \sum_{i=1}^{M} \sum_{l=1}^{K} 1\{y^{(i)} = l\} \log p(y^{(i)} = l \mid x^{(i)}; \Theta) + \lambda_2 \|\omega\|^2

where \omega denotes the connection weight matrices and bias vectors in the stacked convolutional self-encoder, \Theta comprises \omega and \delta, and \lambda_2 is the weight coefficient of the regularization term on the connection weight matrices and bias vectors of the stacked convolutional self-encoder;
4.4: the Softmax classifier takes the features output by the stacked convolutional self-encoder as its input and outputs class 0 or 1 for each strain subsequence, 0 representing non-crack and 1 representing crack; for a feature z^{(i)} output by the stacked convolutional self-encoder, the class l with the largest probability p(y^{(i)} = l \mid z^{(i)}; \delta) is selected as the class of that feature.
Implementation Effects
The optical fiber sensor was first pre-tensioned and then adhered to the surface of a laboratory steel structure with epoxy resin. The two ends of the optical fiber sensor were connected to a BOTDA-based distributed optical fiber sensing system, and the distributed strain of the structure surface along the optical fiber sensor was obtained. With the micro crack detection method based on the stacked convolutional self-encoder, micro cracks with an opening width of 23 μm could be accurately detected from the acquired distributed strain data; the method shows good robustness to environmental noise and is an effective method for detecting micro cracks on the surface of steel structures.
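To tie the pieces together, an end-to-end inference sketch is given below: it maps window-level predictions back to positions along the fibre, and hence along the structure surface. The spatial sampling interval and all function names reuse the illustrative helpers defined earlier and are assumptions, not values taken from the patent.

```python
import torch

def detect_cracks(strain_profile, model, length=24, step=1, sample_spacing_m=0.1):
    """Return approximate crack positions in metres along the fibre.

    strain_profile: raw distributed strain from the BOTDA system.
    model: the fine-tuned encoder plus Softmax classifier.
    sample_spacing_m: assumed spatial sampling interval of the sensing system.
    """
    windows = sliding_windows(zscore(strain_profile), length, step)       # (N, 24)
    x = torch.as_tensor(windows, dtype=torch.float32).unsqueeze(1)        # (N, 1, 24)
    with torch.no_grad():
        predicted = model(x).argmax(dim=1)                                # 1 = crack window
    centres = [i * step + length // 2 for i in range(len(windows))]
    return [c * sample_spacing_m
            for c, p in zip(centres, predicted.tolist()) if p == 1]
```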

Claims (9)

1. A distributed strain micro crack detection system based on a stacked convolutional self-encoder, comprising:
a strain sequence acquisition module, used for acquiring the distributed strain of the structure surface;
a strain sequence preprocessing module, used for applying z-score standardization to the acquired distributed strain and cutting it into strain subsequences;
a feature self-learning and characterization module based on the stacked convolutional self-encoder, composed of 3 convolutional self-encoder modules and used for extracting features from the cut strain subsequences;
and a Softmax classification and identification module, used for performing binary classification on the extracted subsequence features and judging the probability that each subsequence belongs to the crack or non-crack class.
2. The distributed strain micro crack detection system based on a stacked convolutional self-encoder according to claim 1, wherein the strain sequence acquisition module is specifically: an optical fiber sensor is laid on the surface of the structure, and a BOTDA-based distributed optical fiber sensing system is used to collect the distributed strain of the structure surface.
3. The distributed strain micro crack detection system based on a stacked convolutional self-encoder according to claim 1, wherein the strain sequence preprocessing module comprises a z-score standardization module and a sliding window module;
the z-score standardization module standardizes the distributed strain to zero-mean, unit-standard-deviation data;
the sliding window module cuts the standardized strain sequence into a set of strain subsequences of length 24 using a sliding window of length 24 and step size 1.
4. The distributed strain micro crack detection system based on a stacked convolutional self-encoder according to claim 1, wherein the feature self-learning and characterization module based on the stacked convolutional self-encoder is composed of 3 convolutional self-encoder modules for extracting features from the cut strain subsequences, and in each convolutional self-encoder module the relationship between the input data x and the feature h_2 is specifically:

h_1 = s_f(x \ast W + b)

h_2 = \mathrm{pool}(h_1)

where h_1 is the feature after convolution; \ast denotes convolution; W is the convolution kernel; b is the bias vector; h_2 is the feature output by the encoder; pool denotes the pooling operation; s_f is the activation function in the encoder; the relationship between the feature h_2 and the output \hat{x} is specifically:

\hat{h}_1 = s_f(h_2 \ast \tilde{W}_1 + \tilde{b}_1)

\hat{h}_2 = \mathrm{upsample}(\hat{h}_1)

\hat{x} = s_g(\hat{h}_2 \ast \tilde{W}_2 + \tilde{b}_2)

where \tilde{W}_1 and \tilde{b}_1 are the convolution kernel and bias vector of the first convolution in the decoding process; \hat{h}_1 is the feature after the first convolution in the decoder; upsample denotes the upsampling operation; \hat{h}_2 is the feature after upsampling; \tilde{W}_2 and \tilde{b}_2 are the convolution kernel and bias vector of the second convolution in the decoding process; s_g is the activation function in the decoder;
the process of the feature self-learning and characterization module based on the stacked convolutional self-encoder is specifically:

h_{i,1} = s_f(h_{i-1,2} \ast W_i + b_i)

h_{i,2} = \mathrm{pool}(h_{i,1})

where h_{i,1} is the convolved data in the i-th encoder, h_{i,2} is the pooled data in the i-th encoder, which is also the feature output by the i-th encoder, with h_{0,2} taken as the input subsequence x, s_f is the activation function, and W_i and b_i are the convolution kernel and bias vector of the i-th encoder.
5. A distributed strain crack detection method based on a stacked convolutional self-encoder, comprising the following steps:
step 1: collect the distributed strain of the structure surface;
step 2: apply z-score standardization to the acquired distributed strain and cut it into strain subsequences;
step 3: automatically learn features characterizing the strain subsequences using a neural network based on the stacked convolutional self-encoder;
step 4: perform binary classification on the extracted strain subsequence features with a Softmax classifier to complete crack detection.
6. The distributed strain crack detection method based on a stacked convolutional self-encoder according to claim 5, wherein the distributed strain acquisition in step 1 is specifically: the optical fiber sensor is adhered to the surface of the structure with epoxy resin, the two ends of the optical fiber are connected to a BOTDA-based distributed optical fiber sensing system, the system measures the Brillouin frequency shift of the optical fiber using two light sources, pump light and probe light, and the distributed strain of the structure surface is obtained from the linear relationship between Brillouin frequency shift and strain.
7. The distributed strain crack detection method based on a stacked convolutional self-encoder according to claim 5, wherein step 2 specifically comprises:
step 2.1: subtract the mean of the acquired distributed strain and divide by its standard deviation, obtaining zero-mean, unit-standard-deviation data;
step 2.2: cut the standardized strain sequence into a set of strain subsequences of length 24 with a sliding window of length 24 and step size 1;
step 2.3: label the obtained strain subsequences according to their cutting positions: the subsequence cut with a crack at its center is labeled as a crack subsequence, the subsequences immediately to its left and right are also labeled as crack subsequences, and the remaining strain subsequences are labeled as non-crack subsequences.
8. The distributed strain crack detection method based on a stacked convolutional self-encoder according to claim 5, wherein the specific process of automatically learning features characterizing the strain subsequences with the stacked-convolutional-self-encoder-based neural network in step 3 is:
step 3.1: initialize the stacked convolutional self-encoder, determining its number of layers and number of neurons, randomly initialize the connection weight matrices and bias vectors in the stacked convolutional self-encoder, and set the number of neurons in the input layer equal to the length of a strain subsequence;
step 3.2: pre-train the stacked convolutional self-encoder, which is composed of 3 convolutional self-encoders; each convolutional self-encoder is pre-trained with the obtained strain subsequences, and the loss function for pre-training a convolutional self-encoder is the mean square error L_1 between its input and output:

L_1 = \frac{1}{M} \sum_{m=1}^{M} \left\| x_m - \hat{x}_m \right\|^2

where x is an input strain subsequence, \hat{x} is the reconstruction output by the convolutional self-encoder, M is the number of input strain subsequences, and x_m and \hat{x}_m are the m-th strain subsequence fed to the model and the corresponding m-th reconstructed output, respectively.
9. The distributed strain crack detection method based on a stacked convolutional self-encoder according to claim 5, wherein a Softmax classifier is used in step 4 to classify the strain subsequences as follows:
step 4.1: construct a Softmax classifier: for a given input z, a hypothesis function h_\delta(z) estimates a probability value p(y = l \mid z) for each class l, l \in \{0, 1\}; the hypothesis function h_\delta(z) outputs a t-dimensional vector holding the t estimated probability values, with t = 2:

h_\delta(z^{(i)}) = \begin{bmatrix} p(y^{(i)} = 0 \mid z^{(i)}; \delta) \\ p(y^{(i)} = 1 \mid z^{(i)}; \delta) \end{bmatrix} = \frac{1}{\sum_{j} e^{\delta_j^{T} z^{(i)}}} \begin{bmatrix} e^{\delta_1^{T} z^{(i)}} \\ e^{\delta_2^{T} z^{(i)}} \end{bmatrix}

where \delta_1 and \delta_2 (one parameter vector per class) are the parameters of the Softmax classifier, z^{(i)} is the input, and y^{(i)} is the output; the probability that the Softmax classifier assigns z to class l, with \delta_l denoting the parameter vector of class l, is:

p(y^{(i)} = l \mid z^{(i)}; \delta) = \frac{e^{\delta_l^{T} z^{(i)}}}{\sum_{j} e^{\delta_j^{T} z^{(i)}}}

where z^{(i)} is the input, y^{(i)} is the output, and T denotes the matrix transpose;
step 4.2: pre-train Softmax: feed the strain subsequences into the pre-trained stacked convolutional self-encoder to obtain the output features z^{(i)}, and pre-train Softmax with z^{(i)} and the corresponding labels y^{(i)}; the loss function is a cross-entropy function:

L_2 = -\frac{1}{M} \sum_{i=1}^{M} \sum_{l=1}^{K} 1\{y^{(i)} = l\} \log p(y^{(i)} = l \mid z^{(i)}; \delta) + \lambda_1 \|\delta\|^2

where p(y^{(i)} = l \mid z^{(i)}; \delta) is the output class probability, \lambda_1 is the weight coefficient of the regularization term on the Softmax connection weight matrix and bias vector, M is the total number of input strain subsequences, and K is the number of classes, K = 2;
step 4.3: fine-tune the connection weight matrices and bias vectors of the whole structure formed by the encoding part of the stacked convolutional self-encoder and the Softmax classifier using the labeled strain subsequences, the loss function during fine-tuning being a cross-entropy function:

L_3 = -\frac{1}{M} \sum_{i=1}^{M} \sum_{l=1}^{K} 1\{y^{(i)} = l\} \log p(y^{(i)} = l \mid x^{(i)}; \Theta) + \lambda_2 \|\omega\|^2

where \omega denotes the connection weight matrices and bias vectors in the stacked convolutional self-encoder, \Theta comprises \omega and \delta, and \lambda_2 is the weight coefficient of the regularization term on the connection weight matrices and bias vectors of the stacked convolutional self-encoder;
step 4.4: the Softmax classifier takes the features output by the stacked convolutional self-encoder as its input and outputs class 0 or 1 for each strain subsequence, 0 representing non-crack and 1 representing crack; for a feature z^{(i)} output by the stacked convolutional self-encoder, the class l with the largest probability p(y^{(i)} = l \mid z^{(i)}; \delta) is selected as the class of that feature.
CN201910974481.9A 2019-10-14 2019-10-14 Distributed strain micro crack detection system and method based on stacked convolution self-encoder Active CN110738168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910974481.9A CN110738168B (en) 2019-10-14 2019-10-14 Distributed strain micro crack detection system and method based on stacked convolution self-encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910974481.9A CN110738168B (en) 2019-10-14 2019-10-14 Distributed strain micro crack detection system and method based on stacked convolution self-encoder

Publications (2)

Publication Number Publication Date
CN110738168A CN110738168A (en) 2020-01-31
CN110738168B true CN110738168B (en) 2023-02-14

Family

ID=69268869

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910974481.9A Active CN110738168B (en) 2019-10-14 2019-10-14 Distributed strain micro crack detection system and method based on stacked convolution self-encoder

Country Status (1)

Country Link
CN (1) CN110738168B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510205B (en) * 2020-04-21 2022-07-12 北京邮电大学 Optical cable fault positioning method, device and equipment based on deep learning
CN111754445B (en) * 2020-06-02 2022-03-18 国网湖北省电力有限公司宜昌供电公司 Coding and decoding method and system for optical fiber label with hidden information

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104021373B (en) * 2014-05-27 2017-02-15 江苏大学 Semi-supervised speech feature variable factor decomposition method
CN108932480B (en) * 2018-06-08 2022-03-15 电子科技大学 Distributed optical fiber sensing signal feature learning and classifying method based on 1D-CNN
CN108876796A (en) * 2018-06-08 2018-11-23 长安大学 A kind of lane segmentation system and method based on full convolutional neural networks and condition random field
CN109389171B (en) * 2018-10-11 2021-06-25 云南大学 Medical image classification method based on multi-granularity convolution noise reduction automatic encoder technology

Also Published As

Publication number Publication date
CN110738168A (en) 2020-01-31

Similar Documents

Publication Publication Date Title
CN112233091B (en) Wind turbine blade image damage detection and positioning method
CN108074231B (en) Magnetic sheet surface defect detection method based on convolutional neural network
CN112884747B (en) Automatic bridge crack detection system integrating cyclic residual convolution and context extractor network
CN108918527A (en) A kind of printed matter defect inspection method based on deep learning
CN110738168B (en) Distributed strain micro crack detection system and method based on stacked convolution self-encoder
CN110060251A (en) A kind of building surface crack detecting method based on U-Net
CN109583295B (en) Automatic detection method for switch machine notch based on convolutional neural network
CN112949817A (en) Water supply pipe leakage edge equipment detection method based on time convolution neural network
CN113866455A (en) Bridge acceleration monitoring data anomaly detection method, system and device based on deep learning
CN103745238A (en) Pantograph identification method based on AdaBoost and active shape model
CN110715929B (en) Distributed strain micro crack detection system and method based on stacking self-encoder
CN114782753A (en) Lung cancer histopathology full-section classification method based on weak supervision learning and converter
CN114714145A (en) Method for enhancing, comparing, learning and monitoring tool wear state by using Gelam angular field
CN114660180A (en) Sound emission and 1D CNNs-based light-weight health monitoring method and system for medium and small bridges
CN114519293A (en) Cable body fault identification method based on hand sample machine learning model
CN117252459A (en) Fruit quality evaluation system based on deep learning
CN106990066B (en) Method and device for identifying coal types
Liu et al. Automatic terahertz recognition of hidden defects in layered polymer composites based on a deep residual network with transfer learning
CN115452957B (en) Small sample metal damage identification method based on attention prototype network
CN112446612B (en) Assessment method of damage assessment system of soft rigid arm mooring system connection structure
CN117830588A (en) System and method for detecting bamboo chopstick production based on artificial intelligence and for automatic assembly line
KR102499986B1 (en) Method for detecting micro plastic
CN116718382A (en) Bearing early fault online detection method based on contrast learning
CN117030722A (en) Method and system for detecting damage of carbon fiber composite core overhead conductor
CN117893812A (en) Method for rapidly identifying degradable plastic and recyclable device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant