CN116049937A - Cross-domain bridge damage identification method based on deep learning - Google Patents
- Publication number
- CN116049937A (application CN202211650213.XA)
- Authority
- CN
- China
- Prior art keywords
- bridge
- domain
- layer
- finite element
- element model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/13—Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/23—Design optimisation, verification or simulation using finite element methods [FEM] or finite difference methods [FDM]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a cross-domain bridge damage identification method based on deep learning, which comprises the following steps: building a vehicle-bridge finite element model; simulating the real bridge structure by adding uncertainty to the bridge finite element model within the vehicle-bridge finite element model; data preprocessing; building a generative adversarial neural network; training and generation; building a dynamic domain-adversarial adaptive network; training; and using the trained feature extractor F_q and label classifier F_y to detect the target domain dataset and obtain the corresponding detection results. With this cross-domain bridge damage identification method based on deep learning, the bridge displacement response signals generated as a two-axle vehicle crosses the bridge at constant speed can be expanded by simulation through a deep generative adversarial network; the source-domain and target-domain features are then projected into the same feature space through the dynamic adversarial adaptation network, realizing cross-domain damage feature extraction and adaptation and completing bridge damage identification under cross-domain conditions.
Description
Technical Field
The invention relates to a bridge detection technology, in particular to a cross-domain bridge damage identification method based on deep learning.
Background
Because deep learning techniques have powerful and efficient capabilities for learning from and predicting on large datasets, data-driven mining techniques have been developed for structural damage detection in the field of structural health monitoring. That is, the structural response is used as the input for feature mining; deep learning can mine damage-sensitive features from massive data without specific structural information, and is more effective than many traditional methods.
However, data-driven deep learning approaches are far from fully developed for structural damage detection. Their main challenge is the lack of labeled damage data from actual structures, because the structural conditions are not known in advance. Some researchers therefore build finite element models of real structures to generate labeled damage data covering all possible damage scenarios for network training. However, a finite element model is affected by the real environment and by parameters such as boundary conditions, which are difficult to determine and model.
When a deep learning model trained based on a finite element model is applied to an actual structure, differences between the finite element model and the actual structure may cause performance degradation.
For this reason, the prior art developed unsupervised domain adaptation, which handles the data distribution differences between the source domain and the target domain: it learns knowledge from the labeled source domain and applies it intelligently to the unlabeled target domain. However, unsupervised domain adaptation faces another challenge: it requires a large amount of target-domain data. In practice, such a large amount of data cannot be provided, which is a major bottleneck preventing unsupervised domain adaptation from being applied in real scenarios.
Disclosure of Invention
In order to solve these problems, the invention aims to provide a cross-domain bridge damage identification method based on deep learning. Based on dynamic domain adaptation after data generalization, it solves the problem that a real scenario cannot provide a large amount of target-domain damage data in bridge detection, allows knowledge from the source domain to be dynamically migrated to the target domain, and closes the gap between the bridge finite element model and the actual bridge structure caused by the environment or by modeling errors, thereby obtaining a damage identification model that has good identification accuracy and can be applied in practice.
In order to achieve the above purpose, the invention provides a cross-domain bridge damage identification method based on deep learning, which comprises the following steps:
step 1: establishing a vehicle-bridge finite element model;
step 2: simulating the real bridge structure by adding uncertainty to the bridge finite element model within the vehicle-bridge finite element model;
step 3: data preprocessing
For the obtained source domain data and target domain data, carry out normalization and interpolation processing, keeping the spatial dimensions of all samples consistent, to obtain the processed source domain data and target domain data;
step 4: build a generative adversarial neural network composed of a generator G_θ and a critic D_ω;
step 5: training and generating;
step 6: construct a dynamic domain-adversarial adaptive network composed of a feature extractor F_q, a label predictor F_y, a global domain discriminator F_g, and a local domain discriminator F_l;
step 7: training;
step 8: use the feature extractor F_q and the label classifier F_y with the optimal parameters obtained by training to detect the target domain dataset and obtain the corresponding detection results.
Preferably, the step 1 specifically includes the following steps:
Step 1.1: determine the bridge parameters, construct the bridge finite element model from them, and number the divided units of the bridge finite element model sequentially as [1, 2, 3, …, C];
Step 1.2: determine the two-axle vehicle parameters, construct the vehicle finite element model from them, and set the uncertain vehicle weight m_v = m_v0 + a_0×sin(t_0) and the uncertain vehicle speed v = v_0 + a_1×sin(t_1), where a_0, a_1 are the change amplitudes and t_0, t_1 ∈ [0, 2π];
Step 1.3: in the undamaged state of the bridge, the vehicle with the uncertain weight m_v and the uncertain speed v passes over the bridge, and the bridge displacement response is calculated using the Newmark-β method. Repeat step 1.3 n times to obtain a sample set and construct its labels;
Step 1.4: simulate bridge damage by reducing the stiffness of a divided unit of the bridge finite element model to δ×E_0×I_0, where δ is the reduction coefficient;
Carry out the stiffness reduction on unit No. 1 of the bridge and repeat the bridge displacement response acquisition of step 1.3 n times to obtain a sample set and construct its labels, where each sample label is the number of the damaged unit;
Step 1.5: repeat step 1.4 until all divided units of the bridge have been processed, finally obtaining the source domain dataset and its label set.
Preferably, the bridge parameters in step 1.1 include the bridge moment of inertia I_0, modulus of elasticity E_0, density per linear meter ρ_0, and length L_b;
The two-axle vehicle parameters in step 1.2 include the total weight m_v0, the two wheelbases d_1, d_2, and the running speed v_0.
Preferably, in step 1.1 the moment of inertia I_0 = 1.3901, the modulus of elasticity E_0 = 3.5×10^10 Pa, the density per linear meter ρ_0 = 18358, and the length L_b = 25 m; the number of divided units C = 10, and the units of the bridge finite element model are numbered [1, 2, 3, …, 10] in sequence;
The change amplitudes in step 1.2 are a_0 = 50 and a_1 = 0.1;
The sampling frequency in step 1.3 is 500 Hz;
The reduction coefficient in step 1.4 is δ = 0.75.
Preferably, the step 2 specifically includes the following steps:
Step 2.1: add five uncertainties:
(1) Simulate the influence of temperature on the bridge by changing the elastic modulus of the bridge finite element model, i.e. E′ = E_0×(1+ζ_1), where ζ_1 ∈ (−0.05, 0.05);
(2) Simulate the boundary condition form of the bridge's elastic support by setting the vertical stiffness E_v and rotational stiffness E_r of the boundary nodes of the bridge finite element model;
(3) Simulate the geometric error of bridge modeling by changing the moment of inertia of the bridge finite element model, i.e. I′ = I_0×(1+ζ_2), where ζ_2 ∈ (−0.03, 0.03); at the same time, simulate the material error of bridge modeling by changing the density of the bridge finite element model, i.e. ρ′ = ρ_0×(1+ζ_3), where ζ_3 ∈ (−0.02, 0.02);
(4) Simulate the roughness of the bridge deck by setting the road surface roughness grade A_0 of the bridge finite element model;
Carry out the method of step 1.3 on the bridge finite element model with the above four uncertainties added to obtain the bridge displacement response, and then add to the obtained displacement response: (5) Gaussian noise γ ~ N(0, σ²) with mean 0 and variance σ²;
Step 2.2: repeat steps 1.3 to 1.4 to obtain displacement response sample sets for the undamaged bridge and for damage in each unit, finally obtaining the target domain dataset.
Preferably, in step 4:
The generator G_θ comprises, in order, l_1 linear layers and c_1 transposed convolution layers; a linear activation layer and a normalization layer are added after each linear layer and each transposed convolution layer except the last one. The transposed convolution layers have kernel size k_1, h_1 convolution kernels, and stride s_1;
The critic D_ω comprises, in order, c_2 convolution layers and l_2 linear layers; an activation layer, a regularization layer, and a max pooling layer are added after each layer except the last linear layer. The convolution layers have kernel size k_2, h_2 convolution kernels, and stride s_2, and the max pooling layer has kernel size k_3 and stride s_3.
Preferably, the step 5 specifically includes the following steps:
Step 5.1: training phase
The input to the generative adversarial network is the target domain data. Randomly generate a noise vector z following a Gaussian distribution and input it to the generator G_θ to obtain a generated sample;
Step 5.3: input the target-domain sample x, the interpolation sample, and the generated sample to the critic D_ω, and calculate the loss function L:
Step 5.4: back-propagate and apply gradient descent to the loss function L using the adaptive optimizer Adam, solving for the current optimal parameters of the critic D_ω in the loss function;
Step 5.5: randomly generate a batch of p noise vectors following the Gaussian distribution and calculate the loss function F:
Step 5.6: back-propagate and apply gradient descent to the loss function F using the adaptive optimizer Adam, solving for the current optimal parameters of the generator G_θ in the loss function;
Step 5.7: repeat steps 5.3–5.6 until the loss functions L and F converge to their optimal values, obtaining the optimal parameters of the critic D_ω and the generator G_θ;
Step 5.8: generation phase
Randomly generate n Gaussian-distributed noise vectors and input them to the trained generator, obtaining an extended target-domain sample set;
Step 5.9: repeat steps 5.1–5.8 with the target domain data of each category as input, obtaining a trained generator and an expanded dataset for each category, and finally the extended dataset.
Preferably, in step 6:
The feature extractor F_q comprises, in order, c_3 convolution layers; an activation layer, a regularization layer, and a max pooling layer are added after each convolution layer. The convolution layers have kernel size k_4 and stride s_4, and the max pooling layer has kernel size k_5 and stride s_5;
The label predictor F_y comprises, in order, l_3 linear layers;
The global domain discriminator F_g comprises, in order, l_4 linear layers: two linear layers with an activation layer added between them;
The local domain discriminator F_l comprises, in order, l_5 linear layers: two linear layers with an activation layer added between them.
Preferably, the step 7 specifically includes the following steps:
Step 7.1: randomly extract a batch of n_s samples from the source domain dataset, pass them sequentially through the feature extractor F_q and the label classifier F_y, and calculate the label loss function L_y:
Step 7.2: set the domain labels;
Step 7.3: randomly extract a batch of n_s samples from the source domain dataset and a batch of n_t samples from the extended target domain dataset, pass them sequentially through the feature extractor F_q and the global domain discriminator F_g, and calculate the global domain loss function L_g:
Step 7.4: randomly extract a batch of n_s samples from the source domain dataset and a batch of n_t samples from the extended target domain dataset, pass them sequentially through the feature extractor F_q and the local domain discriminator F_l, and calculate the local loss function of each class and the total local loss function L_c via equations (6) and (7):
Step 7.5: calculate the A-distances of the global domain discriminator and the local domain discriminator using equations (8) and (9), respectively, to obtain d_g and d_l,
d_g = 2(1 − 2L_g) (8)
Step 7.6: calculate the dynamic factor κ:
Step 7.7: combine the above loss functions and calculate the objective function M:
where θ_q, θ_y, θ_g, θ_l are the parameters of the feature extractor F_q, the label classifier F_y, the global domain discriminator F_g, and the local domain discriminator F_l, respectively;
Step 7.8: set a stochastic gradient descent (SGD) optimizer, back-propagate and apply gradient descent to the objective function M, and solve for the current optimal parameters of the feature extractor F_q, label classifier F_y, global domain discriminator F_g, and local domain discriminator F_l;
Step 7.9: repeat steps 7.1–7.8 until the objective function M converges to its optimum, obtaining the optimal parameters of the feature extractor F_q, label classifier F_y, global domain discriminator F_g, and local domain discriminator F_l.
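The A-distance and dynamic-factor computation of steps 7.5–7.6 can be sketched as follows. Equation (8) is taken from the text; the form κ = d_g / (d_g + mean(d_l)) is the one used by dynamic adversarial adaptation networks and is an assumption here, since equations (9)–(10) are not reproduced in this text.

```python
def a_distance(domain_loss):
    """Equation (8): d = 2 * (1 - 2 * L), mapping a domain-discriminator
    loss L to a proxy A-distance."""
    return 2.0 * (1.0 - 2.0 * domain_loss)

def dynamic_factor(global_loss, local_losses):
    """Step 7.6 sketch: weigh global vs. per-class local alignment.
    kappa = d_g / (d_g + mean(d_l)) is assumed, following dynamic
    adversarial adaptation networks."""
    d_g = a_distance(global_loss)
    d_l = [a_distance(loss) for loss in local_losses]
    mean_d_l = sum(d_l) / len(d_l)
    return d_g / (d_g + mean_d_l)

# A perfectly confused discriminator (loss 0.5) gives distance 0; equal
# global and local distances give kappa = 0.5.
kappa = dynamic_factor(0.25, [0.25, 0.25, 0.25])
```

When global and per-class distances are equal, the factor balances both terms; as the global distance dominates, κ shifts weight toward marginal (global) alignment.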
Preferably, in step 4, l_1 = 1 and c_1 = 3; the transposed convolution kernel size k_1 = 3, the numbers of convolution kernels h_1 are 64, 16, 3 in sequence, and the stride s_1 = 2; the first two activation functions are LeakyReLU functions and the last activation function is a Sigmoid function;
c_2 = 2; the convolution kernel size k_2 = 16, the numbers of convolution kernels h_2 are 128, 64 in sequence, and the stride s_2 = 2; the max pooling layer has kernel size k_3 = 4 and stride s_3 = 4; l_2 = 2; the activation function is a LeakyReLU function, and the regularization layer is a Dropout layer;
In step 6, c_3 = 5; the convolution layer parameter k_4 takes the values 16, 64, 128, 256, 512 in sequence, with stride s_4 = 1; the max pooling layer has kernel size k_5 = 4 and stride s_5 = 2; the activation function is a LeakyReLU function.
Therefore, the cross-domain bridge damage identification method based on deep learning has the following beneficial effects:
1. The samples generated by the generative adversarial network are diverse, resistant to degenerating into noise, and match the dimensions and distribution of the target samples, overcoming the deep learning network's need for a large amount of data in real scenarios.
2. The adopted dynamic domain adaptation network can dynamically align the joint distributions of the source domain and target domain data, and can adapt to each real-world application scenario of bridge damage identification.
3. A large number of sensors is not required to obtain responses at many positions; the damage characteristics of the bridge can be obtained from responses at only a few positions, greatly reducing the cost of bridge damage detection.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a finite element model diagram of a vehicle-bridge of the present invention;
FIG. 3 is a diagram of the generative adversarial network of the present invention;
FIG. 4 is a T-SNE diagram of the generated data and the target domain data of the present invention;
FIG. 5 is a diagram of the dynamic domain-adversarial adaptive network of the present invention;
FIG. 6 is a diagram of the bridge damage detection results of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings, and it should be noted that, while the present embodiment provides a detailed implementation and a specific operation process on the premise of the present technical solution, the protection scope of the present invention is not limited to the present embodiment.
The cross-domain bridge damage identification method based on deep learning comprises the following steps:
step 1: establishing a vehicle-bridge finite element model;
preferably, the step 1 specifically includes the following steps:
Step 1.1: determine the bridge parameters, construct the bridge finite element model from them, and number the divided units of the bridge finite element model sequentially as [1, 2, 3, …, C];
Preferably, the bridge parameters in step 1.1 include the bridge moment of inertia I_0, modulus of elasticity E_0, density per linear meter ρ_0, and length L_b;
Preferably, in step 1.1 the moment of inertia I_0 = 1.3901, the modulus of elasticity E_0 = 3.5×10^10 Pa, the density per linear meter ρ_0 = 18358, and the length L_b = 25 m; the number of divided units C = 10, and the units of the bridge finite element model are numbered [1, 2, 3, …, 10] in sequence;
Step 1.2: determine the two-axle vehicle parameters, construct the vehicle finite element model from them, and set the uncertain vehicle weight m_v = m_v0 + a_0×sin(t_0) and the uncertain vehicle speed v = v_0 + a_1×sin(t_1), where a_0, a_1 are the change amplitudes and t_0, t_1 ∈ [0, 2π]; the change amplitudes in step 1.2 are a_0 = 50 and a_1 = 0.1;
The two-axle vehicle parameters in step 1.2 include the total weight m_v0, the two wheelbases d_1, d_2, and the running speed v_0.
In this embodiment, the total weight m_v0 = 18000 kg, the two wheelbases d_1 = 1.95 m and d_2 = 1.05 m, and the running speed v_0 = 10 m/s.
Step 1.3: in the undamaged state of the bridge, the vehicle with the uncertain weight m_v and the uncertain speed v passes over the bridge, and the bridge displacement response is calculated using the Newmark-β method. The invention extracts the displacement responses at the nodes of units 1, 5, and 9 as one sample, and repeats step 1.3 n times (n = 1250 in this embodiment) to obtain a sample set and construct its labels.
The sampling frequency in step 1.3 is 500 Hz;
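Step 1.3 computes the displacement response with the Newmark-β method. Below is a minimal single-degree-of-freedom sketch of that integrator (average-acceleration variant, β = 0.25, γ = 0.5); the patent's vehicle-bridge model is of course a larger multi-degree-of-freedom system, so this only illustrates the time-stepping scheme.

```python
import math

def newmark_beta(m, c, k, force, dt, u0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Newmark-beta time integration for a single-DOF system
    m*u'' + c*u' + k*u = F(t); returns the displacement history."""
    a_init = (force[0] - c * v0 - k * u0) / m
    u, v, a = [u0], [v0], [a_init]
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for f in force[1:]:
        ui, vi, ai = u[-1], v[-1], a[-1]
        p_eff = (f
                 + m * (ui / (beta * dt ** 2) + vi / (beta * dt)
                        + (1 / (2 * beta) - 1) * ai)
                 + c * (gamma * ui / (beta * dt) + (gamma / beta - 1) * vi
                        + dt * (gamma / (2 * beta) - 1) * ai))
        un = p_eff / k_eff
        an = (un - ui) / (beta * dt ** 2) - vi / (beta * dt) - (1 / (2 * beta) - 1) * ai
        vn = vi + dt * ((1 - gamma) * ai + gamma * an)
        u.append(un); v.append(vn); a.append(an)
    return u

# Free-vibration check: an undamped oscillator with natural period 1 s,
# released from u0 = 1, should return near u = 1 after one period.
w = 2 * math.pi
u = newmark_beta(m=1.0, c=0.0, k=w * w, force=[0.0] * 1001, dt=0.001, u0=1.0)
```

The average-acceleration variant is unconditionally stable, which is why it is a common default for vehicle-bridge interaction simulations.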
step 1.4: by compromising the cell stiffness delta E divided in the bridge finite element model 0 ×I 0 Simulating bridge damage, wherein delta is a reduction coefficient; the reduction coefficient δ=0.75 in step 1.4.
Stiffness reduction is carried out on a number 1 unit divided by the bridge, and the method for obtaining the bridge displacement response in the step 1.3 is repeated n times (n=1250 in the embodiment) to obtain a sample setAnd construct tag->Wherein the sample tag is the injured unit number, i.e. +.>
Step 1.5: repeating step 1.4 until all units of bridge division are completed, and finally obtaining a source domain data setAnd tag set->
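The sample/label bookkeeping of steps 1.3–1.5 can be sketched as follows, assuming label 0 for the undamaged state and label c for damage in unit c (the patent does not state the numeric label convention explicitly), with the embodiment's n = 1250 samples per class and C = 10 units:

```python
def build_source_labels(n_per_class, n_units):
    """Sketch of the label construction in steps 1.3-1.5: n_per_class
    undamaged samples labeled 0, then n_per_class samples per damaged
    unit labeled with that unit's number (1..n_units)."""
    labels = [0] * n_per_class                 # step 1.3: undamaged state
    for unit in range(1, n_units + 1):         # steps 1.4-1.5: each unit damaged
        labels += [unit] * n_per_class
    return labels

y = build_source_labels(n_per_class=1250, n_units=10)
# 11 classes in total: the undamaged state plus 10 damage locations
```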
Step 2: simulating a bridge real structure by adding uncertainty in a bridge finite element model in a vehicle-bridge finite element model;
preferably, the step 2 specifically includes the following steps:
Step 2.1: add five uncertainties:
(1) Simulate the influence of temperature on the bridge by changing the elastic modulus of the bridge finite element model, i.e. E′ = E_0×(1+ζ_1), where ζ_1 ∈ (−0.05, 0.05); in this embodiment ζ_1 = 0.04;
(2) Simulate the boundary condition form of the bridge's elastic support by setting the vertical stiffness E_v and rotational stiffness E_r of the boundary nodes of the bridge finite element model; in this embodiment the vertical stiffness of the boundary nodes E_v = 1.95×10^11 N/m and the rotational stiffness E_r = 1800 N·m;
(3) Simulate the geometric error of bridge modeling by changing the moment of inertia of the bridge finite element model, i.e. I′ = I_0×(1+ζ_2), where ζ_2 ∈ (−0.03, 0.03); in this embodiment ζ_2 = −0.09; at the same time, simulate the material error of bridge modeling by changing the density of the bridge finite element model, i.e. ρ′ = ρ_0×(1+ζ_3), where ζ_3 ∈ (−0.02, 0.02); in this embodiment ζ_3 = 0.02;
(4) Simulate the roughness of the bridge deck by setting the road surface roughness grade A_0 of the bridge finite element model; in this embodiment the road surface roughness grade A_0 = 16;
Carry out the method of step 1.3 on the bridge finite element model with the above four uncertainties added to obtain the bridge displacement response, and then add to the obtained displacement response: (5) Gaussian noise γ ~ N(0, σ²) with mean 0 and variance σ²;
Step 2.2: repeat steps 1.3 to 1.4 to obtain displacement response sample sets for the undamaged bridge and for damage in each unit, finally obtaining the target domain dataset. Since only a small number of samples exist in reality, the number of target domain samples in this embodiment is m = 250.
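Drawing one realization of uncertainties (1), (3), and (5) can be sketched as below. The intervals follow the text; sampling ζ uniformly within those intervals is an assumption, since the patent only states the ranges (the embodiment fixes single values instead).

```python
import random

# Nominal bridge parameters from the embodiment (E0 in Pa)
E0, I0, RHO0 = 3.5e10, 1.3901, 18358.0

def sample_uncertain_bridge(rng=random):
    """One realization of uncertainties (1) and (3): temperature via the
    elastic modulus, geometric error via the moment of inertia, material
    error via the density. Uniform sampling within the stated ranges is
    an assumption."""
    z1 = rng.uniform(-0.05, 0.05)   # temperature effect on E
    z2 = rng.uniform(-0.03, 0.03)   # geometric modeling error on I
    z3 = rng.uniform(-0.02, 0.02)   # material modeling error on rho
    return E0 * (1 + z1), I0 * (1 + z2), RHO0 * (1 + z3)

def add_gaussian_noise(signal, sigma, rng=random):
    """Uncertainty (5): additive noise gamma ~ N(0, sigma^2) on the response."""
    return [x + rng.gauss(0.0, sigma) for x in signal]

E, I, rho = sample_uncertain_bridge()
noisy = add_gaussian_noise([0.0] * 100, sigma=1e-4)
```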
Step 3: data preprocessing
For the obtained source domain data and target domain data, carry out normalization and interpolation processing, keeping the spatial dimensions of all samples consistent, to obtain the processed source domain data and target domain data.
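A minimal sketch of this preprocessing step, assuming min-max normalization and linear interpolation onto a common length (the patent fixes neither the normalization scheme nor the interpolation method):

```python
import math

def normalize(signal):
    """Min-max normalize a 1-D displacement signal to [0, 1]."""
    lo, hi = min(signal), max(signal)
    return [(x - lo) / (hi - lo) for x in signal]

def resample(signal, target_len):
    """Linearly interpolate a signal onto target_len evenly spaced points,
    so source- and target-domain samples share the same time dimension."""
    n = len(signal)
    out = []
    for k in range(target_len):
        pos = k * (n - 1) / (target_len - 1)
        i = min(int(pos), n - 2)
        frac = pos - i
        out.append(signal[i] * (1 - frac) + signal[i + 1] * frac)
    return out

# Example: bring two responses recorded at different rates to a common length
src = [math.sin(0.01 * t) for t in range(1000)]
tgt = [math.sin(0.013 * t) for t in range(770)]
src_p = normalize(resample(src, 512))
tgt_p = normalize(resample(tgt, 512))
```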
step 4: build a generative adversarial neural network composed of a generator G_θ and a critic D_ω;
preferably, in step 4:
The generator G_θ comprises, in order, l_1 linear layers and c_1 transposed convolution layers; a linear activation layer and a normalization layer are added after each linear layer and each transposed convolution layer except the last one. The transposed convolution layers have kernel size k_1, h_1 convolution kernels, and stride s_1;
Preferably, in step 4, l_1 = 1 and c_1 = 3; the transposed convolution kernel size k_1 = 3, the numbers of convolution kernels h_1 are 64, 16, 3 in sequence, and the stride s_1 = 2; the first two activation functions are LeakyReLU functions and the last activation function is a Sigmoid function;
The critic D_ω comprises, in order, c_2 convolution layers and l_2 linear layers; an activation layer, a regularization layer, and a max pooling layer are added after each layer except the last linear layer. The convolution layers have kernel size k_2, h_2 convolution kernels, and stride s_2, and the max pooling layer has kernel size k_3 and stride s_3.
c_2 = 2; the convolution kernel size k_2 = 16, the numbers of convolution kernels h_2 are 128, 64 in sequence, and the stride s_2 = 2; the max pooling layer has kernel size k_3 = 4 and stride s_3 = 4; l_2 = 2; the activation function is a LeakyReLU function, and the regularization layer is a Dropout layer;
step 5: training and generating;
preferably, the step 5 specifically includes the following steps:
Step 5.1: training phase
The input to the generative adversarial network is the target domain data. Randomly generate a noise vector z following a Gaussian distribution; in this embodiment the noise vector length is z = 400 and the Gaussian distribution has mean 0 and variance 1. Input the noise vector z to the generator G_θ to obtain a generated sample;
Step 5.3: input the target-domain sample x, the interpolation sample, and the generated sample to the critic D_ω, and calculate the loss function L:
Step 5.4: back-propagate and apply gradient descent to the loss function L using the adaptive optimizer Adam, solving for the current optimal parameters of the critic D_ω in the loss function;
Step 5.5: randomly generate a batch of p noise vectors following the Gaussian distribution (p = 16 in this embodiment) and calculate the loss function F:
Step 5.6: back-propagate and apply gradient descent to the loss function F using the adaptive optimizer Adam, solving for the current optimal parameters of the generator G_θ; the optimizer learning rate in this embodiment is set to 0.0001;
Step 5.7: repeat steps 5.3–5.6 until the loss functions L and F converge to their optimal values, obtaining the optimal parameters of the critic D_ω and the generator G_θ;
Step 5.8: generation phase
Randomly generate n Gaussian-distributed noise vectors and input them to the trained generator, obtaining an extended target-domain sample set; in this embodiment the same number of samples is generated as in the source domain data;
Step 5.9: repeat steps 5.1–5.8 with the target domain data of each category as input, obtaining a trained generator and an expanded dataset for each category, and finally the extended dataset.
step 6: construct a dynamic domain-adversarial adaptive network composed of a feature extractor F_q, a label predictor F_y, a global domain discriminator F_g, and a local domain discriminator F_l;
Preferably, in step 6:
the feature extractor F_q comprises, in order, c_3 convolution layers, with an activation layer, a regularization layer, and a max-pooling layer added after each convolution layer, wherein the convolution layers have kernel size k_4 and stride s_4, and the max-pooling layers have kernel size k_5 and stride s_5;
in this embodiment c_3 is 5, the convolution kernel size k_4 takes the values 16, 64, 128, 256, 512 in turn, the stride is s_4 = 1, the max-pooling kernel size is k_5 = 4 with stride s_5 = 2, and the activation function is LeakyReLU;
the label predictor F_y comprises, in order, l_3 linear layers; in this example l_3 = 2;
the global domain discriminator F_g comprises, in order, l_4 linear layers with an activation layer added between the two linear layers; in this example l_4 = 2 and the activation function is LeakyReLU;
the local domain discriminator F_l comprises, in order, l_5 linear layers with an activation layer added between the two linear layers; in this example l_5 = 2 and the activation function is LeakyReLU.
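As a quick check on the layer hyper-parameters above, the feature-map length after each convolution or max-pooling stage follows from the kernel size and stride. A minimal helper, assuming 1-D layers without padding (the length-400 input below is only an example, borrowed from the noise-vector length of this embodiment):

```python
def out_len(n, k, s):
    # output length of a 1-D convolution or max-pooling layer
    # with input length n, kernel size k, stride s, and no padding
    return (n - k) // s + 1

# e.g. the max-pooling layer with k_5 = 4, s_5 = 2 applied to a length-400 signal
pooled = out_len(400, 4, 2)  # -> 199
```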
Step 7: training;
Preferably, step 7 specifically comprises the following steps:
Step 7.1: randomly extracting from the source-domain data set a batch of n_s samples, passing them sequentially through the feature extractor F_q and the label classifier F_y, and calculating the label loss function L_y:
L_y = −(1/n_s)·Σ_i y_i·log F_y(F_q(x_i))
where y_i is the damage label of sample x_i;
Step 7.2: setting the domain labels of the source-domain and target-domain samples;
Step 7.3: randomly extracting a batch of n_s samples from the source-domain data set and a batch of n_t samples from the extended target-domain data set, passing them sequentially through the feature extractor F_q and the global domain discriminator F_g, and calculating the global domain loss function L_g:
L_g = (1/(n_s + n_t))·Σ_i L_d(F_g(F_q(x_i)), d_i)
where L_d is the cross-entropy loss of the domain discriminator and d_i is the domain label of sample x_i;
Step 7.4: randomly extracting a batch of n_s samples from the source-domain data set and a batch of n_t samples from the extended target-domain data set (in this embodiment n_t = 32), passing them sequentially through the feature extractor F_q and the local domain discriminator F_l, and calculating the local loss function of each class and the local total loss function L_c by equations (6) and (7);
Step 7.5: calculating the A-distances of the global domain discriminator and the local domain discriminator using equations (8) and (9), respectively, to obtain d_g and d_l:
d_g = 2(1 − 2L_g) (8)
d_l = 2(1 − 2L_c^l) (9)
Step 7.6: calculating the dynamic factor κ:
κ = d_g / (d_g + (1/C)·Σ_l d_l)
where the sum runs over the C damage classes;
Step 7.7: combining the above loss functions and calculating the objective function M:
M = L_y − λ[(1 − κ)·L_g + κ·L_c]
where λ is a trade-off coefficient and θ_q, θ_y, θ_g, θ_l are respectively the parameters of the feature extractor F_q, label classifier F_y, global domain discriminator F_g, and local domain discriminator F_l;
Step 7.8: setting a stochastic gradient descent (SGD) optimizer (in this embodiment the learning rate is 0.0001 and the momentum is 0.9), performing back-propagation gradient descent on the objective function M, and solving for the current optimal parameters of the feature extractor F_q, label classifier F_y, global domain discriminator F_g, and local domain discriminator F_l;
Step 7.9: repeating steps 7.1–7.8 until the objective function M converges to its optimum, obtaining the optimal parameters of the feature extractor F_q, label classifier F_y, global domain discriminator F_g, and local domain discriminator F_l;
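Steps 7.5–7.7 reduce to simple arithmetic once the loss values are known. The sketch below computes the A-distances of equations (8)–(9), a dynamic factor κ, and a combined objective M; the ratio form of κ and the λ-weighted form of M are assumptions modeled on dynamic adversarial adaptation networks rather than the embodiment's exact formulas:

```python
def a_distance(loss):
    # A-distance estimate from a domain-discriminator loss, as in eqs. (8)-(9)
    return 2.0 * (1.0 - 2.0 * loss)

def dynamic_factor(l_g, l_c_per_class):
    # kappa weighs global vs. per-class (local) domain alignment; ratio form assumed
    d_g = a_distance(l_g)
    d_l = sum(a_distance(l) for l in l_c_per_class) / len(l_c_per_class)
    return d_g / (d_g + d_l)

def objective(l_y, l_g, l_c, kappa, lam=1.0):
    # assumed combined objective M: label loss minus kappa-weighted adversarial losses
    return l_y - lam * ((1.0 - kappa) * l_g + kappa * l_c)
```

When the global and local discriminator losses are equal, κ = 0.5 and the two adversarial terms are weighted evenly.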
Step 8: using the feature extractor F_q and label classifier F_y with the optimal parameters obtained by training to detect the target-domain data set and obtain the corresponding detection results; the final results are represented by the confusion matrix shown in Fig. 6, and the average accuracy of the detection results is 83.60%.
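The reported average accuracy can be read off a confusion matrix such as the one in Fig. 6. A small helper, assuming "average accuracy" means the mean per-class recall with rows as true classes — a common convention, not necessarily the exact metric of this embodiment:

```python
import numpy as np

def average_accuracy(confusion):
    # mean per-class recall: diagonal (correct detections) over row sums (true counts)
    cm = np.asarray(confusion, dtype=float)
    return float(np.mean(np.diag(cm) / cm.sum(axis=1)))
```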
Therefore, the deep-learning-based cross-domain bridge damage identification method of the present invention can expand the target-domain data by generating a large amount of pseudo data that is similar to and identically distributed with the target-domain data, thereby providing the unsupervised domain-adaptive method with the target-domain data it requires for learning and laying a foundation for applying unsupervised domain adaptation to real-world scenarios.
Finally, it should be noted that the above embodiments are intended only to illustrate, and not to limit, the technical solution of the present invention; although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that the technical solution of the present invention may be modified or equivalently substituted without departing from its spirit and scope.
Claims (10)
1. A cross-domain bridge damage identification method based on deep learning, characterized by comprising the following steps:
step 1: establishing a vehicle-bridge finite element model;
step 2: simulating the real bridge structure by adding uncertainties to the bridge finite element model within the vehicle-bridge finite element model;
step 3: data preprocessing
normalizing and interpolating the obtained source-domain data and target-domain data, keeping all sample spatial dimensions consistent, to obtain the processed source-domain data and target-domain data;
step 4: constructing a generative adversarial neural network composed of a generator G_θ and a critic D_ω;
step 5: training and generating;
step 6: constructing a dynamic domain-adversarial adaptive network composed of a feature extractor F_q, a label predictor F_y, a global domain discriminator F_g, and a local domain discriminator F_l;
step 7: training;
2. The deep learning-based cross-domain bridge damage identification method as claimed in claim 1, wherein: the step 1 specifically comprises the following steps:
step 1.1: determining bridge parameters, constructing a bridge finite element model using the bridge parameters, dividing the bridge finite element model into units, and numbering the units sequentially [1, 2, 3, …, C];
step 1.2: determining two-axle vehicle parameters, constructing a vehicle finite element model from the two-axle vehicle parameters, and setting an uncertain vehicle weight m_v = m_v0 + a_0·sin(t_0) and an uncertain vehicle speed v = v_0 + a_1·sin(t_1), where a_0 and a_1 are change amplitudes and t_0, t_1 ∈ [0, 2π];
step 1.3: in the undamaged state of the bridge, driving the vehicle with the uncertain weight m_v and the uncertain speed v across the bridge, and calculating the bridge displacement response using the Newmark-β method; repeating step 1.3 n times to obtain a sample set and constructing the corresponding labels;
step 1.4: simulating bridge damage by reducing the stiffness of a unit divided in the bridge finite element model to δ·E_0·I_0, where δ is the reduction coefficient;
performing stiffness reduction on unit No. 1 of the divided bridge, repeating the bridge-displacement-response acquisition method of step 1.3 n times to obtain a sample set, and constructing the labels, wherein each sample label is the number of the damaged unit.
3. The deep learning-based cross-domain bridge damage identification method as claimed in claim 2, wherein: the bridge parameters in step 1.1 include the bridge moment of inertia I_0, elastic modulus E_0, density per linear meter ρ_0, and length L_b;
the two-axle vehicle parameters in step 1.2 include the total weight m_v0, the two wheelbases d_1 and d_2, and the running speed v_0.
4. The deep learning-based cross-domain bridge damage identification method as claimed in claim 3, wherein: in step 1.1 the moment of inertia I_0 = 1.3901, the elastic modulus E_0 = 3.5×10^10 Pa, the density per linear meter ρ_0 = 18358, the length L_b = 25 m, and the number of divided units C = 10, the units of the bridge finite element model being numbered sequentially [1, 2, 3, …, 10];
the change amplitudes in step 1.2 are a_0 = 50 and a_1 = 0.1;
The sampling frequency in step 1.3 is 500Hz;
the reduction coefficient δ=0.75 in step 1.4.
5. The deep learning-based cross-domain bridge damage identification method as claimed in claim 4, wherein: the step 2 specifically comprises the following steps:
step 2.1: adding five uncertainties:
(1) simulating the influence of temperature on the bridge by changing the elastic modulus of the bridge finite element model, i.e. E′ = E_0·(1 + ζ_1), where ζ_1 ∈ (−0.05, 0.05);
(2) simulating the boundary condition form of the bridge's elastic support by setting the vertical stiffness E_v and rotational stiffness E_r of the boundary nodes of the bridge finite element model;
(3) simulating the geometric error of bridge modeling by changing the moment of inertia of the bridge finite element model, i.e. I′ = I_0·(1 + ζ_2), where ζ_2 ∈ (−0.03, 0.03), and simulating the material error of bridge modeling by changing the density of the bridge finite element model, i.e. ρ′ = ρ_0·(1 + ζ_3), where ζ_3 ∈ (−0.02, 0.02);
(4) simulating the bridge deck roughness by setting the road surface roughness grade A_0 of the bridge finite element model;
applying the method of step 1.3 to the bridge finite element model with the above four uncertainties added to obtain the bridge displacement response, and adding to the obtained bridge displacement response: (5) Gaussian noise γ ~ N(0, σ²) obeying a distribution with mean 0 and variance σ².
6. The deep learning-based cross-domain bridge damage identification method as claimed in claim 5, wherein: in step 4:
the generator G_θ comprises, in order, l_1 linear layers and c_1 transposed convolution layers, with an activation layer and a normalization layer added after each linear layer and each transposed convolution layer except the last, wherein the transposed convolution layers have kernel size k_1, number of kernels h_1, and stride s_1;
the critic D_ω comprises, in order, c_2 convolution layers and l_2 linear layers, with an activation layer, a regularization layer, and a max-pooling layer added after each layer except the last linear layer, wherein the convolution layers have kernel size k_2, number of kernels h_2, and stride s_2, and the max-pooling layers have kernel size k_3 and stride s_3.
7. The deep learning-based cross-domain bridge damage identification method as claimed in claim 6, wherein: the step 5 specifically comprises the following steps:
step 5.1: training phase
inputting the target-domain data into the generative adversarial network, randomly generating a noise vector z obeying a Gaussian distribution, and inputting the noise vector z into the generator G_θ to obtain the generated sample x̃;
step 5.3: inputting the target-domain sample x, the interpolated sample x̂, and the generated sample x̃ into the critic D_ω and calculating the loss function L;
step 5.4: performing back-propagation gradient descent on the loss function L using the adaptive optimizer Adam, and solving for the current optimal parameters ω* of the critic D_ω;
step 5.5: randomly generating a batch of p noise vectors obeying the Gaussian distribution and calculating the loss function F;
step 5.6: performing back-propagation gradient descent on the loss function F using the adaptive optimizer Adam, and solving for the current optimal parameters θ* of the generator G_θ;
step 5.7: repeating steps 5.3–5.6 until the loss functions L and F converge to their optima, obtaining the optimal parameters ω* of the critic D_ω and θ* of the generator G_θ;
step 5.8: generation phase
randomly generating n noise vectors obeying the Gaussian distribution and inputting them into the trained generator to obtain the extended target-domain sample set.
8. The deep learning-based cross-domain bridge damage identification method as claimed in claim 7, wherein: in step 6:
the feature extractor F_q comprises, in order, c_3 convolution layers, with an activation layer, a regularization layer, and a max-pooling layer added after each convolution layer, wherein the convolution layers have kernel size k_4 and stride s_4, and the max-pooling layers have kernel size k_5 and stride s_5;
the label predictor F_y comprises, in order, l_3 linear layers;
the global domain discriminator F_g comprises, in order, l_4 linear layers with an activation layer added between the two linear layers;
the local domain discriminator F_l comprises, in order, l_5 linear layers with an activation layer added between the two linear layers.
9. The deep learning-based cross-domain bridge damage identification method as claimed in claim 8, wherein: the step 7 specifically comprises the following steps:
step 7.1: randomly extracting from the source-domain data set a batch of n_s samples, passing them sequentially through the feature extractor F_q and the label classifier F_y, and calculating the label loss function L_y;
step 7.2: setting the domain labels;
step 7.3: randomly extracting a batch of n_s samples from the source-domain data set and a batch of n_t samples from the extended target-domain data set, passing them sequentially through the feature extractor F_q and the global domain discriminator F_g, and calculating the global domain loss function L_g;
step 7.4: randomly extracting a batch of n_s samples from the source-domain data set and a batch of n_t samples from the extended target-domain data set, passing them sequentially through the feature extractor F_q and the local domain discriminator F_l, and calculating the local loss function of each class and the local total loss function L_c by equations (6) and (7);
step 7.5: calculating the A-distances of the global domain discriminator and the local domain discriminator using equations (8) and (9), respectively, to obtain d_g and d_l:
d_g = 2(1 − 2L_g) (8)
d_l = 2(1 − 2L_c^l) (9)
step 7.6: calculating the dynamic factor κ;
step 7.7: combining the above loss functions and calculating the objective function M,
wherein θ_q, θ_y, θ_g, θ_l are respectively the parameters of the feature extractor F_q, label classifier F_y, global domain discriminator F_g, and local domain discriminator F_l;
step 7.8: setting a stochastic gradient descent (SGD) optimizer, performing back-propagation gradient descent on the objective function M, and solving for the current optimal parameters of the feature extractor F_q, label classifier F_y, global domain discriminator F_g, and local domain discriminator F_l.
10. The deep learning-based cross-domain bridge damage identification method as claimed in claim 8, wherein: in step 4, l_1 is 1 and c_1 is 3, the transposed convolution kernel size is k_1 = 3, the numbers of kernels h_1 are 64, 16, 3 in turn, and the stride is s_1 = 2; the activation function of the first two layers is LeakyReLU and that of the last layer is Sigmoid;
c_2 is 2, the convolution kernel size is k_2 = 16, the numbers of kernels h_2 are 128, 64 in turn, and the stride is s_2 = 2; the max-pooling kernel size is k_3 = 4 with stride s_3 = 4; l_2 is 2, the activation function is LeakyReLU, and the regularization layer is a Dropout layer;
in step 6, c_3 is 5, the convolution kernel size k_4 takes the values 16, 64, 128, 256, 512 in turn, the stride is s_4 = 1, the max-pooling kernel size is k_5 = 4 with stride s_5 = 2, and the activation function is LeakyReLU.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211650213.XA CN116049937A (en) | 2022-12-21 | 2022-12-21 | Cross-domain bridge damage identification method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116049937A true CN116049937A (en) | 2023-05-02 |
Family
ID=86122846
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116049937A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116611302A (en) * | 2023-07-18 | 2023-08-18 | 成都理工大学 | Bridge check coefficient prediction method considering vehicle-mounted randomness effect |
CN116611302B (en) * | 2023-07-18 | 2023-09-19 | 成都理工大学 | Bridge check coefficient prediction method considering vehicle-mounted randomness effect |
CN117456309A (en) * | 2023-12-20 | 2024-01-26 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Cross-domain target identification method based on intermediate domain guidance and metric learning constraint |
CN117456309B (en) * | 2023-12-20 | 2024-03-15 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Cross-domain target identification method based on intermediate domain guidance and metric learning constraint |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||