CN116049937A - Cross-domain bridge damage identification method based on deep learning - Google Patents

Cross-domain bridge damage identification method based on deep learning Download PDF

Info

Publication number
CN116049937A
CN116049937A CN202211650213.XA CN202211650213A CN116049937A CN 116049937 A CN116049937 A CN 116049937A CN 202211650213 A CN202211650213 A CN 202211650213A CN 116049937 A CN116049937 A CN 116049937A
Authority
CN
China
Prior art keywords
bridge
domain
layer
finite element
element model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211650213.XA
Other languages
Chinese (zh)
Inventor
贺文宇
李志东
户东阳
李祎琳
李怡帆
张静
胡志祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202211650213.XA priority Critical patent/CN116049937A/en
Publication of CN116049937A publication Critical patent/CN116049937A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/13Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/23Design optimisation, verification or simulation using finite element methods [FEM] or finite difference methods [FDM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Structural Engineering (AREA)
  • Civil Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Architecture (AREA)
  • Pure & Applied Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a cross-domain bridge damage identification method based on deep learning, which comprises the following steps: building a vehicle-bridge finite element model-simulating a bridge real structure by adding uncertainty in the bridge finite element model in the vehicle-bridge finite element model-data preprocessing-building a generative countermeasure neural network-training and generating phase-building a dynamic domain countermeasure adaptive network-training phase-utilizing a trained feature extractor F q Sum tag classifierF y For a target domain dataset
Figure DDA0004010156100000011
And detecting to obtain a corresponding detection result. By adopting the cross-domain bridge damage identification method based on deep learning, the simulation expansion of bridge displacement response signals generated by the two-axle vehicle when the bridge runs at a constant speed can be realized through the deep generation type countermeasure network, and then the source domain and the target domain features are projected to the same feature space through the dynamic countermeasure adaption network, so that the cross-domain damage feature extraction and adaption are realized, and the bridge damage identification under the cross-domain condition is completed.

Description

Cross-domain bridge damage identification method based on deep learning
Technical Field
The invention relates to a bridge detection technology, in particular to a cross-domain bridge damage identification method based on deep learning.
Background
Since deep learning techniques have powerful and efficient capabilities for learning and predicting large data, data-driven data mining techniques have been developed for structural damage detection in the field of structural health monitoring. Namely, the structural response is used as the input of feature mining, and deep learning can mine damage sensitive features from massive data without specific structural information and is more effective than many traditional methods.
However, the data-driven deep learning approach is far from fully developed in terms of structural damage detection, and its main challenge is the lack of damage data from actual structural markers, as the structural conditions are not known in advance, and some researchers build finite element models of real structures to generate marked damage data that can take into account all possible damage scenarios for network training. However, the finite element model is affected by the real environment and parameters such as boundary conditions, and is difficult to determine and model.
When a deep learning model trained based on a finite element model is applied to an actual structure, differences between the finite element model and the actual structure may cause performance degradation.
For this reason, the prior art developed an unsupervised domain adaptation aimed at handling the data distribution differences between the source domain and the target domain. It can learn knowledge from the labeled source domain and apply intelligently to unlabeled target domains. However, another challenge with unsupervised domain adaptation is: the target domain is required and also a large amount of data is required. In reality, a large amount of data cannot be provided, which is a big bottleneck that the self-adaption of the unsupervised field cannot be applied to the actual scene.
Disclosure of Invention
In order to solve the problems, the invention aims to provide a cross-domain bridge damage identification method based on deep learning, which is based on dynamic field self-adaption after data generalization, so that the problem that a large amount of damage data of a target domain cannot be provided by an actual scene in bridge detection is solved, knowledge of a source domain can be dynamically migrated to the target domain, and the problem of gap between a bridge finite element model and a bridge actual structure due to environment or modeling errors is solved, thereby obtaining a damage identification model which has good identification precision and can be applied to the actual damage identification model.
In order to achieve the above purpose, the invention provides a cross-domain bridge damage identification method based on deep learning, which comprises the following steps:
step 1: establishing a vehicle-bridge finite element model;
step 2: simulating a bridge real structure by adding uncertainty in a bridge finite element model in a vehicle-bridge finite element model;
step 3: data preprocessing
For the obtained source domain data
Figure SMS_1
And the target Domain->
Figure SMS_2
Carrying out normalization and interpolation processing, keeping all sample space dimensions consistent, and obtaining processed source domain data +.>
Figure SMS_3
And target Domain data->
Figure SMS_4
Step 4: build by generator G θ And criticizing device D ω A constitutive, generative antagonistic neural network;
step 5: training and generating;
step 6: construction by feature extractor F q Tag predictor F y Global domain discriminator F g And a local domain discriminator F l The dynamic domain composed resists the adaptive network;
step 7: training;
step 8: obtaining optimal parameters by training
Figure SMS_5
Feature extractor F of (1) q And tag classifier F y For the target domain dataset->
Figure SMS_6
And detecting to obtain a corresponding detection result.
Preferably, the step 1 specifically includes the following steps:
step 1.1: determining bridge parameters, constructing a bridge finite element model by utilizing the bridge parameters, and sequentially numbering [1,2,3, …, C ] after dividing units of the bridge finite element model;
step 1.2: determining two-axis vehicle parameters, constructing a vehicle finite element model according to the two-axis vehicle parameters, and setting an uncertainty vehicle weight m v =m v0 +a 0 ×sin(t 0 ) And uncertain vehicle speed v=v 0 +a 1 ×sin(t 1), wherein a0 、a 1 To change the amplitude, t 0 ,t 1 ∈[0,2π];
Step 1.3: in the nondestructive state of the bridge, the vehicle has an uncertain vehicle weight m v And the uncertain speed v passes through the bridge, and the displacement response of the bridge is calculated and obtained by using a Newmark-beta method
Figure SMS_7
Repeating step 1.3n times to obtain sample set +.>
Figure SMS_8
And construct tag->
Figure SMS_9
wherein />
Figure SMS_10
Step 1.4: by compromising the cell stiffness delta E divided in the bridge finite element model 0 ×I 0 Simulating bridge damage, wherein delta is a reduction coefficient;
stiffness reduction is carried out on a No. 1 unit divided by the bridge, and the method for acquiring the bridge displacement response in the step 1.3 is repeated n times to obtain a sample set
Figure SMS_11
And construct tag->
Figure SMS_12
Wherein the sample tag is the unit number of the lesion, i.e
Figure SMS_13
Step 1.5: repeating step 1.4 until all units of bridge division are completed, and finally obtaining a source domain data set
Figure SMS_14
And tag set->
Figure SMS_15
Preferably, the bridge parameters in step 1.1 include bridge moment of inertia I 0 Modulus of elasticity E 0 Density per linear meter ρ 0 And length L b
The two-axis vehicle parameters in step 1.2 include the total weight m v0 Two wheelbases d 1 、d 2 And a running speed v 0
Preferably, the moment of inertia I in step 1.1 0 = 1.3901, modulus of elasticity E 0 =3.5×10 10 pa, density per linear meter ρ 0 = 18358 and length L b The bridge finite element model is divided into units and numbered [1,2,3, …,10 ] in sequence =25m and the dividing unit number C=10];
The amplitude of change a in step 1.2 0 =50、a 1 =0.1;
The sampling frequency in step 1.3 is 500Hz;
the reduction coefficient δ=0.75 in step 1.4.
Preferably, the step 2 specifically includes the following steps:
step 2.1: five uncertainties are added:
(1) Simulating the influence of temperature on the bridge by changing the elastic modulus of the bridge finite element model, i.e. E' =e 0 ×(1+ζ 1), wherein ζ1 ∈(-0.05,0.05);
(2) Vertical stiffness E of boundary nodes by setting bridge finite element model v And corner stiffness E r Simulating the boundary condition form of the bridge elastic support;
(3) By varying the moment of inertia I of the bridge finite element model 0 To simulate the geometric error of bridge modeling, i.e., I' =i 0 ×(1+ζ 2), wherein ζ2 Epsilon (-0.03,0.03); at the same time by changing the density rho of the bridge finite element model 0 To simulate the material errors of bridge modeling, i.e., ρ' =ρ 0 ×(1+ζ 3), wherein ζ3 ∈(-0.02,0.02);
(4) Grade A of road surface roughness by setting bridge finite element model 0 Simulating the roughness of the bridge deck;
and (3) performing the method in the step (1.3) by adding the four uncertain bridge finite element models to obtain bridge displacement response
Figure SMS_16
And adding in the obtained bridge displacement response: (5) Obeying the mean value to be 0 and the variance to be sigma 2 Gaussian distribution noise gamma-N (0, sigma) 2 ) Obtain->
Figure SMS_17
Step 2.2: repeating the steps 1.3 to 1.4 to obtain a displacement response sample set of bridge damage and each unit damage
Figure SMS_18
Finally obtain the target domain dataset +.>
Figure SMS_19
Preferably, in step 4:
the generator G θ Comprising in order l 1 Linear layers c 1 A plurality of transposed convolution layers, each linear layer and each transposed convolution layer except the last layer are added with a linear activation layer and a normalization layer, wherein the convolution kernel of the transposed convolution layer has a size of k 1 The number of convolution kernels is h 1 Step length s 1
The criticizing device D ω Comprising c in order 2 Each convolution layer/ 2 Each linear layer is added with an activation layer, a regularization layer and a maximum pooling layer except the last linear layer, wherein the convolution kernel of the convolution layer is k 2 The number of convolution kernels is h 2 Step length s 2 The convolution kernel size of the max pooling layer is k 3 Step length s 3
Preferably, the step 5 specifically includes the following steps:
step 5.1: training phase
Generating input to an antagonism network as target domain data
Figure SMS_20
And randomly generating a noise vector z conforming to the Gaussian distribution, and inputting the noise vector z to the generator G θ The resulting sample +.>
Figure SMS_21
Step 5.2: calculating interpolation data
Figure SMS_22
Figure SMS_23
Wherein epsilon obeys uniform distribution U0, 1]Target domain samples
Figure SMS_24
Step 5.3: sample x, interpolation sample of target field
Figure SMS_25
And generate sample->
Figure SMS_26
Input criticizing device D ω Calculating a loss function L:
Figure SMS_27
wherein lambda is the penalty weight,
Figure SMS_28
2-norm after differentiation for the criticizer;
step 5.4: counter-propagating gradient descent of the loss function L by using an adaptive optimizer Adam, and solving the criticizer D in the loss function ω Is the current optimum parameter of (a)
Figure SMS_29
Step 5.5: randomly generating a batch of noise sets containing p Gaussian-distribution-compliant noise sets
Figure SMS_30
Calculating a loss function F:
Figure SMS_31
step 5.6: loss function using adaptive optimizer AdamCounter-propagating gradient descent of the number F, solving the generator G in the loss function θ Is the current optimum parameter of (a)
Figure SMS_32
Step 5.7: repeating the steps 5.3-5.6 until the criticizing device D is obtained when the loss functions L and F are converged to the optimal value ω Sum generator G θ Optimum parameters of (a)
Figure SMS_33
and />
Figure SMS_34
Step 5.8: stage of generation
Randomly generating n Gaussian-distributed noise
Figure SMS_35
Input already trained generator->
Figure SMS_36
In (2) obtaining an extended target domain sample set +.>
Figure SMS_37
Step 5.9: repeating steps 5.1-5.8, and the target domain data
Figure SMS_38
As input, a trained generator for each category is obtained>
Figure SMS_39
Expansion data set of each category +.>
Figure SMS_40
Finally, an extended data set +.>
Figure SMS_41
Preferably, in step 6:
the feature extractor F q Sequentially packInclude c 3 And adding an activation layer, a regularization layer and a maximum pooling layer after each convolution layer, wherein the convolution kernel of the convolution layers is k 4 Step length s 4 The convolution kernel size of the max pooling layer is k 5 Step length s 5
The tag predictor F y Comprising in order l 3 A plurality of linear layers;
the global domain discriminator F g Comprising in order l 4 A plurality of linear layers, two linear layers and an intermediate additive activation layer;
the local domain discriminator F l Comprising in order l 5 The linear layers, the two linear layers and the middle adding active layer.
Preferably, the step 7 specifically includes the following steps:
step 7.1: from a source domain dataset
Figure SMS_42
Randomly extracting a batch of n-containing s Sample->
Figure SMS_43
Sequentially through feature extractor F q And tag classifier F y Calculating a tag loss function L y
Figure SMS_44
wherein ,
Figure SMS_45
is->
Figure SMS_46
Is a label of (2);
step 7.2: setting domain labels
The source domain data domain tag is set to
Figure SMS_47
The extended target domain data domain tag is set to +.>
Figure SMS_48
Step 7.3: respectively from source domain data sets
Figure SMS_49
Randomly extracting a batch of n-containing s Sample->
Figure SMS_50
And +.>
Figure SMS_51
Randomly extracting a batch of n-containing t Sample->
Figure SMS_52
Sequentially through feature extractor F q And a global domain discriminator F g Computing a global domain loss function L g
Figure SMS_53
wherein ,Ld As a function of the cross-entropy,
Figure SMS_54
d i is x i Is a domain label of (2);
step 7.4: respectively from source domain data sets
Figure SMS_55
Randomly extracting a batch of n-containing s Sample->
Figure SMS_56
And +.>
Figure SMS_57
Randomly extracting a batch of n-containing t Sample->
Figure SMS_58
Sequentially through feature extractor F q And a local domain discriminator F l Parallel warp type (6)And equation (7) calculate the local loss function for each class +.>
Figure SMS_59
And a local area total loss function L c :
Figure SMS_60
Figure SMS_61
wherein ,
Figure SMS_62
local domain discriminator for category c, < >>
Figure SMS_63
Cross entropy function for category c; />
Step 7.5: calculating the A-distance of the global domain discriminator and the local domain discriminator using the formulas (8) and (9), respectively, to obtain d g 、d l
d g =2(1-2(L g )) (8)
Figure SMS_64
Step 7.6: calculating a dynamic factor kappa:
Figure SMS_65
step 7.7: to sum up the above-mentioned loss functions, an objective function M is calculated:
Figure SMS_66
wherein :θqyg ,
Figure SMS_67
Respectively feature extractor F q Label classifier F y Global domain discriminator F g Local domain discriminator F l Parameters of (2);
step 7.8: setting a random gradient descent SGD optimizer, carrying out counter propagation gradient descent on an objective function M, and solving a feature extractor F in the objective function M q Label classifier F y Global domain discriminator F g Local domain discriminator F l Is the current optimum parameter of (a)
Figure SMS_68
Step 7.9: repeating steps 7.1-7.8 until the feature extractor F is obtained when the objective function M converges to an optimum q Label classifier F y Global domain discriminator F g Local domain discriminator F l Optimum parameters of (a)
Figure SMS_69
Preferably, in step 4 l 1 1, c 1 Taking 3, convolution kernel size k 1 Number of convolution kernels h =3 1 Sequentially taking 64,16,3 and step s 1 =2, the first two take-out LeakyRule functions of the activation function, and the last layer of activation function is Sigmoid function;
c 2 taking 2, convolution kernel size k 2 Number of convolution kernels h =16 2 Sequentially taking 128,64 and step s 2 =2, max pooling layer convolution kernel size k 3 =4, step size s 3 =4,l 2 Taking 2, wherein the activating function takes a LeakyRule function, and the regularization layer is a Dropout layer;
step 6 c 3 Taking 5, the convolution kernel size of the convolution layer is k 4 The value of the compound is 16,64,128,256,512, and the step length is s 4 =1, the convolution kernel size of the max pooling layer is k 5 =4, step size s 5 =2, the activation function takes the LeakyRule function.
Therefore, the method for identifying the damage of the cross-domain bridge based on deep learning has the following beneficial effects:
1. the generated type countermeasure network generated sample has the advantages of diversity and difficulty in becoming noise, and is identical and distributed with the dimension of the target sample, so that the defect that a large amount of data is needed for providing the deep learning network in a real scene is overcome.
2. The adopted dynamic domain adaptation network can dynamically realize the joint distribution of Ji Yuanyu and target domain data, and can adapt to each application scene of bridge damage identification in reality.
3. A large number of sensors are not required to be arranged to obtain a plurality of position responses, and damage characteristics of the bridge can be obtained only by a small number of position responses, so that the cost of bridge damage detection is greatly reduced.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a finite element model diagram of a vehicle-bridge of the present invention;
FIG. 3 is a diagram of a generated countermeasure network according to the present invention;
FIG. 4 is a T-SNE diagram of the generated data and the target domain data of the present invention;
FIG. 5 is a diagram of a dynamic domain counter-adaptive network according to the present invention;
fig. 6 is a diagram of the bridge damage detection result of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings, and it should be noted that, while the present embodiment provides a detailed implementation and a specific operation process on the premise of the present technical solution, the protection scope of the present invention is not limited to the present embodiment.
The cross-domain bridge damage identification method based on deep learning comprises the following steps:
step 1: establishing a vehicle-bridge finite element model;
preferably, the step 1 specifically includes the following steps:
step 1.1: determining bridge parameters, constructing a bridge finite element model by utilizing the bridge parameters, and sequentially numbering [1,2,3, …, C ] after dividing units of the bridge finite element model;
preferably, the bridge parameters in step 1.1 include bridge moment of inertia I 0 Modulus of elasticity E 0 Density per linear meter ρ 0 And length L b
Preferably, the moment of inertia I in step 1.1 0 = 1.3901, modulus of elasticity E 0 =3.5×10 10 pa, density per linear meter ρ 0 = 18358 and length L b The bridge finite element model is divided into units and numbered [1,2,3, …,10 ] in sequence =25m and the dividing unit number C=10];
Step 1.2: determining two-axis vehicle parameters, constructing a vehicle finite element model according to the two-axis vehicle parameters, and setting an uncertainty vehicle weight m v =m v0 +a 0 ×sin(t 0 ) And uncertain vehicle speed v=v 0 +a 1 ×sin(t 1), wherein a0 、a 1 To change the amplitude, t 0 ,t 1 ∈[0,2π]The method comprises the steps of carrying out a first treatment on the surface of the The amplitude of change a in step 1.2 0 =50、a 1 =0.1;
The two-axis vehicle parameters in step 1.2 include the total weight m v0 Two wheelbases d 1 、d 2 And a running speed v 0
Total weight m in this example v0 =18000 kg, two wheelbases d 1 =1.95m、d 2 =1.05m and travel speed v 0 =10m/s,
Step 1.3: in the nondestructive state of the bridge, the vehicle has an uncertain vehicle weight m v And the uncertain speed v passes through the bridge, and the displacement response of the bridge is calculated and obtained by using a Newmark-beta method
Figure SMS_70
The invention extracts the displacement response of the 1,5,9 unit nodes as a sample, and repeats the steps 1.3n times (n=1250 in the embodiment) to obtain a sample set ∈ ->
Figure SMS_71
And constructing the tag
Figure SMS_72
wherein />
Figure SMS_73
The sampling frequency in step 1.3 is 500Hz;
step 1.4: by compromising the cell stiffness delta E divided in the bridge finite element model 0 ×I 0 Simulating bridge damage, wherein delta is a reduction coefficient; the reduction coefficient δ=0.75 in step 1.4.
Stiffness reduction is carried out on a number 1 unit divided by the bridge, and the method for obtaining the bridge displacement response in the step 1.3 is repeated n times (n=1250 in the embodiment) to obtain a sample set
Figure SMS_74
And construct tag->
Figure SMS_75
Wherein the sample tag is the injured unit number, i.e. +.>
Figure SMS_76
Step 1.5: repeating step 1.4 until all units of bridge division are completed, and finally obtaining a source domain data set
Figure SMS_77
And tag set->
Figure SMS_78
Step 2: simulating a bridge real structure by adding uncertainty in a bridge finite element model in a vehicle-bridge finite element model;
preferably, the step 2 specifically includes the following steps:
step 2.1: five uncertainties are added:
(1) Simulating the influence of temperature on the bridge by changing the elastic modulus of the bridge finite element model, i.e. E' =e 0 ×(1+ζ 1), wherein ζ1 Epsilon (-0.05,0.05); zeta in this example 1 =0.04;
(2) Vertical stiffness E of boundary nodes by setting bridge finite element model v And corner stiffness E r Simulating the boundary condition form of the bridge elastic support; vertical stiffness E of boundary node in this embodiment v =1.95×10 11 N/m and corner stiffness E r =1800N□m;
(3) By varying the moment of inertia I of the bridge finite element model 0 To simulate the geometric error of bridge modeling, i.e., I' =i 0 ×(1+ζ 2), wherein ζ2 E (-0.03,0.03), ζ in this example 2 -0.09; at the same time by changing the density rho of the bridge finite element model 0 To simulate the material errors of bridge modeling, i.e., ρ' =ρ 0 ×(1+ζ 3), wherein ζ3 E (-0.02,0.02), ζ in this example 3 =0.02;
(4) Grade A of road surface roughness by setting bridge finite element model 0 Simulating the roughness of the bridge deck; grade a of road surface roughness in this embodiment 0 =16
And (3) performing the method in the step (1.3) by adding the four uncertain bridge finite element models to obtain bridge displacement response
Figure SMS_79
And adding in the obtained bridge displacement response: (5) Obeying the mean value to be 0 and the variance to be sigma 2 Gaussian distribution noise y ≡n (0, σ) 2 ) Obtain->
Figure SMS_80
Step 2.2: repeating the steps 1.3 to 1.4 to obtain a displacement response sample set of bridge damage and each unit damage
Figure SMS_81
Finally obtain the target domain dataset +.>
Figure SMS_82
Since only a small number of samples exist in reality, the target domain sample number m=250 in the present embodiment
Step 3: data preprocessing
For the obtained source domain data
Figure SMS_83
And the target Domain->
Figure SMS_84
Carrying out normalization and interpolation processing, keeping all sample space dimensions consistent, and obtaining processed source domain data +.>
Figure SMS_85
And target Domain data->
Figure SMS_86
Step 4: build by generator G θ And criticizing device D ω A constitutive, generative antagonistic neural network;
preferably, in step 4:
the generator G θ Comprising in order l 1 Linear layers c 1 A plurality of transposed convolution layers, each linear layer and each transposed convolution layer except the last layer are added with a linear activation layer and a normalization layer, wherein the convolution kernel of the transposed convolution layer has a size of k 1 The number of convolution kernels is h 1 Step length s 1
Preferably, in step 4 l 1 1, c 1 Taking 3, convolution kernel size k 1 Number of convolution kernels h =3 1 Sequentially taking 64,16,3 and step s 1 =2, the first two take-out LeakyRule functions of the activation function, and the last layer of activation function is Sigmoid function;
the criticizing device D ω Comprising c in order 2 Each convolution layer/ 2 Each linear layer is added with an activation layer, a regularization layer and a maximum pooling layer except the last linear layer, wherein the convolution kernel of the convolution layer is k 2 The number of convolution kernels is h 2 Step length s 2 The convolution kernel size of the max pooling layer is k 3 Step size ofs 3
c 2 Taking 2, convolution kernel size k 2 Number of convolution kernels h =16 2 Sequentially taking 128,64 and step s 2 =2, max pooling layer convolution kernel size k 3 =4, step size s 3 =4,l 2 Taking 2, wherein the activating function takes a LeakyRule function, and the regularization layer is a Dropout layer;
step 5: training and generating;
preferably, the step 5 specifically includes the following steps:
step 5.1: training phase
Generating input to an antagonism network as target domain data
Figure SMS_87
And randomly generating a noise vector z subject to a gaussian distribution, the length of the noise vector in this embodiment being z=400, the gaussian distribution subject to a mean value being 0, the variance being 1, and inputting the noise vector z to the generator G θ The resulting sample +.>
Figure SMS_88
Step 5.2: calculating interpolation data
Figure SMS_89
Figure SMS_90
Wherein epsilon obeys uniform distribution U0, 1]Target domain samples
Figure SMS_91
Step 5.3: sample x, interpolation sample of target field
Figure SMS_92
And generate sample->
Figure SMS_93
Input criticizing device D ω Calculating a loss function L:
Figure SMS_94
wherein lambda is the penalty weight,
Figure SMS_95
2-norm after differentiation for the criticizer;
step 5.4: counter-propagating gradient descent of the loss function L by using an adaptive optimizer Adam, and solving the criticizer D in the loss function ω Is the current optimum parameter of (a)
Figure SMS_96
Step 5.5: randomly generating a batch of noise sets containing p Gaussian-distribution-compliant noise sets
Figure SMS_97
In this embodiment, p=16, and the loss function F is calculated:
Figure SMS_98
step 5.6: counter-propagating gradient descent of a loss function F using an adaptive optimizer Adam, solving a generator G in the loss function θ Is the current optimum parameter of (a)
Figure SMS_99
The optimizer learning rate in this embodiment is set to 0.0001;
step 5.7: repeating the steps 5.3-5.6 until the criticizing device D is obtained when the loss functions L and F are converged to the optimal value ω Sum generator G θ Optimum parameters of (a)
Figure SMS_100
and />
Figure SMS_101
Step 5.8: stage of generation
Randomly generating n subject gaussianDistributed noise
Figure SMS_102
Input already trained generator->
Figure SMS_103
In (2) obtaining an extended target domain sample set +.>
Figure SMS_104
Generating the same number of samples as the source domain data in this example;
step 5.9: repeating steps 5.1-5.8, and the target domain data
Figure SMS_105
As input, a trained generator for each category is obtained>
Figure SMS_106
Expansion data set of each category +.>
Figure SMS_107
Finally, an extended data set +.>
Figure SMS_108
Step 6: construction by feature extractor F q Tag predictor F y Global domain discriminator F g And a local domain discriminator F l The dynamic domain composed resists the adaptive network;
preferably, in step 6:
the feature extractor F q Comprising c in order 3 And adding an activation layer, a regularization layer and a maximum pooling layer after each convolution layer, wherein the convolution kernel of the convolution layers is k 4 Step length s 4 The convolution kernel size of the max pooling layer is k 5 Step length s 5
Step 6 c 3 Taking 5, the convolution kernel size of the convolution layer is k 4 The value of the compound is 16,64,128,256,512, and the step length is s 4 =1, the convolution kernel size of the max pooling layer is k 5 =4, step size s 5 =2, the activation function takes the LeakyRule function.
The tag predictor F y Comprising in order l 3 Linear layers, in this example l 3 =2;
The global domain discriminator F g Comprising in order l 4 A plurality of linear layers, two linear layers and an intermediate additive activation layer, in this example l 4 =2, the activation function takes a LeakyRule function;
the local domain discriminator F l Comprising in order l 5 A plurality of linear layers, two linear layers and an intermediate additive activation layer, in this example l 5 =2, the activation function takes the LeakyRule function.
Step 7: training;
preferably, the step 7 specifically includes the following steps:
step 7.1: from a source domain dataset
Figure SMS_109
Randomly extracting a batch of n-containing s Sample->
Figure SMS_110
Sequentially through feature extractor F q And tag classifier F y Calculating a tag loss function L y
Figure SMS_111
wherein ,
Figure SMS_112
is->
Figure SMS_113
Is a label of (2);
step 7.2: setting domain labels
The source domain data domain tag is set to
Figure SMS_114
The extended target domain data domain tag is set to +.>
Figure SMS_115
Step 7.3: respectively from source domain data sets
Figure SMS_116
Randomly extracting a batch of n-containing s Sample->
Figure SMS_117
And +.>
Figure SMS_118
Randomly extracting a batch of n-containing t Sample->
Figure SMS_119
Sequentially through feature extractor F q And a global domain discriminator F g Computing a global domain loss function L g
Figure SMS_120
wherein ,Ld As a function of the cross-entropy,
Figure SMS_121
d i is x i Is a domain label of (2);
step 7.4: respectively from source domain data sets
Figure SMS_122
Randomly extracting a batch of n-containing s Sample->
Figure SMS_123
And +.>
Figure SMS_124
Randomly extracting a batch of n-containing t Sample->
Figure SMS_125
N in the present embodiment t =32, sequentially through feature extractor F q And a local domain discriminator F l And calculating the local loss function of each class by the equation (6) and the equation (7)>
Figure SMS_126
And a local area total loss function L c :
Figure SMS_127
Figure SMS_128
wherein ,
Figure SMS_129
local domain discriminator for category c, < >>
Figure SMS_130
Cross entropy function for category c;
step 7.5: calculating the A-distance of the global domain discriminator and the local domain discriminator using the formulas (8) and (9), respectively, to obtain d g 、d l
d g =2(1-2(L g )) (8)
Figure SMS_131
Step 7.6: calculating a dynamic factor kappa:
Figure SMS_132
step 7.7: to sum up the above-mentioned loss functions, an objective function M is calculated:
Figure SMS_133
wherein :θqyg ,
Figure SMS_134
Respectively feature extractor F q Label classifier F y Global domain discriminator F g Local domain discriminator F l Parameters of (2);
step 7.8: setting a random gradient descent SGD optimizer, in this embodiment, setting the learning rate of the SGD optimizer to be 0.0001 and the momentum to be 0.9, performing counter-propagation gradient descent on the objective function M, and solving a feature extractor F in the objective function M q Label classifier F y Global domain discriminator F g Local domain discriminator F l Is the current optimum parameter of (a)
Figure SMS_135
Step 7.9: repeating steps 7.1-7.8 until the feature extractor F is obtained when the objective function M converges to an optimum q Label classifier F y Global domain discriminator F g Local domain discriminator F l Optimum parameters of (a)
Figure SMS_136
Step 8: obtaining optimal parameters by training
Figure SMS_137
Feature extractor F of (1) q And tag classifier F y For the target domain dataset->
Figure SMS_138
The detection was performed to obtain a corresponding detection result, and it should be noted that the final result was represented by a confusion matrix as shown in fig. 6, and the average accuracy of the detection result was found to be 83.60%.
Therefore, the method for identifying the cross-domain bridge damage based on the deep learning can perform data expansion on the target domain data to generate a large amount of pseudo data which is similar to the target domain data and distributed in the same way, thereby providing the required target domain data for the self-adaptive method in the unsupervised domain, and then participating in learning, and providing a foundation for self-adaptive application of the unsupervised domain to actual scenes.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention and not for limiting it, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that: the technical scheme of the invention can be modified or replaced by the same, and the modified technical scheme cannot deviate from the spirit and scope of the technical scheme of the invention.

Claims (10)

1. A cross-domain bridge damage identification method based on deep learning is characterized by comprising the following steps of: the method comprises the following steps:
step 1: establishing a vehicle-bridge finite element model;
step 2: simulating a bridge real structure by adding uncertainty in a bridge finite element model in a vehicle-bridge finite element model;
step 3: data preprocessing
For the obtained source domain data
Figure QLYQS_1
And the target Domain->
Figure QLYQS_2
Carrying out normalization and interpolation processing, keeping all sample space dimensions consistent, and obtaining processed source domain data +.>
Figure QLYQS_3
And target Domain data->
Figure QLYQS_4
Step 4: build by generator G θ And criticizing device D ω A constitutive, generative antagonistic neural network;
step 5: training and generating;
step 6: construction by feature extractor F q Tag predictor F y Global domain discriminator F g And a local domain discriminator F l The dynamic domain composed resists the adaptive network;
step 7: training;
step 8: obtaining optimal parameters by training
Figure QLYQS_5
Feature extractor F of (1) q And tag classifier F y For the target domain dataset->
Figure QLYQS_6
And detecting to obtain a corresponding detection result.
2. The deep learning-based cross-domain bridge damage identification method as claimed in claim 1, wherein: the step 1 specifically comprises the following steps:
step 1.1: determining bridge parameters, constructing a bridge finite element model by utilizing the bridge parameters, and sequentially numbering [1,2,3, …, C ] after dividing units of the bridge finite element model;
step 1.2: determining two-axis vehicle parameters, constructing a vehicle finite element model according to the two-axis vehicle parameters, and setting an uncertainty vehicle weight m v =m v0 +a 0 ×sin(t 0 ) And uncertain vehicle speed v=v 0 +a 1 ×sin(t 1), wherein a0 、a 1 To change the amplitude, t 0 ,t 1 ∈[0,2π];
Step 1.3: in the nondestructive state of the bridge, the vehicle has an uncertain vehicle weight m v And the uncertain speed v passes through the bridge, and the displacement response of the bridge is calculated and obtained by using a Newmark-beta method
Figure QLYQS_7
Repeating step 1.3n times to obtain sample set +.>
Figure QLYQS_8
And construct tag->
Figure QLYQS_9
wherein />
Figure QLYQS_10
Step 1.4: by compromising the cell stiffness delta E divided in the bridge finite element model 0 ×I 0 Simulating bridge damage, wherein delta is a reduction coefficient;
stiffness reduction is carried out on a No. 1 unit divided by the bridge, and the method for acquiring the bridge displacement response in the step 1.3 is repeated n times to obtain a sample set
Figure QLYQS_11
And construct tag->
Figure QLYQS_12
Wherein the sample tag is the injured unit number, i.e. +.>
Figure QLYQS_13
Step 1.5: repeating step 1.4 until all units of bridge division are completed, and finally obtaining a source domain data set
Figure QLYQS_14
And tag set->
Figure QLYQS_15
3. The deep learning-based cross-domain bridge damage identification method as claimed in claim 2, wherein: the bridge parameters in step 1.1 include the bridge moment of inertia I 0 Modulus of elasticity E 0 Density per linear meter ρ 0 And length L b
The two-axis vehicle parameters in step 1.2 include the total weight m v0 Two wheelbases d 1 、d 2 And a running speed v 0
4. A method according to claim 3The cross-domain bridge damage identification method based on deep learning is characterized by comprising the following steps of: moment of inertia I in step 1.1 0 = 1.3901, modulus of elasticity E 0 =3.5×10 10 pa, density per linear meter ρ 0 = 18358 and length L b The bridge finite element model is divided into units and numbered [1,2,3, …,10 ] in sequence =25m and the dividing unit number C=10];
The amplitude of change a in step 1.2 0 =50、a 1 =0.1;
The sampling frequency in step 1.3 is 500Hz;
the reduction coefficient δ=0.75 in step 1.4.
5. The deep learning-based cross-domain bridge damage identification method as claimed in claim 4, wherein: the step 2 specifically comprises the following steps:
step 2.1: five uncertainties are added:
(1) Simulating the influence of temperature on the bridge by changing the elastic modulus of the bridge finite element model, i.e. E' =e 0 ×(1+ζ 1), wherein ζ1 ∈(-0.05,0.05);
(2) Vertical stiffness E of boundary nodes by setting bridge finite element model v And corner stiffness E r Simulating the boundary condition form of the bridge elastic support;
(3) By varying the moment of inertia I of the bridge finite element model 0 To simulate the geometric error of bridge modeling, i.e., I' =i 0 ×(1+ζ 2), wherein ζ2 Epsilon (-0.03,0.03); at the same time by changing the density rho of the bridge finite element model 0 To simulate the material errors of bridge modeling, i.e., ρ' =ρ 0 ×(1+ζ 3), wherein ζ3 ∈(-0.02,0.02);
(4) Grade A of road surface roughness by setting bridge finite element model 0 Simulating the roughness of the bridge deck;
and (3) performing the method in the step (1.3) by adding the four uncertain bridge finite element models to obtain bridge displacement response
Figure QLYQS_16
And adding in the obtained bridge displacement response: (5) Obeying the mean value to be 0 and the variance to be sigma 2 Gaussian distribution noise gamma-N (0, sigma) 2 ) Obtain->
Figure QLYQS_17
Step 2.2: repeating the steps 1.3 to 1.4 to obtain a displacement response sample set of bridge damage and each unit damage
Figure QLYQS_18
Finally obtain the target domain dataset +.>
Figure QLYQS_19
6. The deep learning-based cross-domain bridge damage identification method as claimed in claim 5, wherein: in step 4:
the generator G θ Comprising in order l 1 Linear layers c 1 A plurality of transposed convolution layers, each linear layer and each transposed convolution layer except the last layer are added with a linear activation layer and a normalization layer, wherein the convolution kernel of the transposed convolution layer has a size of k 1 The number of convolution kernels is h 1 Step length s 1
The criticizing device D ω Comprising c in order 2 Each convolution layer/ 2 Each linear layer is added with an activation layer, a regularization layer and a maximum pooling layer except the last linear layer, wherein the convolution kernel of the convolution layer is k 2 The number of convolution kernels is h 2 Step length s 2 The convolution kernel size of the max pooling layer is k 3 Step length s 3
7. The deep learning-based cross-domain bridge damage identification method as claimed in claim 6, wherein: the step 5 specifically comprises the following steps:
step 5.1: training phase
Generating input to an antagonism network as target domain data
Figure QLYQS_20
And randomly generating a noise vector z conforming to the Gaussian distribution, and inputting the noise vector z to the generator G θ The resulting sample +.>
Figure QLYQS_21
Step 5.2: calculating interpolation data
Figure QLYQS_22
Figure QLYQS_23
Wherein epsilon obeys uniform distribution U0, 1]Target domain samples
Figure QLYQS_24
Step 5.3: sample x, interpolation sample of target field
Figure QLYQS_25
And generate sample->
Figure QLYQS_26
Input criticizing device D ω Calculating a loss function L: />
Figure QLYQS_27
Wherein lambda is the penalty weight,
Figure QLYQS_28
2-norm after differentiation for the criticizer;
step 5.4: loss function L using adaptive optimizer AdamCounter-propagating gradient descent, solving criticizer D in loss function ω Is the current optimum parameter of (a)
Figure QLYQS_29
Step 5.5: randomly generating a batch of noise sets containing p Gaussian-distribution-compliant noise sets
Figure QLYQS_30
Calculating a loss function F:
Figure QLYQS_31
step 5.6: counter-propagating gradient descent of a loss function F using an adaptive optimizer Adam, solving a generator G in the loss function θ Is the current optimum parameter of (a)
Figure QLYQS_32
Step 5.7: repeating the steps 5.3-5.6 until the criticizing device D is obtained when the loss functions L and F are converged to the optimal value ω Sum generator G θ Optimum parameters of (a)
Figure QLYQS_33
and />
Figure QLYQS_34
Step 5.8: stage of generation
Randomly generating n Gaussian-distributed noise
Figure QLYQS_35
Input already trained generator->
Figure QLYQS_36
In (2) obtaining an extended target domain sample set +.>
Figure QLYQS_37
Step 5.9: repeating steps 5.1-5.8, and the target domain data
Figure QLYQS_38
As input, a trained generator for each category is obtained>
Figure QLYQS_39
Expansion data set of each category +.>
Figure QLYQS_40
Finally, an extended data set +.>
Figure QLYQS_41
8. The deep learning-based cross-domain bridge damage identification method as claimed in claim 7, wherein: in step 6:
the feature extractor F q Comprising c in order 3 And adding an activation layer, a regularization layer and a maximum pooling layer after each convolution layer, wherein the convolution kernel of the convolution layers is k 4 Step length s 4 The convolution kernel size of the max pooling layer is k 5 Step length s 5
The tag predictor F y Comprising in order l 3 A plurality of linear layers;
the global domain discriminator F g Comprising in order l 4 A plurality of linear layers, two linear layers and an intermediate additive activation layer;
the local domain discriminator F l Comprising in order l 5 The linear layers, the two linear layers and the middle adding active layer.
9. The deep learning-based cross-domain bridge damage identification method as claimed in claim 8, wherein: the step 7 specifically comprises the following steps:
step 7.1: from a source domain dataset
Figure QLYQS_42
Randomly extracting a batch of n-containing s Sample->
Figure QLYQS_43
Sequentially through feature extractor F q And tag classifier F y Calculating a tag loss function L y
Figure QLYQS_44
wherein ,
Figure QLYQS_45
is->
Figure QLYQS_46
Is a label of (2);
step 7.2: setting domain labels
The source domain data domain tag is set to
Figure QLYQS_47
The extended target domain data domain tag is set to +.>
Figure QLYQS_48
Step 7.3: respectively from source domain data sets
Figure QLYQS_49
Randomly extracting a batch of n-containing s Sample->
Figure QLYQS_50
And +.>
Figure QLYQS_51
Randomly extracting a batch of n-containing t Sample->
Figure QLYQS_52
Sequentially through feature extractor F q And a global domain discriminator F g Computing a global domain loss function L g
Figure QLYQS_53
wherein ,Ld As a function of the cross-entropy,
Figure QLYQS_54
d i is x i Is a domain label of (2);
step 7.4: respectively from source domain data sets
Figure QLYQS_55
Randomly extracting a batch of n-containing s Sample->
Figure QLYQS_56
And +.>
Figure QLYQS_57
Randomly extracting a batch of n-containing t Sample->
Figure QLYQS_58
Sequentially through feature extractor F q And a local domain discriminator F l And calculating the local loss function of each class by the equation (6) and the equation (7)>
Figure QLYQS_59
And a local area total loss function L c :
Figure QLYQS_60
Figure QLYQS_61
wherein ,
Figure QLYQS_62
local domain discriminator for category c, < >>
Figure QLYQS_63
Cross entropy function for category c;
step 7.5: calculating the A-distance of the global domain discriminator and the local domain discriminator using the formulas (8) and (9), respectively, to obtain d g 、d l
d g =2(1-2(L g )) (8)
d l =2(1-2(L c l )) (9)
Step 7.6: calculating a dynamic factor kappa:
Figure QLYQS_64
step 7.7: to sum up the above-mentioned loss functions, an objective function M is calculated:
Figure QLYQS_65
wherein :
Figure QLYQS_66
respectively feature extractor F q Label classifier F y Global domain discriminator F g Local domain discriminator F l Parameters of (2);
step 7.8: setting a random gradient descent SGD optimizer, carrying out counter propagation gradient descent on an objective function M, and solving a feature extractor F in the objective function M q Label classifier F y Global domain discriminator F g Local domain discriminator F l Is the current optimum parameter of (a)
Figure QLYQS_67
Step 7.9: repeating steps 7.1-7.8 until the feature extractor F is obtained when the objective function M converges to an optimum q Label classifier F y Global domain discriminator F g Local domain discriminator F l Optimum parameters of (a)
Figure QLYQS_68
10. The deep learning-based cross-domain bridge damage identification method as claimed in claim 8, wherein: step 4 l 1 1, c 1 Taking 3, convolution kernel size k 1 Number of convolution kernels h =3 1 Sequentially taking 64,16,3 and step s 1 =2, the first two take-out LeakyRule functions of the activation function, and the last layer of activation function is Sigmoid function;
c 2 taking 2, convolution kernel size k 2 Number of convolution kernels h =16 2 Sequentially taking 128,64 and step s 2 =2, max pooling layer convolution kernel size k 3 =4, step size s 3 =4,l 2 Taking 2, wherein the activating function takes a LeakyRule function, and the regularization layer is a Dropout layer;
step 6 c 3 Taking 5, the convolution kernel size of the convolution layer is k 4 The value of the compound is 16,64,128,256,512, and the step length is s 4 =1, the convolution kernel size of the max pooling layer is k 5 =4, step size s 5 =2, the activation function takes the LeakyRule function.
CN202211650213.XA 2022-12-21 2022-12-21 Cross-domain bridge damage identification method based on deep learning Pending CN116049937A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211650213.XA CN116049937A (en) 2022-12-21 2022-12-21 Cross-domain bridge damage identification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211650213.XA CN116049937A (en) 2022-12-21 2022-12-21 Cross-domain bridge damage identification method based on deep learning

Publications (1)

Publication Number Publication Date
CN116049937A true CN116049937A (en) 2023-05-02

Family

ID=86122846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211650213.XA Pending CN116049937A (en) 2022-12-21 2022-12-21 Cross-domain bridge damage identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN116049937A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116611302A (en) * 2023-07-18 2023-08-18 成都理工大学 Bridge check coefficient prediction method considering vehicle-mounted randomness effect
CN117456309A (en) * 2023-12-20 2024-01-26 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Cross-domain target identification method based on intermediate domain guidance and metric learning constraint

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116611302A (en) * 2023-07-18 2023-08-18 成都理工大学 Bridge check coefficient prediction method considering vehicle-mounted randomness effect
CN116611302B (en) * 2023-07-18 2023-09-19 成都理工大学 Bridge check coefficient prediction method considering vehicle-mounted randomness effect
CN117456309A (en) * 2023-12-20 2024-01-26 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Cross-domain target identification method based on intermediate domain guidance and metric learning constraint
CN117456309B (en) * 2023-12-20 2024-03-15 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Cross-domain target identification method based on intermediate domain guidance and metric learning constraint

Similar Documents

Publication Publication Date Title
CN109711529B (en) Cross-domain federated learning model and method based on value iterative network
CN116049937A (en) Cross-domain bridge damage identification method based on deep learning
Abe Neural networks and fuzzy systems: theory and applications
Bao et al. One-dimensional convolutional neural network for damage detection of jacket-type offshore platforms
CN113281048B (en) Rolling bearing fault diagnosis method and system based on relational knowledge distillation
CN111539132B (en) Dynamic load time domain identification method based on convolutional neural network
US11709979B1 (en) Bridge damage identification method considering uncertainty
CN110059439B (en) Spacecraft orbit determination method based on data driving
Sony et al. Multiclass damage identification in a full-scale bridge using optimally tuned one-dimensional convolutional neural network
CN109885916B (en) Mixed test online model updating method based on LSSVM
Sinha et al. Estimating ocean surface currents with machine learning
Kaveh et al. An efficient two‐stage method for optimal sensor placement using graph‐theoretical partitioning and evolutionary algorithms
Ku et al. A study of the Lamarckian evolution of recurrent neural networks
CN117454124A (en) Ship motion prediction method and system based on deep learning
Wu et al. Connections between classical car following models and artificial neural networks
CN110263808B (en) Image emotion classification method based on LSTM network and attention mechanism
CN114290339B (en) Robot realistic migration method based on reinforcement learning and residual modeling
CN115545155A (en) Multi-level intelligent cognitive tracking method and system, storage medium and terminal
Zhou et al. Surrogate-assisted cooperative co-evolutionary reservoir architecture search for liquid state machines
Kundu et al. Deep learning-based metamodeling technique for nonlinear seismic response quantification
Li et al. Structural health monitoring data anomaly detection by transformer enhanced densely connected neural networks
CN116124787A (en) Autonomous multi-angle joint detection method for appearance of non-lambertian body
Faqih et al. Mackey-Glass chaotic time series prediction using modified RBF neural networks
US20220012595A1 (en) Training a student neural network to mimic a mentor neural network with inputs that maximize student-to-mentor disagreement
CN108960406B (en) MEMS gyroscope random error prediction method based on BFO wavelet neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination