WO2023115596A1 - Truss stress prediction and weight lightening method based on transfer learning fusion model - Google Patents

Truss stress prediction and weight lightening method based on transfer learning fusion model

Info

Publication number
WO2023115596A1
WO2023115596A1 (PCT/CN2021/141474)
Authority
WO
WIPO (PCT)
Prior art keywords
model
fidelity
truss
neural network
parameters
Prior art date
Application number
PCT/CN2021/141474
Other languages
French (fr)
Chinese (zh)
Inventor
彭翔
邵宇杰
李吉泉
姜少飞
Original Assignee
浙江工业大学台州研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江工业大学台州研究院
Publication of WO2023115596A1 publication Critical patent/WO2023115596A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/13 Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/23 Design optimisation, verification or simulation using finite element methods [FEM] or finite difference methods [FDM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Definitions

  • The invention relates to a finite element processing method for trusses, and in particular to a method for predicting target parameters of truss nodes and for lightweight design based on a transfer learning fusion model.
  • The truss structure is a high-performance structure that facilitates structural design. Owing to its special advantages, the truss structure is widely used in high-speed railways, airports, bridges and other structures. With the application of finite element analysis methods, optimization algorithms and computer technology, research on the optimal design of multi-bar truss structures has also made great progress.
  • The high dimensionality of the parameter space, the large search space and the numerous local optima bring challenges to the uncertainty analysis of high-dimensional truss structures. Meanwhile, most existing surrogate modeling techniques cannot scale to high-dimensional problems, and the number of training samples grows exponentially with the input dimensionality.
  • The purpose of the present invention is to solve the problem that multi-bar trusses have high-dimensional design parameters and that ordinary proxy models have low accuracy, to provide a method for constructing a fusion proxy model of the truss structure based on transfer learning, and to realize the lightweight design of the truss.
  • The multi-bar truss is built by connecting bars. Each bar is assigned its own cross-sectional area and material; different bars may be given different cross-sectional areas and materials, or the same ones.
  • Step 2): input the source data set obtained in step 1) into the finite element model of the multi-bar truss to obtain a low-fidelity model (LFM) and a high-fidelity model (HFM).
  • In step 3), the number of low-fidelity points is far greater than the number of high-fidelity points, specifically more than ten times as many.
  • In a specific implementation, the error of the fusion proxy model is calculated: model accuracy is evaluated by computing the relative root mean square error (RRMSE) and R² between the predicted values and the true values on the test set, as in the sketch below.
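The two metrics can be computed in a few lines. The sketch below assumes the RRMSE is the RMSE normalised by the standard deviation of the true values, since the patent does not spell out the exact normalisation, and the array names are placeholders.

```python
import numpy as np

def rrmse(y_true, y_pred):
    # Root-mean-square error normalised by the spread of the true values
    # (one common RRMSE convention; the exact normalisation is an assumption).
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.std(y_true)

def r2(y_true, y_pred):
    # Coefficient of determination on the test set.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# y_test, y_hat would be the true and surrogate-predicted stresses/displacements
# of the held-out test set: print(rrmse(y_test, y_hat), r2(y_test, y_hat))
```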
  • After a fusion proxy model satisfying the accuracy requirement is obtained, the actual cross-sectional areas and materials of the multi-bar truss are input into the fusion proxy model, which outputs the target data characterizing the stress prediction results and the displacements of the truss nodes; a genetic optimization algorithm then realizes the optimal design of the multi-bar truss under the node-displacement constraint, i.e., the lightweight design.
  • In step 1), the uncertainty quantification specifically sets the cross-sectional area of each bar to follow a normal distribution A_i ~ N(10^-4, 10^-5), and three candidate materials are pre-selected.
  • In a specific implementation, the three candidate materials are low-alloy steel Q235, common structural steel Q345 and 45# steel.
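A minimal sketch of how such a source data set could be sampled is given below; the interpretation of N(10^-4, 10^-5) as mean/standard deviation and the material property values are assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three candidate materials; the elastic moduli and densities are typical
# handbook values used here only as placeholders.
MATERIALS = {
    "Q235": {"E": 2.06e11, "rho": 7850.0},
    "Q345": {"E": 2.06e11, "rho": 7850.0},
    "45#":  {"E": 2.09e11, "rho": 7890.0},
}

def sample_designs(n_bars, n_samples):
    """Draw design-parameter samples: one cross-sectional area per bar from
    A_i ~ N(1e-4, 1e-5) [m^2] and one of the three candidate materials per bar."""
    areas = rng.normal(loc=1e-4, scale=1e-5, size=(n_samples, n_bars))
    areas = np.clip(areas, 1e-6, None)  # keep the sampled areas physical
    materials = rng.choice(list(MATERIALS), size=(n_samples, n_bars))
    return areas, materials

# e.g. a large batch for the low-fidelity model and a small one for the high-fidelity model
areas_lf, mats_lf = sample_designs(n_bars=10, n_samples=1000)
areas_hf, mats_hf = sample_designs(n_bars=10, n_samples=100)
```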
  • Step 2) is specifically as follows:
  • The design parameters after uncertainty quantification, together with the larger mesh size, are input into a coarse-mesh multi-bar truss finite element model to construct the low-fidelity model; in the specific implementation the coarse mesh size is 10 cm.
  • The design parameters after uncertainty quantification, together with the smaller mesh size, are input into a fine-mesh multi-bar truss finite element model to construct the high-fidelity model; in the specific implementation the fine mesh size is 0.1 cm.
  • The invention establishes a low-fidelity model and a high-fidelity model at the same time. The high-fidelity model has higher accuracy than the low-fidelity model, but the low-fidelity model can generate a large number of sample points in the design space in a shorter time and therefore has higher computational efficiency.
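In the patent the two fidelity levels come from coarse and fine meshes of the same finite element model. The sketch below is only a compact direct-stiffness solver for a planar pin-jointed truss that shows how nodal displacements and member stresses, i.e. the target quantities, can be generated for a sampled design; it does not reproduce the mesh-refinement mechanism of a commercial FE code, and all geometry and load inputs are placeholders.

```python
import numpy as np

def solve_truss(nodes, elements, E, A, loads, fixed_dofs):
    """Direct-stiffness solution of a planar pin-jointed truss.
    nodes: (n, 2) coordinates; elements: list of (i, j) node index pairs;
    E, A: per-element Young's modulus and cross-sectional area;
    loads: (2n,) global load vector; fixed_dofs: constrained DOF indices.
    Returns nodal displacements and the axial stress in every element."""
    nodes = np.asarray(nodes, dtype=float)
    n_dof = 2 * len(nodes)
    K = np.zeros((n_dof, n_dof))
    for e, (i, j) in enumerate(elements):
        dx, dy = nodes[j] - nodes[i]
        length = np.hypot(dx, dy)
        c, s = dx / length, dy / length
        t = np.array([-c, -s, c, s])
        dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
        K[np.ix_(dofs, dofs)] += (E[e] * A[e] / length) * np.outer(t, t)
    free = np.setdiff1d(np.arange(n_dof), fixed_dofs)
    u = np.zeros(n_dof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], np.asarray(loads)[free])
    stress = np.empty(len(elements))
    for e, (i, j) in enumerate(elements):
        dx, dy = nodes[j] - nodes[i]
        length = np.hypot(dx, dy)
        c, s = dx / length, dy / length
        ue = u[[2 * i, 2 * i + 1, 2 * j, 2 * j + 1]]
        stress[e] = (E[e] / length) * np.dot([-c, -s, c, s], ue)
    return u, stress
```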
  • The deep neural network (DNN) model in step 4) is a neural network composed of multiple hidden layers. Each layer contains its own parameters and is connected to the next layer; the activation function transforms the input of the j-th layer into its output signal. The j-th hidden layer is computed as z^(j) = f(W^(j) z^(j-1) + b^(j)), where W^(j) and b^(j) are the weights and biases of the DNN model before training, j denotes the position of the hidden layer, z^(j) is the output of the j-th hidden layer, L is the number of hidden layers, and f(·) is the activation function.
  • In the DNN model, the rectified linear unit (ReLU) function is used as the activation function of each layer.
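A minimal PyTorch sketch of such a fully connected network follows; the hidden-layer widths and the input/output sizes are placeholders, not values fixed by the patent.

```python
import torch.nn as nn

class TrussDNN(nn.Module):
    """Fully connected surrogate: every hidden layer computes
    z^(j) = ReLU(W^(j) z^(j-1) + b^(j)); the output layer stays linear
    because stresses and displacements are regression targets."""
    def __init__(self, n_in, n_out, hidden=(64, 64, 64)):
        super().__init__()
        layers, prev = [], n_in
        for width in hidden:
            layers += [nn.Linear(prev, width), nn.ReLU()]
            prev = width
        layers.append(nn.Linear(prev, n_out))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# e.g. 10 cross-sectional areas plus 3 loads in, nodal stresses/displacements out
model = TrussDNN(n_in=13, n_out=12)
```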
  • During the training in step 4), the loss function is used for comparative optimization: an error value is computed for each neuron in the output layer, and the model parameters θ are obtained by minimizing the loss function.
  • The model parameters θ contain the weights W^(j) and biases b^(j), and the loss function is computed as E(θ) = (1/N) Σ_{i=1}^{N} E_i, where E(θ) is the final loss function value corresponding to θ, i is the index of the training data, E_i is the loss value of the i-th group of training data (the error between the predicted output ŷ(x_i) and the true output y_i), and N is the total number of training data groups.
  • When minimizing the loss function, a stochastic gradient descent algorithm is used to solve for the weights W^(j) and biases b^(j) of the DNN model, and the parameters are updated with the adaptive moment estimation (Adam) algorithm.
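A sketch of the corresponding training loop is shown below, minimising a mean-squared loss over mini-batches with the Adam optimiser; the epoch count, learning rate, batch size and tensor names are placeholders rather than values taken from the patent.

```python
import torch

def train(model, x, y, epochs=500, lr=1e-3, batch_size=64):
    """Minimise the mean-squared loss E(theta) with mini-batch Adam updates."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    dataset = torch.utils.data.TensorDataset(x, y)
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            optimizer.step()
    return model

# Pre-training on the low-fidelity samples (x_lf, y_lf are assumed float tensors):
# model = train(model, x_lf, y_lf)
```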
  • Through the genetic optimization algorithm, the optimal design of the multi-bar truss is realized under the node-displacement constraint, calculated according to the following formula:
  • M = L_1 ρ A_1^2 + L_2 ρ A_2^2 + L_3 ρ A_3^2 + ... + L_i ρ A_i^2
  • where M is the total mass of the truss, i denotes the i-th bar group, L_i is the length of the i-th bar group, A_i is the cross-sectional area of the i-th bar group, and ρ is the material density. The weight reduction of the truss is thus realized through an adaptive genetic algorithm.
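A simplified genetic-algorithm sketch for this step follows. It uses fixed crossover and mutation rates and a displacement-penalty term rather than the adaptive operators of the patent, and `predict_max_displacement` stands for a call to the fusion proxy model; bounds and operator settings are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def truss_mass(areas, lengths, rho):
    # Objective used in the patent: M = sum_i L_i * rho * A_i^2
    return np.sum(lengths * rho * areas ** 2)

def ga_lightweight(predict_max_displacement, lengths, rho, d_max,
                   bounds=(1e-5, 3.5e-3), pop_size=60, generations=200):
    """Minimise the truss mass subject to a nodal-displacement limit d_max.
    predict_max_displacement(areas) is assumed to query the fusion proxy model."""
    lo, hi = bounds
    n = len(lengths)
    population = rng.uniform(lo, hi, size=(pop_size, n))

    def fitness(areas):
        violation = max(0.0, predict_max_displacement(areas) - d_max)
        return truss_mass(areas, lengths, rho) + 1e6 * violation  # penalty term

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in population])
        # binary tournament selection
        pairs = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(scores[pairs[:, 0]] < scores[pairs[:, 1]],
                           pairs[:, 0], pairs[:, 1])
        parents = population[winners]
        # uniform crossover with a shuffled copy, then Gaussian mutation
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random((pop_size, n)) < 0.5
        children = np.where(mask, parents, mates)
        children += rng.normal(0.0, 0.02 * (hi - lo), size=children.shape)
        population = np.clip(children, lo, hi)

    scores = np.array([fitness(ind) for ind in population])
    return population[np.argmin(scores)]
```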
  • First, the present invention extracts knowledge of the source domain from a large-scale low-fidelity data set and constructs a pre-trained model. The learned knowledge is then transferred to a new model, which is retrained with the information of the target data set (a small-scale high-fidelity data set). Finally, the hyperparameters of the retrained network, namely the learning rate, momentum factor, activation function and number of nodes, are optimized to alleviate overfitting. Such an approach can ultimately lower the barrier to using transfer learning for structural reliability analysis.
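A sketch of the transfer step in the PyTorch setting above: the network pre-trained on the low-fidelity data keeps the parameters of its first n-1 layers, the output layer is re-initialised, and the network is retrained on the small high-fidelity set. Whether the retained layers are frozen or merely warm-started is a design choice not fixed by the patent; `train` and `TrussDNN` refer to the earlier sketches.

```python
import torch.nn as nn

def build_fusion_model(pretrained, x_hf, y_hf, epochs=300, lr=1e-4, freeze=False):
    """Keep the layers learned on the low-fidelity data, re-initialise only the
    last (output) layer, then retrain on the small high-fidelity data set."""
    if freeze:
        for layer in list(pretrained.net)[:-1]:   # optionally freeze the
            for p in layer.parameters():          # retained layers
                p.requires_grad_(False)
    last = pretrained.net[-1]                     # final nn.Linear layer
    nn.init.xavier_uniform_(last.weight)          # re-initialise its weights
    nn.init.zeros_(last.bias)                     # ...and its bias
    return train(pretrained, x_hf, y_hf, epochs=epochs, lr=lr)

# fusion = build_fusion_model(model, x_hf, y_hf)  # x_hf, y_hf: ~100 HF samples
```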
  • Starting from the high dimensionality of the design parameters and the high cost of acquiring high-fidelity sampling points, the present invention applies transfer learning and pre-trains the deep neural network with low-fidelity sampling points, which preserves the generalization ability of the deep neural network while reducing the construction cost of the neural network surrogate model;
  • Compared with a traditional surrogate model, the fusion surrogate model still achieves high computational accuracy while keeping the cost low; even when the number of high-fidelity sampling points is small, a high-precision surrogate model can be constructed;
  • Fig. 1 is the flowchart of the method of the present invention;
  • Fig. 2 is a schematic diagram of the multi-bar truss of the embodiment;
  • Fig. 3 is a comparison of the fusion model and an ordinary multilayer perceptron neural network;
  • Fig. 4 is a comparison of the optimization of the number of nodes of the fusion model;
  • Fig. 5 is a comparison of the optimization of the activation function of the fusion model;
  • Fig. 6 is a comparison of the optimization of the learning function of the fusion model;
  • Fig. 7 is a schematic diagram of the 10-bar truss structure;
  • Fig. 8 is a schematic diagram of the 25-bar truss structure;
  • Fig. 9 is a schematic diagram of the 72-bar truss structure;
  • Fig. 10 is the finite element analysis contour plot of the ten-bar truss under coarse meshing;
  • Fig. 11 is the finite element analysis contour plot of the ten-bar truss under fine meshing.
  • This embodiment takes a 10-bar truss as an example and builds a multi-fidelity proxy model based on the idea of transfer learning; the method can also be applied to multi-bar truss structures such as 25-bar, 72-bar and 200-bar trusses.
  • Figure 2 shows an example of a ten-bar truss, including its geometry, loads and support conditions. The material density and elastic modulus are 0.1 lb/in^3 (2768.0 kg/m^3) and 10000 ksi (68950 MPa), respectively; the applied loads P1, P2 and P3 are shown in the figure.
  • In a machine learning problem, the data set is divided into two parts: a training set and a test set. The former is a set of examples used for learning; the latter is a new set of data used only to evaluate generalization.
  • The training set for the ten-bar truss fusion model is generated by randomly sampling the design variables (cross-sectional area and applied load) of the bar groups of the multi-bar truss. The cross-sectional area of a bar group ranges from 0.1 to 35.0 in^2 (0.6 to 225.8 cm^2), and all input data are normalized with respect to the maximum cross-sectional area of 35.0. Using finite element meshes of different density (0.1 cm^2 and 10 cm^2 as the base mesh sizes), 1000 sets of low-fidelity data and 100 sets of high-fidelity data were obtained for training the neural networks.
  • Figure 10 shows the finite element analysis contour plot of the ten-bar truss under coarse meshing, and Figure 11 shows the corresponding plot under fine meshing. The 1000 low-fidelity data sets were used for network pre-training; using the model fusion technique, the network parameters of the first (n-1) hidden layers were retained and the 100 high-fidelity data sets were used for retraining, finally yielding the fusion model of the ten-bar truss structure.
  • Figure 3 compares how the accuracy of the model fusion method (trained on 1000 low-fidelity and 100 high-fidelity data sets) and of an ordinary multilayer perceptron neural network (trained on 1000 high-fidelity data sets) improves as the number of iterations increases. It can be seen that, when building high-fidelity data sets is expensive, the model fusion method achieves high accuracy and good generalization with only a small number of high-fidelity samples.
  • In addition, the hyperparameters of the fusion model (number of nodes, activation function and learning function) can be optimized: Figures 4, 5 and 6 compare the influence of different node counts, activation functions and learning functions on the accuracy of the fusion model, further improving its computational efficiency and accuracy.
  • The optimal fusion model is obtained with 25 hidden-layer nodes, the logsig activation function and the trainlm learning function.
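logsig and trainlm are names from the MATLAB Neural Network Toolbox (the logistic-sigmoid activation and Levenberg-Marquardt training). A comparable sweep in the PyTorch setting of the earlier sketches could look like the following; the candidate values are illustrative, not the grid used in the patent.

```python
import itertools
import torch.nn as nn

# Candidate hyperparameters; "logsig" corresponds to nn.Sigmoid in this setting.
node_counts = [10, 15, 20, 25, 30]
activations = {"relu": nn.ReLU, "logsig": nn.Sigmoid, "tanh": nn.Tanh}

results = {}
for n_nodes, (name, act) in itertools.product(node_counts, activations.items()):
    candidate = TrussDNN(n_in=13, n_out=12, hidden=(n_nodes,))
    # swap every ReLU in the sketch network for the candidate activation
    candidate.net = nn.Sequential(*[act() if isinstance(m, nn.ReLU) else m
                                    for m in candidate.net])
    # pre-train on the low-fidelity set, fuse with the high-fidelity set,
    # then score on the held-out test set (functions from the earlier sketches):
    # candidate = train(candidate, x_lf, y_lf)
    # fusion = build_fusion_model(candidate, x_hf, y_hf)
    # results[(n_nodes, name)] = r2(y_test, fusion(x_test).detach().numpy())
```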
  • Under the guidance of this fusion model, the genetic optimization algorithm is used to optimize the design parameters of the ten-bar truss under the cost and node-displacement constraints to meet the lightweight design requirements.
  • The fusion model is not only suitable for building the proxy model of the ten-bar truss (shown in Figure 7), but also for multi-bar trusses such as the 25-bar and 72-bar trusses, whose structures are shown in Figures 8 and 9.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Architecture (AREA)
  • Civil Engineering (AREA)
  • Structural Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A truss stress prediction and weight lightening method based on a transfer learning fusion model. The method comprises: taking the cross-sectional area and material as design parameters and performing uncertainty quantification to establish a source data set; inputting the source data set into a finite element model of the multi-bar truss to obtain low- and high-fidelity models; randomly sampling the low- and high-fidelity models to obtain fidelity points and construct the target data; inputting the data of the low-fidelity model into a deep neural network (DNN) model for preliminary training to determine the model parameters; retaining the first (n-1) layers, initializing the last layer, retraining on the data of the high-fidelity model for correction, and optimizing the number of network nodes to obtain a fusion proxy model; and inputting actual data into the fusion proxy model for processing and outputting the target data. By pre-training the deep neural network with low-fidelity sampling points, the method reduces construction costs while guaranteeing the generalization capability of the deep neural network; even when the number of high-fidelity sampling points is small, a high-precision proxy model can be constructed, thereby improving the reliability of the lightweight design of the truss structure.

Description

A truss stress prediction and weight lightening method based on a transfer learning fusion model
Technical Field
The invention relates to a finite element processing method for trusses, and in particular to a method for predicting target parameters of truss nodes and for lightweight design based on a transfer learning fusion model.
Background Art
The truss structure is a high-performance structure that facilitates structural design. Owing to its special advantages, the truss structure is widely used in high-speed railways, airports, bridges and other structures. With the application of finite element analysis methods, optimization algorithms and computer technology, research on the optimal design of multi-bar truss structures has also made great progress. The high dimensionality of the parameter space, the large search space and the numerous local optima bring challenges to the uncertainty analysis of high-dimensional truss structures. Meanwhile, most existing surrogate modeling techniques cannot scale to high-dimensional problems, and the number of training samples grows exponentially with the input dimensionality.
Summary of the Invention
The purpose of the present invention is to solve the problem that multi-bar trusses have high-dimensional design parameters and that ordinary proxy models have low accuracy, to provide a method for constructing a fusion proxy model of the truss structure based on transfer learning, and to realize the lightweight design of the truss.
To achieve the above purpose, as shown in Figure 1, the technical scheme of the present invention is as follows:
1) For the multi-bar truss, select the cross-sectional area and material as design parameters, perform uncertainty quantification on the design parameters of the multi-bar truss, and establish the source data set.
The multi-bar truss is built by connecting bars. Each bar is assigned its own cross-sectional area and material; different bars may be given different cross-sectional areas and materials, or the same ones.
2) Input the source data set obtained in step 1) into the finite element model of the multi-bar truss to obtain a low-fidelity model (LFM) and a high-fidelity model (HFM).
3) Randomly sample the low-fidelity model and the high-fidelity model to obtain m low-fidelity points from the low-fidelity model and n high-fidelity points from the high-fidelity model, and construct the target data. The low-fidelity and high-fidelity points contain stress data and can be used to characterize the stress and the displacements of the truss nodes.
In step 3), the number of low-fidelity points is far greater than the number of high-fidelity points, specifically more than ten times as many.
4) Construct a deep neural network (DNN) model, input the source data set and target data of the low-fidelity model into the DNN model for preliminary training to determine the model parameters θ = {W^(j), b^(j)}_{L+1}, and initialize the DNN.
5) For the DNN model with the model parameters θ = {W^(j), b^(j)}_{L+1} determined in step 4), retain the network structure of the first n-1 layers, where n is the total number of layers in the DNN model, and initialize the parameters (weights and biases) of the network structure of the last layer.
Input the source data set and target data of the high-fidelity model for retraining, correct the parameters of the last network layer, compare the influence of different numbers of network nodes on the model accuracy, and obtain the optimal number of network nodes within the set range, yielding the fusion proxy model applied to the truss.
In a specific implementation, the error of the fusion proxy model is calculated: model accuracy is evaluated by computing the relative root mean square error (RRMSE) and R² between the predicted values and the true values on the test set.
6) After a fusion proxy model satisfying the accuracy requirement is obtained, input the actual cross-sectional areas and materials of the multi-bar truss into the fusion proxy model, which outputs the target data characterizing the stress prediction results and the displacements of the truss nodes; a genetic optimization algorithm then realizes the optimal design of the multi-bar truss under the node-displacement constraint, i.e., the lightweight design.
In step 1), the uncertainty quantification specifically sets the cross-sectional area of each bar to follow a normal distribution A_i ~ N(10^-4, 10^-5), and three candidate materials are pre-selected. In a specific implementation, the three candidate materials are low-alloy steel Q235, common structural steel Q345 and 45# steel.
Step 2) is specifically as follows:
Input the design parameters after uncertainty quantification together with the larger mesh size into a coarse-mesh multi-bar truss finite element model to construct the low-fidelity model; in the specific implementation the coarse mesh size is 10 cm.
Input the design parameters after uncertainty quantification together with the smaller mesh size into a fine-mesh multi-bar truss finite element model to construct the high-fidelity model; in the specific implementation the fine mesh size is 0.1 cm.
The invention establishes a low-fidelity model and a high-fidelity model at the same time. The high-fidelity model has higher accuracy than the low-fidelity model, but the low-fidelity model can generate a large number of sample points in the design space in a shorter time and therefore has higher computational efficiency.
The DNN model in step 4) is a neural network composed of multiple hidden layers. Each layer contains its own parameters and is connected to the next layer; the activation function transforms the input of the j-th layer into its output signal. The j-th hidden layer is computed as:
z^(j) = f(W^(j) z^(j-1) + b^(j))
where W^(j) and b^(j) are the weights and biases of the DNN model before training, j denotes the position of the hidden layer, z^(j) is the output of the j-th hidden layer, L is the number of hidden layers, and f(·) is the activation function.
In the DNN model, the rectified linear unit (ReLU) function is used as the activation function of each layer.
During the training in step 4), the loss function is used for comparative optimization: an error value is computed for each neuron in the output layer, and the model parameters θ, which contain the weights W^(j) and biases b^(j), are obtained by minimizing the loss function. The loss function is computed as:
E(θ) = (1/N) Σ_{i=1}^{N} E_i
where E(θ) is the final loss function value corresponding to θ, i is the index of the training data, E_i is the loss value of the i-th group of training data (the error between the predicted output ŷ(x_i) and the true output y_i), N is the total number of training data groups, y_i denotes the true output value, ŷ(x_i) the predicted output value, and x_i the true input value of the i-th group of training data.
When minimizing the loss function, a stochastic gradient descent algorithm is used to solve for the weights W^(j) and biases b^(j) of the DNN model, and the parameters are updated with the adaptive moment estimation (Adam) algorithm.
Through the genetic optimization algorithm, the optimal design of the multi-bar truss is realized under the node-displacement constraint, calculated according to the following formula:
M = L_1 ρ A_1^2 + L_2 ρ A_2^2 + L_3 ρ A_3^2 + ... + L_i ρ A_i^2
where M is the total mass of the truss, i denotes the i-th bar group, L_i is the length of the i-th bar group, A_i is the cross-sectional area of the i-th bar group, and ρ is the material density. The weight reduction of the truss is thus realized through an adaptive genetic algorithm.
First, the present invention extracts knowledge of the source domain from a large-scale low-fidelity data set and constructs a pre-trained model. The learned knowledge is then transferred to a new model, which is retrained with the information of the target data set (a small-scale high-fidelity data set). Finally, the hyperparameters of the retrained network, namely the learning rate, momentum factor, activation function and number of nodes, are optimized to alleviate overfitting. Such an approach can ultimately lower the barrier to using transfer learning for structural reliability analysis.
The beneficial effects of the present invention are as follows:
1) Starting from the high dimensionality of the design parameters and the high cost of acquiring high-fidelity sampling points, the present invention applies transfer learning and pre-trains the deep neural network with low-fidelity sampling points, which preserves the generalization ability of the deep neural network while reducing the construction cost of the neural network surrogate model;
2) Compared with a traditional surrogate model, the fusion surrogate model still achieves high computational accuracy while keeping the cost low; even when the number of high-fidelity sampling points is small, a high-precision surrogate model can be constructed;
3) The idea of transfer learning greatly reduces the burden on designers and computers, greatly improves computational efficiency, improves the reliability of the lightweight design of truss structures, and better matches practical engineering applications.
Brief Description of the Drawings
Fig. 1 is the flowchart of the method of the present invention;
Fig. 2 is a schematic diagram of the multi-bar truss of the embodiment;
Fig. 3 is a comparison of the fusion model and an ordinary multilayer perceptron neural network;
Fig. 4 is a comparison of the optimization of the number of nodes of the fusion model;
Fig. 5 is a comparison of the optimization of the activation function of the fusion model;
Fig. 6 is a comparison of the optimization of the learning function of the fusion model;
Fig. 7 is a schematic diagram of the 10-bar truss structure;
Fig. 8 is a schematic diagram of the 25-bar truss structure;
Fig. 9 is a schematic diagram of the 72-bar truss structure;
Fig. 10 is the finite element analysis contour plot of the ten-bar truss under coarse meshing;
Fig. 11 is the finite element analysis contour plot of the ten-bar truss under fine meshing.
Detailed Description of the Embodiments
The present invention is further described below with reference to the accompanying drawings and a specific implementation.
As shown in Figure 1, an embodiment of the complete method of the present invention is as follows:
This embodiment takes a 10-bar truss as an example and builds a multi-fidelity proxy model based on the idea of transfer learning; the method can also be applied to multi-bar truss structures such as 25-bar, 72-bar and 200-bar trusses.
Figure 2 shows an example of a ten-bar truss, including its geometry, loads and support conditions. The material density and elastic modulus are 0.1 lb/in^3 (2768.0 kg/m^3) and 10000 ksi (68950 MPa), respectively; the applied loads P1, P2 and P3 are shown in the figure.
In a machine learning problem, the data set is divided into two parts: a training set and a test set. The former is a set of examples used for learning; the latter is a new set of data used only to evaluate generalization.
In the present invention, the training set for the ten-bar truss fusion model is generated by randomly sampling the design variables (cross-sectional area and applied load) of the bar groups of the multi-bar truss. The cross-sectional area of a bar group ranges from 0.1 to 35.0 in^2 (0.6 to 225.8 cm^2), and all input data are normalized with respect to the maximum cross-sectional area of 35.0. Using finite element meshes of different density (0.1 cm^2 and 10 cm^2 as the base mesh sizes), 1000 sets of low-fidelity data and 100 sets of high-fidelity data were obtained for training the neural networks. The meshing schemes and the corresponding finite element analysis results are shown in Figures 10 and 11: Figure 10 shows the finite element analysis contour plot of the ten-bar truss under coarse meshing, and Figure 11 shows the corresponding plot under fine meshing. The 1000 low-fidelity data sets were used for network pre-training; using the model fusion technique, the network parameters of the first (n-1) hidden layers were retained and the 100 high-fidelity data sets were used for retraining, finally yielding the fusion model of the ten-bar truss structure.
Figure 3 compares how the accuracy of the model fusion method (trained on 1000 low-fidelity and 100 high-fidelity data sets) and of an ordinary multilayer perceptron neural network (trained on 1000 high-fidelity data sets) improves as the number of iterations increases. It can be seen that, when building high-fidelity data sets is expensive, the model fusion method achieves high accuracy and good generalization with only a small number of high-fidelity samples. In addition, the hyperparameters of the fusion model (number of nodes, activation function and learning function) can be optimized: Figures 4, 5 and 6 compare the influence of different node counts, activation functions and learning functions on the accuracy of the fusion model, further improving its computational efficiency and accuracy. The optimal fusion model is obtained with 25 hidden-layer nodes, the logsig activation function and the trainlm learning function. Under the guidance of this fusion model, the genetic optimization algorithm is used to optimize the design parameters of the ten-bar truss under the cost and node-displacement constraints to meet the lightweight design requirements. Moreover, the fusion model is not only suitable for building the proxy model of the ten-bar truss (shown in Figure 7), but also for multi-bar trusses such as the 25-bar and 72-bar trusses, whose structures are shown in Figures 8 and 9.

Claims (6)

  1. A truss stress prediction and weight lightening method based on a transfer learning fusion model, characterized in that:
    1) for the multi-bar truss, the cross-sectional area and material are selected as design parameters, uncertainty quantification is performed on the design parameters of the multi-bar truss, and a source data set is established;
    2) the source data set obtained in step 1) is input into the finite element model of the multi-bar truss to obtain a low-fidelity model and a high-fidelity model;
    3) the low-fidelity model and the high-fidelity model are randomly sampled to obtain m low-fidelity points from the low-fidelity model and n high-fidelity points from the high-fidelity model, and the target data are constructed;
    4) a deep neural network (DNN) model is constructed, and the source data set and target data of the low-fidelity model are input into the DNN model for preliminary training to determine the model parameters θ = {W^(j), b^(j)}_{L+1};
    5) for the DNN model with the model parameters θ = {W^(j), b^(j)}_{L+1} determined in step 4), the network structure of the first n-1 layers is retained, where n is the total number of layers in the DNN model, and the parameters of the network structure of the last layer are initialized;
    the source data set and target data of the high-fidelity model are input for retraining, the parameters of the last network layer are corrected, the influence of different numbers of network nodes on the model accuracy is compared, and the optimal number of network nodes is obtained within the set range, yielding the fusion proxy model applied to the truss;
    6) after the fusion proxy model is obtained, the actual cross-sectional areas and materials of the multi-bar truss are input into the fusion proxy model, which outputs the target data characterizing the stress prediction results and the displacements of the truss nodes; the genetic optimization algorithm then realizes the optimal design of the multi-bar truss under the node-displacement constraint.
  2. The truss stress prediction and weight lightening method based on a transfer learning fusion model according to claim 1, characterized in that: in step 1), the uncertainty quantification specifically sets the cross-sectional area of each bar to follow a normal distribution A_i ~ N(10^-4, 10^-5), and three candidate materials are pre-selected.
  3. The truss stress prediction and weight lightening method based on a transfer learning fusion model according to claim 1, characterized in that step 2) specifically comprises:
    inputting the design parameters after uncertainty quantification together with the larger mesh size into a coarse-mesh multi-bar truss finite element model to construct the low-fidelity model;
    inputting the design parameters after uncertainty quantification together with the smaller mesh size into a fine-mesh multi-bar truss finite element model to construct the high-fidelity model.
  4. The truss stress prediction and weight lightening method based on a transfer learning fusion model according to claim 1, characterized in that: the DNN model in step 4) is a neural network composed of multiple hidden layers, each layer of which contains its own parameters and is connected to the next layer; the activation function transforms the input of the j-th layer into its output signal, and the j-th hidden layer is computed as:
    z^(j) = f(W^(j) z^(j-1) + b^(j))
    where W^(j) and b^(j) are the weights and biases of the DNN model, j denotes the position of the hidden layer, z^(j) is the output of the j-th hidden layer, L is the number of hidden layers, and f(·) is the activation function.
  5. The truss stress prediction and weight lightening method based on a transfer learning fusion model according to claim 1, characterized in that: during the training in step 4), the loss function is used for comparative optimization, an error value is computed for each neuron in the output layer, and the model parameters θ, which contain the weights W^(j) and biases b^(j), are obtained by minimizing the loss function; the loss function is computed as:
    E(θ) = (1/N) Σ_{i=1}^{N} E_i
    where E(θ) is the final loss function value corresponding to θ, i is the index of the training data, E_i is the loss value of the i-th group of training data, N is the total number of training data groups, y_i denotes the true output value, ŷ(x_i) the predicted output value, and x_i the true input value of the i-th group of training data;
    when minimizing the loss function, a stochastic gradient descent algorithm is used to solve for the weights W^(j) and biases b^(j) of the DNN model, and the parameters are updated with the adaptive moment estimation (Adam) algorithm.
  6. The truss stress prediction and weight lightening method based on a transfer learning fusion model according to claim 1, characterized in that: through the genetic optimization algorithm, the optimal design of the multi-bar truss is realized under the node-displacement constraint, calculated according to the following formula:
    M = L_1 ρ A_1^2 + L_2 ρ A_2^2 + L_3 ρ A_3^2 + ... + L_i ρ A_i^2
    where M is the total mass of the truss, i denotes the i-th bar group, L_i is the length of the i-th bar group, A_i is the cross-sectional area of the i-th bar group, and ρ is the material density.
PCT/CN2021/141474 2021-12-21 2021-12-27 Truss stress prediction and weight lightening method based on transfer learning fusion model WO2023115596A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111572929.8A CN114239114A (en) 2021-12-21 2021-12-21 Truss stress prediction and lightweight method based on transfer learning fusion model
CN202111572929.8 2021-12-21

Publications (1)

Publication Number Publication Date
WO2023115596A1 true WO2023115596A1 (en) 2023-06-29

Family

ID=80760475

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/141474 WO2023115596A1 (en) 2021-12-21 2021-12-27 Truss stress prediction and weight lightening method based on transfer learning fusion model

Country Status (2)

Country Link
CN (1) CN114239114A (en)
WO (1) WO2023115596A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116839783A (en) * 2023-09-01 2023-10-03 华东交通大学 Method for measuring stress value and deformation of automobile leaf spring based on machine learning
CN116976011A (en) * 2023-09-21 2023-10-31 中国空气动力研究与发展中心计算空气动力研究所 Low-high fidelity pneumatic data characteristic association depth composite network model and method
CN117275220A (en) * 2023-08-31 2023-12-22 云南云岭高速公路交通科技有限公司 Mountain expressway real-time accident risk prediction method based on incomplete data
CN117392613A (en) * 2023-12-07 2024-01-12 武汉纺织大学 Power operation safety monitoring method based on lightweight network
CN117709536A (en) * 2023-12-18 2024-03-15 东北大学 Accurate prediction method and system for deep recursion random configuration network industrial process
CN118133431A (en) * 2024-04-30 2024-06-04 北京航空航天大学 Multi-source data fusion type aircraft wing surface structure load identification method
CN118228369A (en) * 2024-05-24 2024-06-21 深圳联丰建设集团有限公司 Intelligent optimization method, device, equipment and storage medium for steel structure engineering

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114919819B (en) * 2022-06-01 2023-06-06 中迪机器人(盐城)有限公司 Automatic control method and system for steel belt film sticking
CN116776748B (en) * 2023-08-18 2023-11-03 中国人民解放军国防科技大学 Throat bolt type variable thrust engine throat bolt spray pipe configuration design knowledge migration optimization method
CN117973268B (en) * 2024-03-29 2024-06-07 中国空气动力研究与发展中心超高速空气动力研究所 Flow field multisource pneumatic data fusion model based on semi-supervised learning and training method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109102126A (en) * 2018-08-30 2018-12-28 燕山大学 One kind being based on depth migration learning theory line loss per unit prediction model
WO2020263358A1 (en) * 2019-06-24 2020-12-30 Nanyang Technological University Machine learning techniques for estimating mechanical properties of materials
CN112182938A (en) * 2020-10-13 2021-01-05 上海交通大学 Mesoscopic structural part mechanical property prediction method based on transfer learning-multi-fidelity modeling
CN113240117A (en) * 2021-06-01 2021-08-10 大连理工大学 Variable fidelity transfer learning model establishing method
CN113408703A (en) * 2021-06-29 2021-09-17 中国科学院自动化研究所 Multi-modal big data machine automatic learning system based on nerves and symbols

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109102126A (en) * 2018-08-30 2018-12-28 燕山大学 One kind being based on depth migration learning theory line loss per unit prediction model
WO2020263358A1 (en) * 2019-06-24 2020-12-30 Nanyang Technological University Machine learning techniques for estimating mechanical properties of materials
CN112182938A (en) * 2020-10-13 2021-01-05 上海交通大学 Mesoscopic structural part mechanical property prediction method based on transfer learning-multi-fidelity modeling
CN113240117A (en) * 2021-06-01 2021-08-10 大连理工大学 Variable fidelity transfer learning model establishing method
CN113408703A (en) * 2021-06-29 2021-09-17 中国科学院自动化研究所 Multi-modal big data machine automatic learning system based on nerves and symbols

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117275220A (en) * 2023-08-31 2023-12-22 云南云岭高速公路交通科技有限公司 Mountain expressway real-time accident risk prediction method based on incomplete data
CN116839783A (en) * 2023-09-01 2023-10-03 华东交通大学 Method for measuring stress value and deformation of automobile leaf spring based on machine learning
CN116839783B (en) * 2023-09-01 2023-12-08 华东交通大学 Method for measuring stress value and deformation of automobile leaf spring based on machine learning
CN116976011A (en) * 2023-09-21 2023-10-31 中国空气动力研究与发展中心计算空气动力研究所 Low-high fidelity pneumatic data characteristic association depth composite network model and method
CN116976011B (en) * 2023-09-21 2023-12-15 中国空气动力研究与发展中心计算空气动力研究所 Low-high fidelity pneumatic data characteristic association depth composite network model and method
CN117392613A (en) * 2023-12-07 2024-01-12 武汉纺织大学 Power operation safety monitoring method based on lightweight network
CN117392613B (en) * 2023-12-07 2024-03-08 武汉纺织大学 Power operation safety monitoring method based on lightweight network
CN117709536A (en) * 2023-12-18 2024-03-15 东北大学 Accurate prediction method and system for deep recursion random configuration network industrial process
CN117709536B (en) * 2023-12-18 2024-05-14 东北大学 Accurate prediction method and system for deep recursion random configuration network industrial process
CN118133431A (en) * 2024-04-30 2024-06-04 北京航空航天大学 Multi-source data fusion type aircraft wing surface structure load identification method
CN118228369A (en) * 2024-05-24 2024-06-21 深圳联丰建设集团有限公司 Intelligent optimization method, device, equipment and storage medium for steel structure engineering

Also Published As

Publication number Publication date
CN114239114A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
WO2023115596A1 (en) Truss stress prediction and weight lightening method based on transfer learning fusion model
Afshari et al. Machine learning-based methods in structural reliability analysis: A review
Do et al. Material optimization of functionally graded plates using deep neural network and modified symbiotic organisms search for eigenvalue problems
Ran et al. Study on deformation prediction of landslide based on genetic algorithm and improved BP neural network
CN112084727A (en) Transition prediction method based on neural network
Mai et al. Physics-informed neural energy-force network: a unified solver-free numerical simulation for structural optimization
Xu et al. Short‐term traffic flow prediction based on whale optimization algorithm optimized BiLSTM_Attention
Kookalani et al. Structural analysis of GFRP elastic gridshell structures by particle swarm optimization and least square support vector machine algorithms
YiFei et al. Metamodel-assisted hybrid optimization strategy for model updating using vibration response data
CN117910120A (en) Buffeting response prediction method for wind-bridge system based on lightweight transducer
Zaheer et al. A review on developing optimization techniques in civil engineering
Naik et al. Indian monsoon rainfall classification and prediction using robust back propagation artificial neural network
Nguyen et al. Predicting shear strength of slender beams without reinforcement using hybrid gradient boosting trees and optimization algorithms
Qu et al. Improving parking occupancy prediction in poor data conditions through customization and learning to learn
Chong et al. Comparing data-driven and conventional airfoil shape design optimization
Qiu et al. Air traffic flow of genetic algorithm to optimize wavelet neural network prediction
El Mourabit Optimization of concrete beam bridges: development of software for design automation and cost optimization
Sun et al. Study on Form‐Finding of Cable‐Membrane Structures Based on Particle Swarm Optimization Algorithm
Zhang et al. Short-term traffic flow prediction model based on deep learning regression algorithm
Xiong et al. A new adaptive multi-fidelity metamodel method using meta-learning and Bayesian deep learning
Zhang et al. Ultimate axial strength prediction of concrete-filled double-skin steel tube columns using soft computing methods
Liu et al. Artificial Neural Network‐Based Method for Seismic Analysis of Concrete‐Filled Steel Tube Arch Bridges
Mučenski et al. Estimation of recycling capacity of multistorey building structures using artificial neural networks
Nikolos On the use of multiple surrogates within a differential evolution procedure for high-lift airfoil design
Kong et al. Hybrid machine learning with optimization algorithm and resampling methods for patch load resistance prediction of unstiffened and stiffened plate girders

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21968725

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE