CN114970344A - Packed tower pressure drop prediction method based on width migration learning - Google Patents


Info

Publication number
CN114970344A
Authority
CN
China
Prior art keywords
data
model
pressure drop
packed tower
learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210576257.6A
Other languages
Chinese (zh)
Inventor
刘毅
朱佳良
邓鸿英
高增梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT
Priority to CN202210576257.6A
Publication of CN114970344A
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G06F30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/082 - Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/04 - Manufacturing
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Marketing (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Primary Health Care (AREA)
  • Evolutionary Biology (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Manufacturing & Machinery (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A method for predicting the pressure drop of a packed tower based on width migration learning comprises the following steps: (1) process data are acquired under different spray densities, giving several groups of packed tower operating-condition data; the data of one operating condition, which has a larger amount of labeled data, are taken as the source domain, and the other operating conditions, which have only a small amount of labeled data, are taken as target domains. (2) A source-domain adaptation model, BLS-SDA, is constructed on the basis of the broad learning system (BLS), and the labeled data of the source domain together with the small labeled data set of the target domain are fed into the model for training. (3) The target-domain test data are predicted with the trained BLS-SDA model. Based on the broad learning framework, the method establishes a transfer model with source-domain adaptation to predict the pressure drop variable under different spray densities; the method is used to identify the flooding state of the packed tower, and the modeling and prediction of the target-domain data are improved with the assistance of the source-domain labeled data.

Description

Packed tower pressure drop prediction method based on width migration learning
Technical Field
The invention belongs to the field of modeling and online prediction of important parameters of a packed tower, and particularly relates to a packed tower pressure drop prediction method based on width migration learning.
Background
The packed tower is an important piece of gas-liquid mass-transfer equipment. In the production process, to maximize benefit, the packed tower usually has to be operated near its point of highest efficiency; however, flooding tends to occur near this point. Once flooding occurs, the efficiency drops rapidly, off-specification products are produced, and in severe cases the tower equipment may stop working and instruments may be damaged. To keep the packed tower operating stably at high efficiency, it is necessary to identify in real time whether flooding occurs in the tower. Many factors influence the operation of the packed tower; among them, the pressure drop is an important parameter for studying flooding. When the process is far from flooding, the pressure drop changes smoothly as the air flow rate is adjusted, whereas when flooding begins the pressure drop fluctuates violently and irregularly, so this behavior can be used to judge whether flooding has occurred.
The important flooding variable can be measured by hardware sensors in order to optimize this parameter; by contrast, data-driven methods can predict the key variable from related variables, have the advantages of quick response and low cost, require no complex background knowledge, can represent the nonlinear relationships of the process well, and are therefore widely applied. Common soft-sensing methods include the extreme learning machine (ELM), the support vector machine (SVM), Gaussian process regression (GPR) and neural networks. However, the process characteristics differ under different spray densities, and a model established under the current spray density cannot be applied well to variable prediction under other conditions.
In view of this situation, the invention introduces transfer learning into the prediction of variables that reflect the flooding state of the packed tower. In the invention, the pressure drop is selected as the key variable and is predicted by modeling the other variables. Transfer learning is an emerging machine-learning method that allows the data to follow different distributions; it extracts useful knowledge from a field with sufficient labeled data to assist modeling in the field to be predicted. In particular, for an operating condition to be predicted that has only a few labels, where a traditional modeling method cannot establish a good prediction model from such a small amount of labeled data, transfer learning can migrate information from a related field into the current field to assist modeling of the target field.
Disclosure of Invention
The invention is directed at the production process of a packed tower and at the problem of identifying its flooding state under different production operating conditions. The invention provides a method for predicting the pressure drop variable of the packed tower based on a source-domain adaptation width learning algorithm, BLS-SDA (broad learning system based on source-domain adaptation), in which the real-time flooding state is reflected by the pressure drop variable. By introducing transfer learning, labeled data from related operating conditions can be exploited and, compared with a model established under a single operating condition, the generalization performance of the model is improved.
The technical scheme of the invention is as follows:
a method for predicting pressure drop of a packed tower based on width migration learning comprises the following steps:
1) data acquisition and processing: selecting variable parameters, acquiring data corresponding to the variable parameters under different working conditions, and processing the data;
2) data set partitioning: selecting data of one working condition as a source domain and data of other working conditions as a target domain from standardized data sets of different working conditions;
3) modeling and training: constructing a width learning BLS-SDA model based on source domain adaptation, and training the constructed model;
4) model prediction: and predicting by a trained BLS-SDA model, and evaluating the model.
Further, in step 1), in order to eliminate the dimensional differences between the data of different operating conditions, the data are standardized according to the formula

X_new = (X - μ) / σ

wherein X_new is the standardized data, X is the raw, unstandardized data, μ is the mean of the data and σ is the standard deviation of the data.
Further, in step 2), all the labeled data of the source domain are used as a training set, and the target-domain data set is divided into a training set and a test set; the training set consists of labeled data, and the test set is fed into the model as unlabeled data for prediction.
Further, the process of step 3) is as follows: a width learning network with source-domain adaptation, BLS-SDA, is constructed, and the labeled training data of the source domain together with the small labeled training data set of the target domain are fed into the model for training; in order to prevent over-fitting of the model, an output-weight constraint based on an L2 regularization term is added to the loss function; the output weights of the network are then solved by the method of Lagrange multipliers.
Further, in step 4), the root-mean-square error (RMSE) is used as the evaluation index, calculated as

RMSE = sqrt( (1/k) Σ_{i=1}^{k} (ŷ_i - y_i)² )

wherein ŷ_i represents the true data, y_i represents the output of the model, and k represents the number of samples in the test set.

By adopting the above technique, the beneficial effects of the invention compared with the prior art are as follows: the method introduces transfer learning into the prediction of the pressure drop variable during flooding of the packed tower, for identifying the flooding state, and it can overcome the poor generalization ability, under different spray densities, of a model established under a single spray density. In the case where the target domain has only a small number of labels, the key variable of the target domain is predicted with the assistance of the labeled data of the source domain, the prediction performance of the model across operating conditions is improved, and the flooding state is identified from the predicted value of the pressure drop.
Drawings
FIG. 1 is a data distribution diagram of different working conditions of the present invention
FIG. 2 is a diagram of a width learning architecture of the present invention
FIG. 3 is a diagram of a source domain adaptation-based breadth learning architecture in the present invention
FIG. 4 is a diagram of predicted results of operating conditions 1 to 2 according to an embodiment of the present invention
FIG. 5 is a prediction error map for condition 1 to condition 2, according to an embodiment of the present invention
FIG. 6 is a diagram of predicted results for operating conditions 1 through 3 according to an embodiment of the present invention
FIG. 7 is a prediction error map for condition 1 to condition 3, in accordance with an embodiment of the present invention
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
On the contrary, the invention is intended to cover the alternatives, modifications and equivalents that may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, certain specific details are set forth in order to provide a better understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details.
Referring to fig. 1-7, a method for predicting pressure drop of a packed tower based on width migration learning includes the following steps:
1) Data acquisition and processing: because the correlation between the different variables and the occurrence of flooding differs, variables with low correlation with the occurrence of flooding can be removed; in this embodiment only the inlet air flow rate and the inlet water flow rate are selected as the main modeling variables. Among the different spray densities, experimental data at three spray densities, 38 m³/(m²·s), 39 m³/(m²·s) and 40 m³/(m²·s), are selected and recorded as operating condition 1, operating condition 2 and operating condition 3, respectively. The distribution of the data of the different operating conditions is plotted in FIG. 1, from which it can be seen that certain differences exist between the data distributions under the different operating conditions. In order to eliminate the dimensional differences between the data of different operating conditions, the data need to be standardized according to the formula

X_new = (X - μ) / σ

wherein X_new is the standardized data, X is the raw, unstandardized data, μ is the mean of the data and σ is the standard deviation of the data.
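A minimal sketch of this standardization step in Python/NumPy is given below for illustration; it is not part of the patent text, and the assumption that each operating condition is stored as an array with samples in rows and the two modeling variables (inlet air flow rate, inlet water flow rate) in columns is ours.

```python
import numpy as np

def standardize(X, mu=None, sigma=None):
    """Column-wise z-score standardization: X_new = (X - mu) / sigma."""
    if mu is None:
        mu = X.mean(axis=0)
    if sigma is None:
        sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

# Placeholder data for one operating condition:
# rows = samples, columns = (inlet air flow rate, inlet water flow rate).
X_cond1 = np.random.rand(500, 2)
X_cond1_std, mu1, sigma1 = standardize(X_cond1)
```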
2) Dividing the data set: from the standardized data sets of the different operating conditions, one operating condition is selected as the source domain and the other operating conditions as target domains. In this embodiment, operating condition 1 is selected as the source domain and each of the other operating conditions in turn as the target domain. Following the principle that the amount of source-domain data should be far larger than that of target-domain data, all the labeled data of the source domain are used as a training set, while the target-domain data set is divided into a training set and a test set in the ratio 1:9; the training set consists of labeled data, and the test set is fed into the model as unlabeled data for prediction.
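As a companion sketch (again only illustrative and not from the patent), the 1:9 target-domain train/test split described above could be implemented as follows; the variable names and the fixed random seed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_target(X_t, y_t, train_fraction=0.1):
    """Split target-domain data into a labeled training part (~1/10) and a test part (~9/10)."""
    n = X_t.shape[0]
    idx = rng.permutation(n)
    n_train = int(round(train_fraction * n))
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    return X_t[train_idx], y_t[train_idx], X_t[test_idx], y_t[test_idx]

# Source domain (operating condition 1): all labeled samples are used for training.
# Target domain (operating condition 2 or 3): only about 10% of the labels are used.
X_t = np.random.rand(500, 2)      # placeholder target-domain inputs
y_t = np.random.rand(500, 1)      # placeholder target-domain pressure-drop labels
X_t_tr, y_t_tr, X_t_te, y_t_te = split_target(X_t, y_t)
```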
3) Modeling and training: BLS-SDA is a transfer-learning algorithm based on source-domain adaptation and is constructed mainly on the width learning (broad learning system, BLS) algorithm. The broad learning system is an efficient incremental learning system without a deep structure; it has only one hidden layer, composed of feature nodes and enhancement nodes, both of which are connected directly to the output layer. The weights from the hidden layer to the output layer are computed by a ridge-regression algorithm, and the structure is shown in FIG. 2. The BLS feature mapping can be expressed as

Z_i = φ_i(X W_ei + β_ei), i = 1, 2, 3, ..., n

wherein X is the input data of the network, Z_i is the feature learned by the ith group of feature nodes, φ_i is the feature transformation, W_ei and β_ei are randomly generated weights and biases, and n is the number of feature-node groups. Because the extracted features are random, a sparse autoencoder is used to adjust the input weights of the feature layer, after which the features of the feature layer are obtained. The total feature extracted by the different feature-node groups is

Z^n = [Z_1, Z_2, ..., Z_n]
After the features have been extracted by the feature layer, the enhancement layer extracts features from them again:

H_j = ζ_j(Z^n W_hj + β_hj), j = 1, 2, ..., m

wherein ζ_j, W_hj and β_hj are respectively the activation function, the random weight and the bias of the enhancement nodes, and m is the number of enhancement nodes. The total feature extracted by the enhancement nodes is

H^m = [H_1, ..., H_m]
The total features extracted by the feature nodes and the enhancement nodes are combined to give the hidden-layer output feature G:

G = [Z^n | H^m]

wherein G ∈ R^{N×(nq+m)}, N is the number of samples and q is the number of nodes in each feature-node group. Let β denote the weights between the hidden layer and the output layer; in order to reduce the complexity of the model, β is constrained by its two-norm, and the specific loss function is

min_β (1/2)||β||² + (γ/2) Σ_{i=1}^{N} ξ_i²
s.t. G(x_i) β = y_i - ξ_i, i = 1, 2, ..., N

wherein ξ_i, G(x_i) and y_i are respectively the prediction error, the learned hidden-layer feature and the output label of the ith training sample x_i, and γ is the penalty parameter of the model.
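For illustration only, a rough NumPy sketch of this width-learning feature construction and the ridge-regression solution of the above loss is given below. It is a simplified reading of the description, not the patented implementation: the node counts, the use of the identity mapping for φ_i, the tanh activation for ζ_j, and the omission of the sparse-autoencoder adjustment of W_ei are all simplifying assumptions.

```python
import numpy as np

def bls_hidden_layer(X, n_groups=10, q=10, m=100, seed=0):
    """Build the BLS hidden-layer output G = [Z^n | H^m] for an input matrix X (N x d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Feature nodes: Z_i = phi_i(X W_ei + beta_ei); phi_i is taken as the identity here.
    Z_groups = []
    for _ in range(n_groups):
        W_e = rng.standard_normal((d, q))
        beta_e = rng.standard_normal(q)
        Z_groups.append(X @ W_e + beta_e)
    Zn = np.hstack(Z_groups)                      # N x (n*q)
    # Enhancement nodes: H_j = zeta_j(Z^n W_hj + beta_hj); zeta_j is taken as tanh here.
    W_h = rng.standard_normal((Zn.shape[1], m))
    beta_h = rng.standard_normal(m)
    Hm = np.tanh(Zn @ W_h + beta_h)               # N x m
    return np.hstack([Zn, Hm])                    # G, shape N x (n*q + m)

def bls_output_weights(G, Y, gamma=2.0 ** 10):
    """Ridge solution of the constrained BLS loss: beta = (I / gamma + G^T G)^{-1} G^T Y."""
    L = G.shape[1]
    return np.linalg.solve(np.eye(L) / gamma + G.T @ G, G.T @ Y)
```

With the same random seed, repeated calls to bls_hidden_layer reproduce the same random mapping, so training and test data are projected consistently.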
Based on the BLS algorithm, BLS-SDA models jointly a large amount of labeled source-domain data and a small amount of labeled target-domain data; it can migrate the information of the source-domain data into the target-domain data and establish a good model with the assistance of the source-domain data. On the basis of BLS, BLS-SDA feeds the source-domain training data and the target-domain training data into the model together and uses the prediction loss on the source-domain data and the prediction loss on the target-domain data jointly as constraints of the model; the structure is shown in FIG. 3. The optimization objective function l_BLS-SDA of BLS-SDA is given by the following formula.
l_BLS-SDA = min_β (1/2)||β||² + (λ_s/2) Σ_{i=1}^{N_s} (ξ_s^i)² + (λ_t/2) Σ_{j=1}^{N_t} (ξ_t^j)²
s.t. H_s^i β = y_s^i - ξ_s^i, i = 1, 2, ..., N_s
     H_t^j β = y_t^j - ξ_t^j, j = 1, 2, ..., N_t

where the subscript s denotes the source domain and t denotes the target domain; H_s^i, ξ_s^i and y_s^i are respectively the hidden-layer output, the model prediction error and the true label of the ith sample of the source domain, and H_t^j, ξ_t^j and y_t^j are respectively those of the jth sample of the target domain; N_s and N_t are the numbers of training samples of the source-domain data and the target-domain data; β is the output weight of the network; λ_s and λ_t are respectively the prediction-error penalty factors of the source-domain and target-domain training samples.
The output weight is then obtained from the Lagrangian of the above problem. When the number of training samples is larger than the number of hidden-layer neurons, the final output weight w of the model is

w = (I + λ_s H_s^T H_s + λ_t H_t^T H_t)^{-1} (λ_s H_s^T y_s + λ_t H_t^T y_t)

and when the number of training samples is smaller than the number of hidden-layer neurons, the output weight is

w = H^T (D^{-1} + H H^T)^{-1} y

wherein I is the identity matrix, H = [H_s; H_t] and y = [y_s; y_t] are the hidden-layer outputs and labels of the source-domain and target-domain training samples stacked by rows, and D = diag(λ_s I_{N_s}, λ_t I_{N_t}).
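A sketch of this output-weight computation is given below (illustrative only; the function name, the default penalty values and the automatic switch between the primal and dual forms are our assumptions, and H_s, H_t would be the hidden-layer outputs produced, for example, by the bls_hidden_layer sketch above).

```python
import numpy as np

def bls_sda_weights(H_s, y_s, H_t, y_t, lam_s=1.0, lam_t=10.0):
    """Solve for the BLS-SDA output weight w from source/target hidden-layer outputs and labels."""
    n_samples = H_s.shape[0] + H_t.shape[0]
    n_hidden = H_s.shape[1]
    if n_samples >= n_hidden:
        # Primal form: w = (I + lam_s Hs^T Hs + lam_t Ht^T Ht)^{-1} (lam_s Hs^T ys + lam_t Ht^T yt)
        A = np.eye(n_hidden) + lam_s * H_s.T @ H_s + lam_t * H_t.T @ H_t
        b = lam_s * H_s.T @ y_s + lam_t * H_t.T @ y_t
        return np.linalg.solve(A, b)
    # Dual form: w = H^T (D^{-1} + H H^T)^{-1} y with H = [Hs; Ht], y = [ys; yt]
    H = np.vstack([H_s, H_t])
    y = np.vstack([y_s, y_t])
    d_inv = np.concatenate([np.full(H_s.shape[0], 1.0 / lam_s),
                            np.full(H_t.shape[0], 1.0 / lam_t)])
    return H.T @ np.linalg.solve(np.diag(d_inv) + H @ H.T, y)
```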
4) Model prediction: the trained BLS-SDA model is used to predict the test data of the target domain, and the prediction results are compared with the true values of the test data. The root-mean-square error (RMSE) is used as the evaluation index, calculated as

RMSE = sqrt( (1/k) Σ_{i=1}^{k} (ŷ_i - y_i)² )

wherein ŷ_i represents the true data, y_i represents the output of the model, and k represents the number of samples in the test set.
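A short sketch of this evaluation step, chaining the hypothetical helpers from the sketches above (names and workflow are illustrative, not from the patent):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error over the k test samples."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical end-to-end use of the earlier sketches:
# G_s   = bls_hidden_layer(X_s_std)              # source-domain hidden-layer outputs
# G_t   = bls_hidden_layer(X_t_tr_std)           # labeled target-domain hidden-layer outputs
# w     = bls_sda_weights(G_s, y_s, G_t, y_t_tr)
# y_hat = bls_hidden_layer(X_t_te_std) @ w       # predictions on the target-domain test set
# print(rmse(y_t_te, y_hat))
```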
Table 1 shows the prediction results of the BLS and BLS-SDA algorithms; from these results it can be seen that, in terms of prediction loss, the transfer-learning method has an advantage over conventional modeling. The prediction results are shown in FIG. 4 and FIG. 6, and the prediction errors in FIG. 5 and FIG. 7. From the prediction-error plots it can be seen that the prediction errors of both models grow, on the whole, from the beginning to the end of the sample sequence, mainly because no flooding occurs in the initial stage and the pressure drop is relatively stable, whereas once flooding occurs the pressure drop exhibits irregular jitter, so that the correlation between the pressure drop and the process variables decreases. Both before and during flooding, the pressure drop prediction error of the width-learning-based transfer method is, on the whole, smaller than that of the prediction based on the plain width learning algorithm. Thus, when the target domain is assumed to have only a small amount of labeled data, the modeling and prediction of the target-domain data are improved with the assistance of the labeled data of the source domain.
The method introduces transfer learning into the prediction of the pressure drop variable during the flooding process of the packed tower, for identifying the flooding state. Because of the differences between the data of different operating conditions, a model established under a single operating condition cannot predict the other operating conditions well; with the assistance of the labeled data of the source domain, transfer learning can predict the key variable of the target domain even when the target domain has only a small number of labels, and compared with the traditional model it improves the prediction of the pressure drop variable that characterizes the flooding state of the packed tower.
TABLE 1: BLS-SDA and BLS results on packed-column pressure drop variable prediction (the table itself is provided as an image in the original publication).
The embodiments described in this specification are merely illustrative of the inventive concept; the scope of the present invention should not be considered limited to the specific forms set forth in the embodiments, but also covers the equivalents that may occur to those skilled in the art on the basis of the inventive concept.

Claims (5)

1. A method for predicting pressure drop of a packed tower based on width migration learning is characterized by comprising the following steps:
1) data acquisition and processing: selecting variable parameters, acquiring data corresponding to the variable parameters under different working conditions, and processing the data;
2) data set partitioning: selecting data of one working condition as a source domain and data of other working conditions as a target domain from standardized data sets of different working conditions;
3) modeling and training: constructing a width learning BLS-SDA model based on source domain adaptation, and training the constructed model;
4) model prediction: and predicting by a trained BLS-SDA model, and evaluating the model.
2. The method for predicting the pressure drop of a packed tower based on width migration learning according to claim 1, wherein in step 1), in order to eliminate the dimensional differences between the data of different operating conditions, the data are standardized according to the formula

X_new = (X - μ) / σ

wherein X_new is the standardized data, X is the raw, unstandardized data, μ is the mean of the data and σ is the standard deviation of the data.
3. The method for predicting the pressure drop of a packed tower based on width migration learning according to claim 2, wherein in step 2), all the labeled data of the source domain are used as a training set, and the target-domain data set is divided into a training set and a test set; the training set consists of labeled data, and the test set is fed into the model as unlabeled data for prediction.
4. The method for predicting the pressure drop of a packed tower based on width migration learning according to claim 3, wherein step 3) is implemented as follows: a width learning network with source-domain adaptation, BLS-SDA, is constructed, and the labeled training data of the source domain together with the small labeled training data set of the target domain are fed into the model for training; in order to prevent over-fitting of the model, an output-weight constraint based on an L2 regularization term is added to the loss function; the output weights of the network are then solved by the method of Lagrange multipliers.
5. The method for predicting the pressure drop of a packed tower based on width migration learning according to claim 4, wherein in step 4), the root-mean-square error (RMSE) is used as the evaluation index, calculated as

RMSE = sqrt( (1/k) Σ_{i=1}^{k} (ŷ_i - y_i)² )

wherein ŷ_i represents the true data, y_i represents the output of the model, and k represents the number of samples in the test set.
CN202210576257.6A 2022-05-25 2022-05-25 Packed tower pressure drop prediction method based on width migration learning Pending CN114970344A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210576257.6A CN114970344A (en) 2022-05-25 2022-05-25 Packed tower pressure drop prediction method based on width migration learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210576257.6A CN114970344A (en) 2022-05-25 2022-05-25 Packed tower pressure drop prediction method based on width migration learning

Publications (1)

Publication Number Publication Date
CN114970344A true CN114970344A (en) 2022-08-30

Family

ID=82956084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210576257.6A Pending CN114970344A (en) 2022-05-25 2022-05-25 Packed tower pressure drop prediction method based on width migration learning

Country Status (1)

Country Link
CN (1) CN114970344A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination