CN113031520B - Meta-invariant feature space learning method for cross-domain prediction - Google Patents

Meta-invariant feature space learning method for cross-domain prediction

Info

Publication number
CN113031520B
CN113031520B
Authority
CN
China
Prior art keywords
feature space
invariant feature
learning
model
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110228766.5A
Other languages
Chinese (zh)
Other versions
CN113031520A (en)
Inventor
李迎光
刘长青
华家玘
李晶晶
郝小忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202110228766.5A
Publication of CN113031520A
Application granted
Publication of CN113031520B
Active legal status
Anticipated expiration legal status

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 — Programme-control systems
    • G05B19/02 — Programme-control systems electric
    • G05B19/18 — Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B19/406 — Numerical control [NC] characterised by monitoring or safety
    • G05B19/4065 — Monitoring tool breakage, life or condition
    • G05B2219/00 — Program-control systems
    • G05B2219/30 — Nc systems
    • G05B2219/37 — Measurements
    • G05B2219/37232 — Wear, breakage detection derived from tailstock, headstock or rest

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Feedback Control In General (AREA)

Abstract

A meta-invariant feature space learning method for cross-domain prediction: existing data are taken as source domain data, and the source domain data are grouped and paired. A prediction model is established for the data in each pair, an invariant feature space learning model of the paired data is then constructed, and the invariant feature space of each data pair is learned through collaborative training. With the invariant feature space learning model as the base model, the meta-invariant feature space between different pairs is learned by a meta-learning method to obtain a meta-invariant feature space learning model, and the target domain is predicted based on this model. The invention obtains the invariant feature space of two working conditions through collaborative learning, solving the marginal distribution adaptation problem, and improves prediction accuracy in the cross-domain setting.

Description

Meta-invariant feature space learning method for cross-domain prediction
Technical Field
The invention relates to the field of artificial intelligence, in particular to cross-domain prediction for intelligent manufacturing, and specifically to a meta-invariant feature space learning method for cross-domain prediction.
Background
Cross-domain prediction is an important research problem in machine learning. In the manufacturing field, the main cause is that large changes in working conditions produce large differences in the marginal and conditional distributions of the data, so a model trained on the source domain is difficult to adapt to new working conditions; a typical instance is tool wear prediction. Real-time monitoring of tool wear is important for dynamic control of the machining process, especially for complex aircraft parts that use large amounts of difficult-to-machine materials: tool wear seriously affects the machining quality of the parts, yet is very difficult to predict. Data-driven methods autonomously learn a model from large amounts of machining data, can be equivalent to a complex mechanism model within a certain error range, and provide a new idea for accurate tool wear prediction. Deep learning is a typical data-driven method, but because its training requires large amounts of labeled sample data under different working conditions, which are very difficult to obtain in actual machining, tool wear is still hard to monitor accurately under continuously changing working conditions. The latest research achieves quantitative prediction of tool wear under varying working conditions with small-sample multi-task learning methods such as meta-learning, but such methods are only suitable for small working-condition changes such as cutting-parameter variation, and cannot accurately predict tool wear under large changes such as large cutting-parameter variation or changes in tool diameter, tool material, or part material.
In machine learning terms, what the model learns is essentially the cross-domain joint probability distribution P(X, Y) of input and output; by Bayes' theorem, the model's prediction accuracy and adaptability are embodied in two aspects: the marginal distribution P(X) and the conditional distribution P(Y|X). Here the marginal distribution P(X) is the data distribution of the inputs, and the conditional distribution P(Y|X) is the parameter distribution of the model. In the cross-domain prediction problem, both the marginal and the conditional distributions differ greatly between domains. A common solution to this kind of distribution adaptation problem is transfer learning, but most transfer learning methods focus on a single difference in either the marginal or the conditional distribution, require a certain amount of target-domain data, and are therefore limited for the problem above. This patent proposes a meta-invariant feature space learning method for cross-domain prediction, which performs distribution adaptation from the two aspects of data marginal distribution and model conditional distribution, thereby achieving accurate cross-domain prediction.
Disclosure of Invention
The invention aims to provide a meta-invariant feature space learning method for the cross-domain prediction problem.
The technical scheme of the invention is as follows:
a meta-invariant feature space learning method for cross-domain predictionCharacterized in that: taking the existing data as the source domain data, grouping the source domain data, and recording the jth data as DjFurther, the grouped data are paired, and the ith paired data is denoted as (D)j,Dk)i(ii) a Each pairing is respectively for DjAnd DkEstablishing a prediction model
Figure GDA0003446283020000021
And
Figure GDA0003446283020000022
further constructing an invariant feature space learning model of the ith pairing data
Figure GDA0003446283020000023
Learning an invariant feature space of each paired data through collaborative training; model learning in invariant feature space
Figure GDA0003446283020000024
For the basic model, learning the element invariant feature space between different pairs by an element learning method to obtain an element invariant feature space learning model fΦAnd predicting the target domain based on the meta-invariant feature space learning model.
Further, the grouping method groups the source domain data under a specific distribution into one group.
Further, the pairing method measures the distance between the data distributions of different groups and pairs the two groups with the minimum distribution distance; the preferred distribution metric is the Maximum Mean Discrepancy (MMD).
Further, the invariant feature space learning model F_{Θ_i} includes the prediction models f_{θ_j} and f_{θ_k}. The two prediction models can be constructed as neural networks that take the inputs of the different groups and output the predicted target quantities and hidden variables, and a loss function L is constructed:

L = L_M + L_rec^S + L_rec^T + L_pred^S + L_pred^T

where L_M is the matching loss of the hidden variables of the two prediction models, L_rec^S and L_rec^T are the reconstruction losses of the inputs of the two prediction models, and L_pred^S and L_pred^T are the losses of the prediction outputs of the two prediction models.
Further, the meta-invariant feature space learning model f_Φ learns the variation law from multiple invariant feature spaces by a meta-learning method. Denoting the meta-learner parameters by Φ and the base model parameters by θ_i, Φ and θ_i are updated iteratively by gradient descent:

θ_i' = θ_i − α ∇_{θ_i} L_{T_i}(F_{θ_i})

Φ ← Φ − β ∇_Φ Σ_{T_i∼p(T)} L_{T_i}(F_{θ_i'})

where the learning rates α and β are fixed hyper-parameters, ∇_{θ_i} L_{T_i}(F_{θ_i}) denotes the gradient of the invariant feature space model loss function, L_{T_i} denotes the loss function of the i-th task, F_{θ_i} denotes the i-th invariant feature space model, ∇_Φ denotes the gradient of the meta-invariant feature space model loss function, T_i denotes the i-th learning task, p denotes the distribution of learning tasks, and T denotes a learning task.
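The two-level update above can be illustrated with a toy meta-learning loop on one-parameter quadratic task losses (a hypothetical setting for illustration only, not the patent's tool-wear model), where the inner step adapts the base parameters θ_i and the outer step moves the meta-initialization Φ:

```python
# Toy tasks: loss L_i(theta) = (theta - a_i)^2 with per-task optimum a_i,
# so dL/dtheta = 2 * (theta - a_i). Names here are illustrative.

def inner_update(phi, a_i, alpha=0.1):
    """One base-learner step: theta_i' = phi - alpha * grad L_i(phi)."""
    return phi - alpha * 2.0 * (phi - a_i)

def meta_step(phi, task_optima, alpha=0.1, beta=0.05):
    """Meta step: phi <- phi - beta * grad_phi sum_i L_i(theta_i')."""
    grad = 0.0
    for a_i in task_optima:
        theta_prime = inner_update(phi, a_i, alpha)
        # chain rule: d L_i(theta_i') / d phi = 2*(theta_i' - a_i)*(1 - 2*alpha)
        grad += 2.0 * (theta_prime - a_i) * (1.0 - 2.0 * alpha)
    return phi - beta * grad

phi = 0.0
task_optima = [1.0, 2.0, 3.0]      # draws from the task distribution p(T)
for _ in range(200):
    phi = meta_step(phi, task_optima)

# phi converges toward the mean optimum, a good initialization for all tasks
assert abs(phi - 2.0) < 1e-3
```

The inner step here corresponds to adapting the base model to one task pair, and the outer step to updating the shared initialization from the post-adaptation losses.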
Further, for target-domain prediction, the target domain data are paired with a group of existing source domain data, and the target domain is predicted based on the meta-invariant feature space learning model. The optimization fine-tunes the meta-invariant feature space learning model with the target domain data and the paired source domain data:

θ_new = Φ − α ∇_Φ L_{T_new}(f_Φ)

thereby obtaining a prediction model f_{θ_new} of the target, where θ_new denotes the parameters of the target prediction model and T_new denotes the target prediction task.
The beneficial effects of the invention are:
1. The invention obtains the invariant feature space of two working conditions through collaborative learning, solving the marginal distribution adaptation problem.
2. The invention uses the meta-learning idea to learn the variation law of the invariant feature space, obtaining the meta-invariant feature space and thereby solving the conditional distribution adaptation problem.
3. The invention uses the meta-invariant feature space model to improve cross-domain prediction accuracy.
Drawings
FIG. 1 is a schematic diagram of the meta-invariant feature space learning method of the present invention, in which TP denotes a task pair, Condi a working condition, SNA and SNB the two sub-networks, IFS the invariant feature space learning model, and MIFS the meta-invariant feature space learning model.
FIG. 2 is a schematic diagram of the invariant feature space model of the present invention, in which X_S and X_T denote the inputs of the two groups, Ŷ_S and Ŷ_T the predicted target outputs, Y_S and Y_T the labels of the predicted targets, Z_S and Z_T the hidden variables, and L_M the matching loss of the hidden variables Z of the two prediction models. L_rec^S and L_rec^T are the reconstruction losses of the inputs X_S and X_T of the two prediction models, and L_pred^S and L_pred^T are the losses of the prediction outputs Y_S and Y_T. Enc_S and Enc_T denote the encoding networks of the two sub-networks, Dec_S and Dec_T the decoding networks, and FC_S and FC_T the prediction networks.
Detailed Description
The invention will be further described with reference to the drawings and examples, to which the invention is not restricted.
As shown in fig. 1-2.
The meta-invariant feature space learning method for cross-domain prediction is described taking numerically controlled machining tool wear prediction as an example: cross-domain prediction is realized as tool wear prediction under variable working conditions, where variable working conditions refer to changes in workpiece material, tool size or material, cutting parameters, and the like. The model input is monitoring-signal features and the output is tool wear. The specific steps are as follows:
1. First, according to the specific data distribution under each working condition, the data under one working condition are taken as a group, and the groups are paired. The invention uses the Maximum Mean Discrepancy (MMD) to measure the distance between the data distributions of two sub-domains, and the two distributions with the minimum MMD distance are selected for pairing.
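One way to compute such a distribution distance is a (biased) empirical MMD estimate with an RBF kernel; the NumPy sketch below is illustrative, and the kernel bandwidth `gamma` is an assumption, not specified by the patent:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """RBF kernel matrix k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased squared Maximum Mean Discrepancy between sample sets x and y."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
cond_a = rng.normal(0.0, 1.0, size=(200, 3))   # features of one condition
cond_b = rng.normal(0.0, 1.0, size=(200, 3))   # same underlying distribution
cond_c = rng.normal(3.0, 1.0, size=(200, 3))   # shifted condition

# Same-distribution pairs score near zero; shifted pairs score higher.
assert mmd2(cond_a, cond_b) < mmd2(cond_a, cond_c)
```

Conditions would then be paired by picking the partner with the smallest `mmd2` value.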
2. On the variable-working-condition tool wear dataset, the signal-feature data are paired pairwise based on the MMD. The pairing strategy for the training set is: first, among working conditions Condi_1-9, one working condition, Condi_1, is randomly selected as the condition to be paired, C1, and among the remaining 8 conditions the one with the minimum MMD distance to C1 is selected as its pairing condition, C2; then C2 becomes the condition to be paired, and among the remaining 7 conditions the one with the minimum MMD distance to C2 is selected as pairing condition C3; this is repeated until all training-set conditions are paired. The pairing strategy for the test set is: for each test condition, the training-set condition with the minimum MMD distance to the current test condition is selected as its pairing condition.
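The greedy chaining strategy above can be sketched as follows; the distance matrix here is an illustrative stand-in for the pairwise MMD values between working conditions:

```python
def chain_pairings(dist, start=0):
    """Greedily chain conditions: repeatedly pair the current condition
    with the nearest not-yet-paired condition (dist is symmetric)."""
    n = len(dist)
    remaining = set(range(n)) - {start}
    pairs, current = [], start
    while remaining:
        nearest = min(remaining, key=lambda j: dist[current][j])
        pairs.append((current, nearest))
        remaining.remove(nearest)
        current = nearest
    return pairs

# Illustrative 4-condition MMD distance matrix (symmetric, zero diagonal)
dist = [
    [0.0, 0.2, 0.9, 0.8],
    [0.2, 0.0, 0.3, 0.7],
    [0.9, 0.3, 0.0, 0.1],
    [0.8, 0.7, 0.1, 0.0],
]
assert chain_pairings(dist) == [(0, 1), (1, 2), (2, 3)]
```

Each returned tuple is one task pair (D_j, D_k)_i, the unit on which an invariant feature space model is trained.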
3. The invariant feature space model is established: through collaborative learning of the prediction models under paired working conditions, the features under different distributions are mapped into an invariant feature space, extracting the features common to the data of different working conditions and laying the foundation for meta-learning to capture the model's internal law. The invariant feature space model architecture is shown in FIG. 2 and comprises two sub-networks, SNA and SNB; the vertical direction is the encoding-decoding module for the input features of the two working conditions, and the horizontal direction is the tool wear prediction module for the two working conditions.
In the invariant feature space model, Enc_S and Enc_T are the encoding layers of sub-networks SNA and SNB respectively, Dec_S and Dec_T the decoding layers, and FC_S and FC_T the regression layers. X is the input vector, Y the label vector, Ŷ the output vector of the model, and Z the hidden vector. L_M is the matching loss of the hidden vectors of the two sub-networks, L_rec^S and L_rec^T are the reconstruction losses of sub-networks SNA and SNB, and L_pred^S and L_pred^T are the tool wear prediction losses of sub-networks SNA and SNB.
Given the source and target sub-domain datasets D_S = {(x_S^i, y_S^i)} and D_T = {(x_T^i, y_T^i)}, the two sub-domains form a task pair under similar working conditions, and their data can be regarded as obeying the same distribution. The embedding networks f_{θ_S} and f_{θ_T}, parameterized by θ_S and θ_T respectively, encode the input features x_S and x_T into latent variables in the invariant feature space Z: z_S = f_{θ_S}(x_S), z_T = f_{θ_T}(x_T).
The loss function of the invariant feature space model F_Θ consists of three parts: the matching loss L_M, the reconstruction losses L_rec^S and L_rec^T, and the prediction losses L_pred^S and L_pred^T.
Matching loss:

L_M = (1/n) Σ_i [1 − (z_S^i · z_T^i) / (‖z_S^i‖ ‖z_T^i‖)]

The cosine distance is chosen here as the distance measure between the latent variables of the two sub-domains.
Reconstruction loss:

L_rec^S = (1/n_S) Σ_i ‖x_S^i − x̂_S^i‖², where x̂_S^i = Dec_S(Enc_S(x_S^i))
L_rec^T = (1/n_T) Σ_i ‖x_T^i − x̂_T^i‖², where x̂_T^i = Dec_T(Enc_T(x_T^i))
Prediction loss:

L_pred^S = (1/n_S) Σ_i ‖y_S^i − ŷ_S^i‖², where ŷ_S^i = FC_S(z_S^i)
L_pred^T = (1/n_T) Σ_i ‖y_T^i − ŷ_T^i‖², where ŷ_T^i = FC_T(z_T^i)
Loss function:

L = L_M + L_rec^S + L_rec^T + L_pred^S + L_pred^T
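Assuming tiny linear layers as stand-ins for Enc, Dec, and FC (all shapes and weights below are illustrative, not the patent's architecture), the five loss terms can be assembled end-to-end as:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_z = 8, 4, 2

# Tiny linear stand-ins for the two sub-networks' layers
W_enc_s, W_enc_t = rng.normal(size=(d_in, d_z)), rng.normal(size=(d_in, d_z))
W_dec_s, W_dec_t = rng.normal(size=(d_z, d_in)), rng.normal(size=(d_z, d_in))
w_fc_s, w_fc_t = rng.normal(size=d_z), rng.normal(size=d_z)

x_s, x_t = rng.normal(size=(n, d_in)), rng.normal(size=(n, d_in))
y_s, y_t = rng.normal(size=n), rng.normal(size=n)

z_s, z_t = x_s @ W_enc_s, x_t @ W_enc_t             # latent variables

def cos_match(a, b):
    """Mean cosine distance between paired latent rows."""
    num = (a * b).sum(1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return np.mean(1.0 - num / den)

L_M = cos_match(z_s, z_t)                            # matching loss
L_rec_s = np.mean((x_s - z_s @ W_dec_s) ** 2)        # reconstruction losses
L_rec_t = np.mean((x_t - z_t @ W_dec_t) ** 2)
L_pred_s = np.mean((y_s - z_s @ w_fc_s) ** 2)        # prediction losses
L_pred_t = np.mean((y_t - z_t @ w_fc_t) ** 2)

L = L_M + L_rec_s + L_rec_t + L_pred_s + L_pred_t    # total objective
assert np.isfinite(L) and L >= 0.0
```

In training, all six weight matrices would be updated jointly by gradient descent on L, which is the collaborative-training step described above.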
By minimizing the loss function, the maximal invariant feature space under the paired working conditions is obtained, and the latent features in this invariant feature space can be used to predict the tool wear amount. The parameters Θ = (θ_S, θ_T) of the invariant feature space model F_Θ are updated by gradient descent:

Θ ← Θ − α ∇_Θ L

The objective function of the invariant feature space model optimization is:

min_Θ L(F_Θ)

where Θ starts from the initialization parameters, the learning rate α is a fixed hyper-parameter, and ∇_Θ L denotes the gradient of the invariant feature space model loss function.
4. The variation law of the invariant feature space (IFS) is learned from multiple tasks by a meta-learning method, constructing the meta-invariant feature space (MIFS). The MAML framework is chosen to implement the meta-learning process. The meta-invariant feature space learning method is shown in FIG. 1.
The meta-learner is the MIFS model parameterized by Φ; the base learner is the invariant feature space (IFS) model F_θ parameterized by θ, where θ includes the parameters of the embedding network f and ρ, the parameters of the regression network g. The base learner parameters are updated by gradient descent:

θ_i' = θ_i − α ∇_{θ_i} L_{T_i}(F_{θ_i})

where the base learning rate α is a fixed hyper-parameter, L_{T_i} is the loss function of the base learner, and ∇_{θ_i} L_{T_i}(F_{θ_i}) denotes the gradient of the invariant feature space model loss function. The initial network parameters of the base learner are also the parameters Φ of the meta-learner, which are updated by gradient descent based on the adapted base learner parameters:

Φ ← Φ − β ∇_Φ Σ_{T_i∼p(T)} L_{T_i}(F_{θ_i'})

where the meta-learning rate β is a fixed hyper-parameter, ∇_Φ Σ_{T_i∼p(T)} L_{T_i}(F_{θ_i'}) denotes the gradient of the meta-invariant feature space model loss function, T_i denotes the i-th learning task, p the distribution of learning tasks, and T a learning task. The meta-optimization objective function of the meta-invariant feature space is:

min_Φ Σ_{T_i∼p(T)} L_{T_i}(F_{θ_i'})
5. The meta-learner is optimized to find the best base-learner initialization parameters. Faced with a new task T_new, the model is first paired with an existing working condition; using a small amount of data from the new working condition together with the data of the paired working condition, the meta-parameters Φ are fine-tuned by a small number of stochastic gradient descent (SGD) steps to obtain the base learner parameters:

θ_new = Φ − α ∇_Φ L_{T_new}(F_Φ)

i.e., the prediction model under the new working condition; the base learner can then be used to accurately predict the tool wear amount. Here θ_new denotes the parameters of the tool wear prediction model for the new working condition, and T_new denotes the tool wear prediction task for the new working condition.
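Continuing the toy quadratic-task setting used earlier (an illustrative stand-in, not the patent's tool-wear model), fine-tuning from a meta-learned initialization by a few SGD steps looks like:

```python
def fine_tune(phi, a_new, alpha=0.1, steps=5):
    """Adapt the meta-initialization phi to a new task by a few SGD steps
    on the toy loss L_new(theta) = (theta - a_new)^2."""
    theta = phi
    for _ in range(steps):
        theta = theta - alpha * 2.0 * (theta - a_new)   # gradient step
    return theta

phi_meta = 2.0            # illustrative meta-learned initialization
theta_new = fine_tune(phi_meta, a_new=4.0)

# Each step shrinks the gap to the new optimum by a factor (1 - 2*alpha)
assert abs(theta_new - 4.0) < abs(phi_meta - 4.0)
```

The point of the meta-initialization is that only these few adaptation steps, on a small amount of new-condition data, are needed to reach an accurate model.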
Parts of the invention not described in detail are the same as, or can be implemented with, the prior art.

Claims (4)

1. A meta-invariant feature space learning method for cross-domain prediction, characterized in that: existing numerically controlled machining tool wear data are taken as source domain data and grouped, the j-th group being denoted D_j; the grouped data are then paired, the i-th pair being denoted (D_j, D_k)_i. For each pair, prediction models f_{θ_j} and f_{θ_k} are established for D_j and D_k respectively, and an invariant feature space learning model F_{Θ_i} of the i-th pair of data is further constructed; the invariant feature space of each pair of data is learned through collaborative training. With the invariant feature space learning model F_{Θ_i} as the base model, the meta-invariant feature space between different pairs is learned by a meta-learning method to obtain a meta-invariant feature space learning model f_Φ, and the target domain is predicted based on the meta-invariant feature space learning model. The invariant feature space learning model F_{Θ_i} includes the prediction models f_{θ_j} and f_{θ_k}, which are constructed as neural networks taking the inputs X_S and X_T of the different groups and outputting the predicted target quantities Y_S and Y_T and the hidden variable Z; a loss function L is constructed:

L = L_M + L_rec^S + L_rec^T + L_pred^S + L_pred^T

where L_M is the matching loss of the hidden variables Z of the two prediction models, L_rec^S and L_rec^T are the reconstruction losses of the inputs X_S and X_T of the two prediction models, and L_pred^S and L_pred^T are the losses of the prediction outputs Y_S and Y_T of the two prediction models.
2. The meta-invariant feature space learning method for cross-domain prediction according to claim 1, wherein: the grouping of the source domain data groups the source domain data under a specific distribution into one group.
3. The meta-invariant feature space learning method for cross-domain prediction according to claim 1, wherein: the pairing method measures the distance between the data distributions of different groups and pairs the two groups with the minimum distribution distance, the distribution metric being the Maximum Mean Discrepancy.
4. The meta-invariant feature space learning method for cross-domain prediction according to claim 1, wherein: the meta-invariant feature space learning model f_Φ learns the variation law from multiple invariant feature spaces by a meta-learning method; denoting the meta-learner parameters by Φ and the base model parameters by θ_i, Φ and θ_i are updated iteratively by gradient descent:

θ_i' = θ_i − α ∇_{θ_i} L_{T_i}(F_{θ_i})

Φ ← Φ − β ∇_Φ Σ_{T_i∼p(T)} L_{T_i}(F_{θ_i'})

where α and β are fixed learning-rate hyper-parameters, ∇_{θ_i} L_{T_i}(F_{θ_i}) denotes the gradient of the invariant feature space model loss function, L_{T_i} denotes the loss function of the i-th task, F_{θ_i} denotes the i-th invariant feature space model, ∇_Φ denotes the gradient of the meta-invariant feature space model loss function, T_i denotes the i-th learning task, p denotes the distribution of learning tasks, and T denotes a learning task;
for target-domain prediction, the target domain data are paired with a group of existing source domain data and the target domain is predicted based on the meta-invariant feature space learning model; the meta-invariant feature space learning model is fine-tuned with the target domain data and the paired source domain data:

θ_new = Φ − α ∇_Φ L_{T_new}(f_Φ)

thereby obtaining a prediction model f_{θ_new} of the target, where θ_new denotes the parameters of the target prediction model and T_new denotes the target prediction task.
CN202110228766.5A 2021-03-02 2021-03-02 Meta-invariant feature space learning method for cross-domain prediction Active CN113031520B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110228766.5A CN113031520B (en) 2021-03-02 2021-03-02 Meta-invariant feature space learning method for cross-domain prediction

Publications (2)

Publication Number Publication Date
CN113031520A CN113031520A (en) 2021-06-25
CN113031520B true CN113031520B (en) 2022-03-22

Family

ID=76465312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110228766.5A Active CN113031520B (en) 2021-03-02 2021-03-02 Meta-invariant feature space learning method for cross-domain prediction

Country Status (1)

Country Link
CN (1) CN113031520B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110032646A (en) * 2019-05-08 2019-07-19 山西财经大学 Cross-domain text sentiment classification method based on multi-source domain adaptation and joint learning
CN111199458A (en) * 2019-12-30 2020-05-26 北京航空航天大学 Recommendation system based on meta-learning and reinforcement learning
CN111813869A (en) * 2020-08-21 2020-10-23 支付宝(杭州)信息技术有限公司 Distributed data-based multi-task model training method and system
CN112257868A (en) * 2020-09-25 2021-01-22 建信金融科技有限责任公司 Method and device for constructing and training integrated prediction model for predicting passenger flow

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Meta-tag Propagation by Co-training an Ensemble Classifier for Improving Image Search Relevance; Aayush Sharma et al.; 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; 2008-06-28; pp. 1-6 *
Named Entity Recognition Method Based on Co-training with Reinforcement Learning; 程钟慧 et al.; Software Engineering (《软件工程》); 2020-01; pp. 7-11 *

Also Published As

Publication number Publication date
CN113031520A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN109919356B (en) BP neural network-based interval water demand prediction method
CN109828532A (en) A kind of Prediction of Surface Roughness method and process parameter optimizing method based on GA-GBRT
CN110824915B (en) GA-DBN network-based intelligent monitoring method and system for wastewater treatment
CN110083125B (en) Machine tool thermal error modeling method based on deep learning
CN115422814B (en) Digital twin-driven closed-loop optimization design method for complex electromechanical product
CN112364560B (en) Intelligent prediction method for working hours of mine rock drilling equipment
Lermer et al. Creation of digital twins by combining fuzzy rules with artificial neural networks
CN103823430B (en) Intelligence weighting propylene polymerization production process optimal soft measuring system and method
KR20220032599A (en) Methods for Determining Process Variables in Cell Culture Processes
CN114862035B (en) Combined bay water temperature prediction method based on transfer learning
CN110245398B (en) Soft measurement deep learning method for thermal deformation of air preheater rotor
CN113031520B (en) Meta-invariant feature space learning method for cross-domain prediction
CN113051806B (en) Water quality BOD measurement method based on AQPSO-RBF neural network
Varaprasad et al. Stock Price Prediction using Machine Learning
Vogt et al. Wind power forecasting based on deep neural networks and transfer learning
CN117289652A (en) Numerical control machine tool spindle thermal error modeling method based on multi-universe optimization
Abusnaina et al. Enhanced MWO training algorithm to improve classification accuracy of artificial neural networks
Zhang et al. Milling force prediction of titanium alloy based on support vector machine and ant colony optimization
CN113095466A (en) Algorithm of satisfiability model theoretical solver based on meta-learning model
Chen et al. The Application of Adaptive Generalized NGBM (1, 1) To Sales Forecasting: A Case Study of an Underwear Shop.
Wang et al. Production quality prediction of cross-specification products using dynamic deep transfer learning network
CN115034504B (en) Cutter wear state prediction system and method based on cloud edge cooperative training
CN103838143B (en) Multi-modal global optimum propylene polymerization production process optimal soft measuring system and method
Gao et al. Transfer State Estimator for Markovian Jump Linear Systems With Multirate Measurements
Huang et al. Combining Virtual Sample Generation Based Data Enhancement and Multi-objective Optimization Based Selective Ensemble for Soft Sensor Modeling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant