CN115659807A - Method for predicting talent performance based on Bayesian optimization model fusion algorithm - Google Patents

Method for predicting talent performance based on Bayesian optimization model fusion algorithm

Info

Publication number
CN115659807A
CN115659807A
Authority
CN
China
Prior art keywords
model
data
talent
bayesian optimization
weak
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211327998.7A
Other languages
Chinese (zh)
Inventor
章慧
潘皓越
张苏
徐嘉怡
王敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202211327998.7A
Publication of CN115659807A
Legal status: Pending

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method for predicting talent performance based on a Bayesian optimization model fusion algorithm, which comprises: preprocessing talent-related data acquired in advance and dividing the data into a training set and a test set; constructing a random forest model and pre-training it on the data set with Bayesian optimization parameter tuning to obtain an optimized random forest model and its parameters; constructing a GBDT model and an XGBoost model on the basis of that model and those parameters, and obtaining optimized GBDT and XGBoost models through Bayesian optimization parameter tuning; and taking the three optimized models as base models and learning through a Stacking algorithm to obtain the final talent performance prediction model. The invention effectively combines the mainstream Bagging and Boosting algorithms with the Stacking algorithm and predicts talent performance from talent-related data, thereby improving prediction accuracy.

Description

Method for predicting talent performance based on Bayesian optimization model fusion algorithm
Technical Field
The invention belongs to the field of education data mining, and particularly relates to a method for predicting talent performance based on a Bayesian optimization model fusion algorithm.
Background
Tightly combining talent development with informatization, building talent-development big data, achieving data interoperation, and innovating and optimizing talent services provide a data foundation for cooperative talent development and for raising the level of talent services, and are key to current talent-development work. Realizing data-interoperation service fusion on the basis of data mining and constructing a talent prediction model with an ensemble learning algorithm address the problem of talent development in the big-data era at its root, so as to achieve the goal of optimal allocation of talent resources.
Model fusion (model ensemble), also known as ensemble learning, is often used in machine learning to combine multiple weak models and improve overall performance. In some cases, fusing several individually poor weak models can improve performance substantially.
Bagging (bootstrap aggregating, also known as the self-sampling method) is a popular ensemble method that uses parallel integration: each model is built independently, the data set used by each model is drawn from the training set by random sampling with replacement, each model is trained separately, and the result is produced by voting. Because Bagging is parallel, the base models can be trained concurrently, which greatly improves efficiency.
Boosting is a sequential, iterative ensemble algorithm: each model is trained further on the misclassifications of the previous model. Its core idea is to concentrate on the samples predicted incorrectly, realizing 'weighted voting' by assigning larger weights to the samples the previous model got wrong. A representative model of the Boosting idea is the AdaBoost algorithm, which adds a new weak classifier in each round; if a sample was correctly classified in the current round, the probability of selecting it in the next round is reduced, whereas if a sample point was not accurately classified, its weight is increased. The training process iterates until the overall data set reaches a predetermined acceptable error-rate range.
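For concreteness, this reweighting can be written in its commonly cited textbook form (stated here for reference; the exact formulation is not recited by the prior art above). For a weak classifier $h_t$ with weighted error rate $\epsilon_t$ on labels $y_i \in \{-1, +1\}$,

$$\alpha_t = \frac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t}, \qquad w_i^{(t+1)} = \frac{w_i^{(t)}\exp\bigl(-\alpha_t\, y_i\, h_t(x_i)\bigr)}{Z_t},$$

where $Z_t$ is a normalization factor; misclassified samples ($y_i h_t(x_i) < 0$) thereby receive larger weight in the next round, and correctly classified samples receive smaller weight.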
These models can realize prediction in their respective fields, but a single model with a weakly performing base estimator easily overfits, and its prediction effect is not ideal.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a method for predicting talent performance based on a Bayesian optimization model fusion algorithm, which can improve the accuracy of prediction.
The technical scheme is as follows: the invention provides a method for predicting talent performance based on a Bayesian optimization model fusion algorithm, which specifically comprises the following steps:
(1) Preprocessing acquired talent related data, and dividing the data into a training set and a testing set;
(2) Constructing a random forest model, and using Bayesian optimization to adjust parameters to pre-train a data set to obtain an optimized random forest model and model parameters;
(3) Constructing a GBDT model and an XGBoost model based on the model and parameters in step (2), and obtaining the optimized GBDT and XGBoost models through Bayesian optimization parameter tuning;
(4) Taking the three optimized models as base models and learning through a Stacking algorithm to obtain the final talent performance prediction model.
Further, the step (1) is realized as follows:
(11) Reading the data set, cleaning the data, eliminating missing and abnormal values, and obtaining the temporary data set tempData and the feature list columns;
(12) Encoding discrete string variables: traversing the feature list, defining attr as the attribute of the currently traversed feature and tempData_attr as the current feature column data;
(13) Judging whether tempData_attr is a discrete string variable, and if so, encoding tempData_attr to convert the string variable into a numerical variable, obtaining the preprocessed data set;
(14) Dividing the data to generate a training set and a test set;
(15) Establishing a 5-fold cross-validation method: randomly segmenting the data set into 5 mutually disjoint subsets of equal size; training the model with 4 subsets as the training set and testing it with the remaining subset as the validation set; repeating for all 5 possible selections.
Further, the step (2) is realized as follows:
(21) Establishing a random forest model with input the current data set $[X, y]$, $X = [X_1, X_2, \ldots, X_m]$, $y = [y_1, y_2, \ldots, y_m]$, and the number of weak-evaluator iterations R;
(22) Generating the final strong evaluator RF(x) from the weak evaluators:
for s = 1, 2, ..., R, the training set is randomly sampled with replacement m times to obtain a bootstrap sample set $D_s$ containing m samples; the s-th decision tree model $G_s(x)$ is trained on $D_s$; when a node of the decision tree is trained, a subset of sample features is randomly selected from all sample features on the node, and the optimal feature among this subset is chosen to split the left and right subtrees; the arithmetic mean of the regression results of the R weak learners is the final model output, defining the predicted values

$$\{\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_m\}, \qquad \hat{y}_i = \frac{1}{R}\sum_{s=1}^{R} G_s(x_i);$$
(23) The RMSE (root mean square error) loss function is defined as the model evaluation index:

$$\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\bigl(y_i - \hat{y}_i\bigr)^2};$$
(24) Bayesian optimization is carried out on the relevant parameters of the random forest model framework to obtain the final optimized model RFR; the main optimized parameters are n_estimators (the number of base estimators), max_features (the number of features randomly selected by each decision tree) and max_depth (the maximum depth of the tree), and the subsequent models are constructed on this basis.
Further, the implementation process of building the GBDT model in step (3) is as follows:
For a sample x, there are J weak evaluators in total in the ensemble algorithm; when the t-th weak evaluator is currently being established, the result of x on the t-th weak evaluator is denoted $f_t(x)$; the result output by the entire Boosting algorithm on the sample is H(x), generally expressed as the weighted sum of all weak-evaluator results over the process t = 1 to t = J:

$$H(x) = \sum_{t=1}^{J} \phi_t f_t(x)$$
where $\phi_t$ is the weight of the t-th weak evaluator;
for the current iteration: $H_t(x) = H_{t-1}(x) + \phi_t f_t(x)$; the formula for establishing the first weak evaluator is $H_1(x) = H_0(x) + \phi_1 f_1(x)$; the RFR model generated in step (2) is taken as the evaluator of the initial GBDT prediction; the objective function of GBDT, i.e. the squared error function, is selected:

$$L = \sum_{i=1}^{m}\bigl(y_i - H(x_i)\bigr)^2$$
the root mean square error loss function is selected as an evaluation indicator for the GBDT model.
Further, the objective function of the XGBoost model in step (3) consists of a loss function and a structural risk function for each tree in XGBoost:

$$Obj^{(t)} = \sum_{i=1}^{M} l\bigl(y_i, \hat{y}_i^{(t)}\bigr) + \Omega(f_t)$$
where M denotes that M samples in total are used on the current tree and l denotes the loss function of a single sample; after the iterations of the XGBoost model finish, the objective function on the last tree is the objective function of the whole XGBoost algorithm;
The structural risk function $\Omega(f_t)$ consists of two parts: one is the term $\gamma T$ controlling the tree structure, and the other is the regularization term, as follows:

$$\Omega(f_t) = \gamma T + \frac{1}{2}\lambda\sum_{j=1}^{T} w_j^2 + \alpha\sum_{j=1}^{T}\lvert w_j\rvert$$

where $\gamma$ denotes the coefficient that penalizes the objective function according to the total number of leaves, $\lambda$ and $\alpha$ denote the $l_2$ and $l_1$ coefficients that penalize the objective function according to the magnitude of the leaf weights, and $w_j$ denotes the leaf weight of the j-th leaf on the current tree.
Further, the step (4) is realized as follows:
(4.1) Segmenting the data into training set samples $M_{train}$ and test set samples $M_{test}$;
(4.2) on each individual learner, stacking all cross-validated validation results vertically to form prediction results of shape $(M_{train}, n)$ and $(M_{test}, n)$;
(4.3) splicing the prediction results of all individual learners horizontally to form a new feature matrix; if there are N individual learners in total, the structure of the new feature matrix is $(M_{train}, N)$ and $(M_{test}, N)$;
(4.4) Putting the new feature matrix into a meta-learner for training/prediction to obtain the final talent performance prediction model Mod.
Beneficial effects: compared with the prior art, the invention effectively combines the mainstream Bagging and Boosting algorithms with the Stacking algorithm and predicts performance from talent-related data, improving prediction accuracy; compared with a traditional single-model prediction method, the talent performance prediction method based on the Bayesian optimization model fusion algorithm improves accuracy by 9.3%, with model precision reaching 81.6%.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow diagram of data preprocessing in an exemplary embodiment.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
A large number of variables are involved in the present embodiment; each variable is described in Table 1.
Table 1 Description of variables
(Table 1 is reproduced as an image in the original publication.)
The invention provides a talent performance prediction method based on a Bayesian optimization model fusion algorithm, which comprises the following steps as shown in figure 1:
step 1: the data related to talents acquired in advance are preprocessed and divided into a training set and a test set, as shown in fig. 2.
(1.1) Reading the data set, cleaning the data, eliminating missing and abnormal values, and obtaining the temporary data set tempData and the feature list columns.
(1.2) Encoding discrete string variables: traversing the feature list, defining attr as the attribute of the currently traversed feature and tempData_attr as the current feature column data.
(1.3) Judging whether tempData_attr is a discrete string variable, and if so, encoding tempData_attr to convert the string variable into a numerical variable, obtaining the preprocessed data set data.
And (1.4) dividing the data to generate a training set and a test set.
(1.5) Establishing a 5-fold cross-validation method: randomly splitting the data set into 5 mutually disjoint subsets of equal size; training the model with 4 subsets as the training set and testing it with the remaining subset as the validation set; repeating for all 5 possible selections.
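A minimal Python sketch of steps (1.1)-(1.5) is given below; the file name, the target column "performance", and the 80/20 split ratio are illustrative assumptions and are not fixed by the embodiment.

import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split, KFold

tempData = pd.read_csv("talent_data.csv")   # (1.1) read the data set (assumed file name)
tempData = tempData.dropna()                # eliminate missing values
# (abnormal-value filtering would be applied here; the rule is data-dependent)
columns = list(tempData.columns)            # feature list

for attr in columns:                        # (1.2) traverse the feature list
    if tempData[attr].dtype == object:      # (1.3) discrete string variable?
        tempData[attr] = LabelEncoder().fit_transform(tempData[attr])

X = tempData.drop(columns=["performance"])  # assumed target column name
y = tempData["performance"]
X_train, X_test, y_train, y_test = train_test_split(  # (1.4) split the data
    X, y, test_size=0.2, random_state=42)

kfold = KFold(n_splits=5, shuffle=True, random_state=42)  # (1.5) 5-fold cross validation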
Step 2: constructing a random forest model and pre-training it on the data set with Bayesian optimization parameter tuning to obtain the optimized random forest model and model parameters.
(2.1) Establishing a random forest model with input the current data set $[X, y]$, $X = [X_1, X_2, \ldots, X_m]$, $y = [y_1, y_2, \ldots, y_m]$, and the number of weak-evaluator iterations R.
(2.2) Generating the final strong evaluator RF(x) from the weak evaluators:
for s = 1, 2, ..., R, the training set is randomly sampled with replacement m times to obtain a bootstrap sample set $D_s$ containing m samples; the s-th decision tree model $G_s(x)$ is trained on $D_s$; when a node of the decision tree is trained, a subset of sample features is randomly selected from all sample features on the node, and the optimal feature among this subset is chosen to split the left and right subtrees; the arithmetic mean of the regression results of the R weak learners is the final model output, defining the predicted values

$$\{\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_m\}, \qquad \hat{y}_i = \frac{1}{R}\sum_{s=1}^{R} G_s(x_i).$$
(2.3) Defining the RMSE (root mean square error) loss function as the model evaluation index:

$$\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\bigl(y_i - \hat{y}_i\bigr)^2}.$$
(2.4) Bayesian optimization is carried out on the relevant parameters of the random forest model framework to obtain the final optimized model RFR; the main optimized parameters are n_estimators (the number of base estimators), max_features (the number of features randomly selected by each decision tree) and max_depth (the maximum depth of the tree), and the subsequent models are constructed on this basis.
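As one possible realization of step (2.4), the sketch below tunes n_estimators, max_features and max_depth with BayesSearchCV from the scikit-optimize package; the choice of this library and the search ranges are assumptions, since the patent does not name a specific Bayesian optimization implementation. It reuses X_train, y_train and kfold from the preprocessing sketch above.

from sklearn.ensemble import RandomForestRegressor
from skopt import BayesSearchCV            # assumed optimizer library
from skopt.space import Integer, Real

rf_space = {
    "n_estimators": Integer(50, 500),  # number of base estimators (assumed range)
    "max_features": Real(0.1, 1.0),    # fraction of features tried per split
    "max_depth": Integer(3, 20),       # maximum depth of each tree
}
rf_opt = BayesSearchCV(
    RandomForestRegressor(random_state=42),
    rf_space,
    n_iter=30,                              # Bayesian optimization iterations
    cv=kfold,                               # 5-fold CV from step (1.5)
    scoring="neg_root_mean_squared_error",  # RMSE index of step (2.3)
    random_state=42,
)
rf_opt.fit(X_train, y_train)
RFR = rf_opt.best_estimator_                # final optimized model RFR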
Step 3: constructing a GBDT model and an XGBoost model based on the model and parameters from step 2 to improve model accuracy and optimization efficiency, and obtaining the optimized GBDT and XGBoost models through Bayesian optimization parameter tuning.
Establishing a GBDT model, comprising the following steps:
1) For a sample x, there are a total of J weak evaluators in the ensemble algorithm. Assuming that the t-th weak evaluator is now being built, the result of x on the t-th weak evaluator may be denoted $f_t(x)$. Assuming that the result output by the entire Boosting algorithm on the sample is H(x), it can generally be expressed as the weighted sum of all weak-evaluator results over the process t = 1 to t = J:

$$H(x) = \sum_{t=1}^{J} \phi_t f_t(x)$$
where $\phi_t$ is the weight of the t-th weak evaluator.
2) For the current iteration: $H_t(x) = H_{t-1}(x) + \phi_t f_t(x)$. Following this process, the formula for establishing the first weak evaluator is $H_1(x) = H_0(x) + \phi_1 f_1(x)$. Since there is no 0-th tree, the value of $H_0(x)$ must be determined separately, both in the mathematical derivation and in the concrete implementation of the algorithm; here, the RFR model generated in step (2) is used as the estimator of the initial GBDT prediction.
3) According to the practical problem, an objective function of GBDT is selected; the invention selects the squared error function:

$$L = \sum_{i=1}^{m}\bigl(y_i - H(x_i)\bigr)^2$$
selecting the same model evaluation index function as the step (2.3).
Establishing an XGboost model, comprising the following steps:
1) Build the XGBoost model output function; unlike GBDT, XGBoost uses a fixed value for $H_0(x)$.
2) Establish the XGBoost model objective function; for each tree in XGBoost, it consists of a loss function and a structural risk function, as follows:

$$Obj^{(t)} = \sum_{i=1}^{M} l\bigl(y_i, \hat{y}_i^{(t)}\bigr) + \Omega(f_t)$$
where M indicates that a total of M samples are used on the current tree and l indicates the loss function of a single sample; after the model iterations finish, the objective function on the last tree is the objective function of the whole XGBoost algorithm.
3) The structural risk function $\Omega(f_t)$ consists of two parts: one is the term $\gamma T$ controlling the tree structure, and the other is the regularization term, as follows:

$$\Omega(f_t) = \gamma T + \frac{1}{2}\lambda\sum_{j=1}^{T} w_j^2 + \alpha\sum_{j=1}^{T}\lvert w_j\rvert$$

where $\gamma$ denotes the coefficient that penalizes the objective function according to the total number of leaves, $\lambda$ and $\alpha$ denote the $l_2$ and $l_1$ coefficients that penalize the objective function according to the magnitude of the leaf weights, and $w_j$ denotes the leaf weight of the j-th leaf on the current tree. The same model evaluation index function as in step (2.3) is selected.
Bayesian optimization is carried out on the relevant parameters of the GBDT and XGBoost model frameworks to obtain the final optimized models GBR and XGBR, as sketched below.
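The sketch below shows one way to carry this out with the xgboost package, whose gamma, reg_lambda and reg_alpha parameters correspond to the $\gamma$, $\lambda$ and $\alpha$ coefficients of the structural risk function; the search ranges are assumptions. The GBDT model can be tuned analogously (over learning_rate, n_estimators, max_depth) to obtain GBR.

from xgboost import XGBRegressor
from skopt import BayesSearchCV
from skopt.space import Integer, Real

xgb_space = {
    "gamma": Real(0.0, 5.0),         # gamma: per-leaf penalty (the gamma * T term)
    "reg_lambda": Real(0.0, 10.0),   # lambda: l2 penalty on leaf weights
    "reg_alpha": Real(0.0, 10.0),    # alpha: l1 penalty on leaf weights
    "max_depth": Integer(3, 10),
    "n_estimators": Integer(50, 500),
}
xgb_opt = BayesSearchCV(
    XGBRegressor(objective="reg:squarederror", random_state=42),
    xgb_space,
    n_iter=30,
    cv=kfold,
    scoring="neg_root_mean_squared_error",
    random_state=42,
)
xgb_opt.fit(X_train, y_train)
XGBR = xgb_opt.best_estimator_       # final optimized model XGBR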
Step 4: taking the three optimized models as base models and learning through a Stacking algorithm to obtain the final talent performance prediction model.
(4.1) Segmenting the data into training set samples $M_{train}$ and test set samples $M_{test}$.
(4.2) On each individual learner, stacking all cross-validated validation results vertically to form prediction results of shape $(M_{train}, n)$ and $(M_{test}, n)$.
(4.3) Splicing the prediction results of all individual learners horizontally to form a new feature matrix; if there are N individual learners in total, the structure of the new feature matrix is $(M_{train}, N)$ and $(M_{test}, N)$.
(4.4) Putting the new feature matrix into a meta-learner for training/prediction to obtain the final talent performance prediction model Mod.
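One possible realization of steps (4.1)-(4.4) is scikit-learn's StackingRegressor, which performs the cross-validated stacking of base-model predictions (steps 4.2-4.3) internally; the linear meta-learner below is an assumption, since the patent does not specify which meta-learner is used.

from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

Mod = StackingRegressor(
    estimators=[("rfr", RFR), ("gbr", GBR), ("xgbr", XGBR)],  # three optimized base models
    final_estimator=LinearRegression(),  # meta-learner (assumed choice)
    cv=kfold,                            # cross-validated stacking, steps (4.2)-(4.3)
)
# GBR denotes the GBDT model tuned by Bayesian optimization in step 3
Mod.fit(X_train, y_train)                # (4.4) train the final model Mod
rmse = mean_squared_error(y_test, Mod.predict(X_test)) ** 0.5  # evaluate on the test set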
To better illustrate the effectiveness of the method, a UCI public data set was preprocessed and the talent performance prediction method based on the Bayesian optimization model fusion algorithm was applied; compared with a traditional single-model prediction method, accuracy improves by 9.3%, and the model precision reaches 81.6%.
The above description is only an example of the present invention and is not intended to limit the present invention. All equivalents which come within the spirit of the invention are therefore intended to be embraced therein. Details not described herein are well within the skill of those in the art.

Claims (6)

1. A talent performance prediction method based on a Bayesian optimization model fusion algorithm is characterized by comprising the following steps:
(1) Preprocessing acquired talent related data, and dividing the data into a training set and a testing set;
(2) Constructing a random forest model, and using Bayesian optimization to adjust parameters to pre-train a data set to obtain an optimized random forest model and model parameters;
(3) Constructing a GBDT model and an XGBoost model based on the model and parameters in step (2), and obtaining optimized GBDT and XGBoost models through Bayesian optimization parameter tuning;
(4) Taking the three optimized models as base models and learning through a Stacking algorithm to obtain the final talent performance prediction model.
2. The talent performance prediction method based on Bayesian optimization model fusion algorithm according to claim 1, wherein the step (1) is implemented as follows:
(11) Reading the data set, cleaning the data, eliminating missing and abnormal values, and obtaining the temporary data set tempData and the feature list columns;
(12) Encoding discrete string variables: traversing the feature list, defining attr as the attribute of the currently traversed feature and tempData_attr as the current feature column data;
(13) Judging whether tempData_attr is a discrete string variable, and if so, encoding tempData_attr to convert the string variable into a numerical variable, obtaining the preprocessed data set;
(14) Dividing the data to generate a training set and a test set;
(15) Establishing a 5-fold cross-validation method: randomly segmenting the data set into 5 mutually disjoint subsets of equal size; training the model with 4 subsets as the training set and testing it with the remaining subset as the validation set; repeating for all 5 possible selections.
3. The talent performance prediction method based on the Bayesian optimization model fusion algorithm as claimed in claim 1, wherein the step (2) is implemented as follows:
(21) Establishing a random forest model with input the current data set $[X, y]$, $X = [X_1, X_2, \ldots, X_m]$, $y = [y_1, y_2, \ldots, y_m]$, and the number of weak evaluators R;
(22) Generating the final strong evaluator RF(x) from the weak evaluators:
for s = 1, 2, ..., R, the training set is randomly sampled with replacement m times to obtain a bootstrap sample set $D_s$ containing m samples; the s-th decision tree model $G_s(x)$ is trained on $D_s$; when a node of the decision tree is trained, a subset of sample features is randomly selected from all sample features on the node, and the optimal feature among this subset is chosen to split the left and right subtrees; the value obtained as the arithmetic mean of the regression results of the R weak learners is the final model output, defining the predicted values $\{\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_m\}$;
(23) The RMSE (root mean square error) loss function is defined as the model evaluation index:

$$\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\bigl(y_i - \hat{y}_i\bigr)^2};$$
(24) Bayesian optimization is carried out on the relevant parameters of the random forest model framework to obtain the final optimized model RFR; the main optimized parameters n_estimators (the number of base estimators), max_features (the number of features randomly selected by each decision tree) and max_depth (the maximum depth of the tree) are obtained, and the subsequent models are built accordingly.
4. The talent performance prediction method based on the Bayesian optimization model fusion algorithm as claimed in claim 1, wherein the GBDT model building in step (3) is implemented as follows:
for a sample x, there are J weak evaluators in total in the ensemble algorithm, and the t-th weak evaluator is currently being established, so the result of x on the t-th weak evaluator is expressed as $f_t(x)$; the result output by the entire Boosting algorithm on the sample is H(x), generally expressed as the weighted sum of all weak-evaluator results over the process t = 1 to t = J:

$$H(x) = \sum_{t=1}^{J} \phi_t f_t(x)$$
where $\phi_t$ is the weight of the t-th weak evaluator;
for the current iteration: $H_t(x) = H_{t-1}(x) + \phi_t f_t(x)$; the formula for establishing the first weak evaluator is $H_1(x) = H_0(x) + \phi_1 f_1(x)$; the RFR model generated in step (2) is taken as the evaluator of the initial GBDT prediction; the objective function of GBDT, i.e. the squared error function, is selected:

$$L = \sum_{i=1}^{m}\bigl(y_i - H(x_i)\bigr)^2$$
the root mean square error loss function is selected as an evaluation indicator for the GBDT model.
5. The talent performance prediction method based on the Bayesian optimization model fusion algorithm as claimed in claim 1, wherein the objective function of the XGBoost model in step (3) consists of a loss function and a structural risk function for each tree in XGBoost:

$$Obj^{(t)} = \sum_{i=1}^{M} l\bigl(y_i, \hat{y}_i^{(t)}\bigr) + \Omega(f_t)$$
where M denotes that M samples in total are used on the tree and l denotes the loss function of a single sample; after the XGBoost model iterations finish, the objective function on the last tree is the objective function of the whole XGBoost algorithm;
The structural risk function $\Omega(f_t)$ consists of two parts: one is the term $\gamma T$ controlling the tree structure, and the other is the regularization term, as follows:

$$\Omega(f_t) = \gamma T + \frac{1}{2}\lambda\sum_{j=1}^{T} w_j^2 + \alpha\sum_{j=1}^{T}\lvert w_j\rvert$$

where $\gamma$ denotes the coefficient that penalizes the objective function according to the total number of leaves, $\lambda$ and $\alpha$ denote the $l_2$ and $l_1$ coefficients that penalize the objective function according to the magnitude of the leaf weights, and $w_j$ denotes the leaf weight of the j-th leaf on the current tree.
6. The talent performance prediction method based on Bayesian optimization model fusion algorithm according to claim 1, wherein the step (4) is implemented as follows:
(4.1) segmenting the data into training set samples $M_{train}$ and test set samples $M_{test}$;
(4.2) on each individual learner, vertically stacking all cross-validated validation results to form prediction results of shape $(M_{train}, n)$ and $(M_{test}, n)$;
(4.3) transversely splicing the prediction results of all individual learners to form a new feature matrix; if there are N individual learners in total, the structure of the new feature matrix is $(M_{train}, N)$ and $(M_{test}, N)$;
(4.4) putting the new feature matrix into a meta-learner for training/prediction to obtain the final talent performance prediction model Mod.
CN202211327998.7A 2022-10-26 2022-10-26 Method for predicting talent performance based on Bayesian optimization model fusion algorithm Pending CN115659807A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211327998.7A CN115659807A (en) 2022-10-26 2022-10-26 Method for predicting talent performance based on Bayesian optimization model fusion algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211327998.7A CN115659807A (en) 2022-10-26 2022-10-26 Method for predicting talent performance based on Bayesian optimization model fusion algorithm

Publications (1)

Publication Number Publication Date
CN115659807A true CN115659807A (en) 2023-01-31

Family

ID=84993655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211327998.7A Pending CN115659807A (en) 2022-10-26 2022-10-26 Method for predicting talent performance based on Bayesian optimization model fusion algorithm

Country Status (1)

Country Link
CN (1) CN115659807A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116631516A (en) * 2023-05-06 2023-08-22 海南大学 Antituberculous peptide prediction system based on integration of mixed characteristic model and lifting model
CN117164103A (en) * 2023-07-03 2023-12-05 广西智碧达智慧环境科技有限公司 Intelligent control method, terminal and system of domestic sewage treatment system
CN117174313A (en) * 2023-09-03 2023-12-05 南通市康复医院(南通市第二人民医院) Method and system for establishing cerebral hemorrhage patient neural function prognosis prediction model
CN117174313B (en) * 2023-09-03 2024-05-10 南通市康复医院(南通市第二人民医院) Method and system for establishing cerebral hemorrhage patient neural function prognosis prediction model
CN117744540A (en) * 2024-02-19 2024-03-22 青岛哈尔滨工程大学创新发展中心 Underwater operation hydrodynamic characteristic trend prediction method of underwater unmanned aircraft
CN117744540B (en) * 2024-02-19 2024-04-30 青岛哈尔滨工程大学创新发展中心 Underwater operation hydrodynamic characteristic trend prediction method of underwater unmanned aircraft

Similar Documents

Publication Publication Date Title
CN115659807A (en) Method for predicting talent performance based on Bayesian optimization model fusion algorithm
CN109783817B (en) Text semantic similarity calculation model based on deep reinforcement learning
CN108875916B (en) Advertisement click rate prediction method based on GRU neural network
CN104866578B (en) A kind of imperfect Internet of Things data mixing fill method
CN108399428A (en) A kind of triple loss function design method based on mark than criterion
CN110516757A (en) A kind of transformer fault detection method and relevant apparatus
CN110444022A (en) The construction method and device of traffic flow data analysis model
Chakrabarty A regression approach to distribution and trend analysis of quarterly foreign tourist arrivals in India
CN115829024B (en) Model training method, device, equipment and storage medium
CN118133403B (en) City planning design drawing generation method, device, equipment, medium and product
CN117253037A (en) Semantic segmentation model structure searching method, automatic semantic segmentation method and system
CN114241267A (en) Structural entropy sampling-based multi-target architecture search osteoporosis image identification method
CN112465929B (en) Image generation method based on improved graph convolution network
CN117494760A (en) Semantic tag-rich data augmentation method based on ultra-large-scale language model
CN117151095A (en) Case-based treatment plan generation method
Ortelli et al. Faster estimation of discrete choice models via dataset reduction
CN115438784A (en) Sufficient training method for hybrid bit width hyper-network
CN112417304B (en) Data analysis service recommendation method and system for constructing data analysis flow
CN105701591A (en) Power grid service classification method based on neural network
Zafar et al. An Optimization Approach for Convolutional Neural Network Using Non-Dominated Sorted Genetic Algorithm-II.
CN112132259B (en) Neural network model input parameter dimension reduction method and computer readable storage medium
CN114529794A (en) Infrared and visible light image fusion method, system and medium
CN114692888A (en) System parameter processing method, device, equipment and storage medium
CN111177015A (en) Application program quality identification method and device, computer equipment and storage medium
CN117350549B (en) Distribution network voltage risk identification method, device and equipment considering output correlation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination