CN104035779A - Method for handling missing values during data stream decision tree classification - Google Patents

Method for handling missing values during data stream decision tree classification

Info

Publication number
CN104035779A
CN104035779A (application CN201410295212.7A)
Authority
CN
China
Prior art keywords
attribute
missing values
sample
decision tree
data stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410295212.7A
Other languages
Chinese (zh)
Inventor
吕品
侯旭珊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Software of CAS
Original Assignee
Institute of Software of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Software of CAS filed Critical Institute of Software of CAS
Priority to CN201410295212.7A priority Critical patent/CN104035779A/en
Publication of CN104035779A publication Critical patent/CN104035779A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention belongs to the technical field of data stream mining and particularly relates to a method for handling missing values during data stream decision tree classification. The method reads data samples from the data stream and updates a sliding window; when an attribute of the current data sample is detected to have a missing value, the missing-value handler for that attribute is updated if it already exists, or is adaptively selected and created according to the characteristics of the data if it does not; the handler then fills in the missing values of the data sample to obtain a complete sample, which is trained according to the Hoeffding decision tree classification process, and the decision tree classification result for the data stream is returned. Compared with existing methods, the proposed method has better time performance while sufficiently guaranteeing the classification accuracy of the decision-tree model, thereby reducing time overhead, improving time performance, increasing the processing speed of data stream classification, and meeting the requirements of practical data stream processing applications.

Description

Method for handling missing values in data stream decision tree classification
Technical field
The invention belongs to the field of data stream mining and specifically relates to a method for handling missing values in data stream decision tree classification.
Background technology
With the arrival of the big-data era, application systems continuously produce data streams at high speed, and how to mine useful information from these streams has become a focus of attention for practitioners. Data stream decision tree classification is an important research direction in data stream mining and can be applied in many areas such as network intrusion detection and credit card fraud detection. Real-world data streams often contain missing values caused by reasons such as equipment faults, faulty sensors, or manual operation errors. In data stream decision tree classification, missing values in the stream can severely degrade classification accuracy. However, a data stream can be scanned only once during mining, so missing values cannot be handled by a separate preprocessing step before mining.
Document [1] (Domingos P, Hulten G. Mining high-speed data streams[C]// Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2000: 71-80.) proposed the Hoeffding decision tree method, which uses the Hoeffding bound to learn incrementally from the data samples in a stream. The method routes each data sample to a leaf of the tree built so far; a leaf determines the optimal split attribute from its stored sample statistics and the Hoeffding bound, and is then split into an internal node. By continuously repeating this process, the decision tree is constructed dynamically until it becomes stable.
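For reference, the Hoeffding bound used in that work states that, after n independent observations of a real-valued random variable with range R, the true mean differs from the observed mean by at most

\epsilon = \sqrt{\frac{R^2 \ln(1/\delta)}{2n}}

with probability 1 − δ; a leaf is split once the difference in the split metric between the two best candidate attributes exceeds ε.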
Document [2] (Yang H, Fong S. Aerial root classifiers for predicting missing values in data stream decision tree classification[C]// 2011 SIAM International Conference on Data Mining (SDM 2011). 2011: 28-30.) proposed the ARC (Aerial Root Classifiers) method, which adds a missing-value handling mechanism on top of the Hoeffding decision tree method. ARC uses a sliding window to keep the most recent data samples; when a missing attribute value is detected, a sub-classifier built from the samples in the window predicts the missing value of that attribute, and the decision tree is then constructed according to the Hoeffding decision tree method. ARC also includes an update mechanism to deal with sub-classifiers becoming outdated: each attribute is assigned a weight according to the attribute metric used when splitting tree nodes, and the error rates of the attributes' sub-classifiers are combined by these weights into an overall error rate. When the overall error rate exceeds a preset threshold, the sub-classifiers of the highest-weight attributes are updated in turn until the overall error rate satisfies the requirement.
However, the time performance of the ARC method degrades significantly when the data samples have many characteristic attributes. Since time performance is an important metric in data stream mining, this severely limits the method: its processing efficiency drops, which reduces its practical value.
Summary of the invention
The problem addressed by the present invention is to overcome the deficiencies of the prior art by providing a method for handling missing values in data stream decision tree classification that adaptively selects the missing-value handling strategy according to the characteristics of the data, adopts an improved Bayesian classification model, and optimizes the update mechanism, thereby reducing time overhead, improving time performance, and increasing the processing speed of data stream classification so as to meet the requirements of practical data stream processing applications.
The technical solution of the present invention is a method for handling missing values in data stream decision tree classification, comprising the following steps:
Step 1: read a data sample from the data stream, and use a sliding window W of fixed capacity to keep the most recently arrived data samples;
Step 2: when an attribute X_i of the current data sample has a missing value, establish or update the missing-value handler for X_i. If a handler for X_i already exists, jump to step 4 to update it; otherwise go to step 3 to establish one;
Step 3: compute the standard deviation σ(X_i) of attribute X_i over the same-class samples in the sliding window W. If σ(X_i) does not exceed the threshold σ_m, replace the missing value with the mode or mean; otherwise build a sub-classifier to predict the missing value. Establish the missing-value handler in this way and jump to step 5;
Step 4: compute the weighted overall error rate E of the missing-value handlers. If E exceeds the threshold β, select the handler with the largest weight among those with error rate e_i > β* and update it, repeating until E no longer exceeds β;
Step 5: use the missing-value handler to fill in the missing value of attribute X_i, obtaining a complete data sample;
Step 6: train on the complete data sample according to the Hoeffding decision tree method to construct the decision-tree model dynamically, and update the weight of each attribute X_i according to the attribute metric used when splitting leaf nodes;
Step 7: return the decision tree classification result for the data stream.
In step 3, the method computes the standard deviation of the data samples and adaptively selects different missing-value handling strategies to fill in the missing attribute values of the data sample.
Further, for s data samples with data attributes X = {X_1, X_2, ..., X_n}, let x_{ij} denote the value of attribute X_i in the j-th sample. When X_i is a categorical attribute, the sample standard deviation σ(X_i) is

\sigma(X_i) = \frac{1}{s-1} \sum_{j=1}^{s} \mathrm{diff}(x_{ij}, M_i),

\mathrm{diff}(x_{ij}, M_i) = \begin{cases} 1, & x_{ij} \neq M_i \\ 0, & x_{ij} = M_i \end{cases},

where M_i denotes the mode of the values of attribute X_i over the samples used in the computation. When X_i is a continuous attribute, the sample standard deviation σ(X_i) is

\sigma(X_i) = \sqrt{\frac{1}{s-1} \sum_{j=1}^{s} (x_{ij} - \mu_i)^2}, \qquad \mu_i = \frac{1}{s} \sum_{j=1}^{s} x_{ij},

where μ_i denotes the mean of the values of attribute X_i over the samples used in the computation.
Further, σ_m is a predefined maximum acceptable sample standard deviation. When σ(X_i) ≤ σ_m, the missing value is replaced with the mode M_i or mean μ_i of the values of attribute X_i in the same-class samples; when σ(X_i) > σ_m, a sub-classifier is built from all the samples in the sliding window W to predict the missing value, where an improved Bayesian classification model is built as the sub-classifier when X_i is a categorical attribute, and a regression prediction model is built as the sub-classifier when X_i is a continuous attribute.
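As an illustration of this adaptive selection, the following Python sketch computes the two dispersion measures and chooses between replacement and a sub-classifier; the function names and the threshold value are illustrative, not taken from the patent.

import math
from collections import Counter

SIGMA_MAX = 0.3  # illustrative stand-in for the preset threshold sigma_m

def categorical_sigma(values):
    """Dispersion of a categorical attribute over the same-class window samples:
    the number of values differing from the mode, scaled by 1/(s-1).
    Assumes at least two samples."""
    s = len(values)
    mode = Counter(values).most_common(1)[0][0]
    return sum(1 for v in values if v != mode) / (s - 1), mode

def continuous_sigma(values):
    """Sample standard deviation of a continuous attribute (at least two samples)."""
    s = len(values)
    mean = sum(values) / s
    return math.sqrt(sum((v - mean) ** 2 for v in values) / (s - 1)), mean

def choose_handler(values, is_categorical):
    """Return ("replace", fill_value) when the dispersion is acceptable, otherwise
    ("sub-classifier", None) to signal that a predictive model should be built
    (Bayesian for a categorical attribute, regression for a continuous one)."""
    sigma, center = categorical_sigma(values) if is_categorical else continuous_sigma(values)
    if sigma <= SIGMA_MAX:
        return "replace", center        # mode or mean replacement
    return "sub-classifier", None       # build a predictive sub-classifier instead

For instance, choose_handler(["red", "red", "red", "blue"], True) yields ("sub-classifier", None), since one of the four values disagrees with the mode and σ = 1/3 exceeds the illustrative threshold.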
Further, when building the sub-classifier for a categorical attribute, an improved Bayesian classification method is used as the prediction model. When attribute X_i of a data sample is missing, the posterior conditional probability of each possible value x_{ij} of X_i is obtained according to the Bayesian classification method. The improved method then selects among the candidate values x_{ij} according to the magnitudes of these probabilities, rather than always choosing the value with the maximum posterior conditional probability, as the predicted value of the missing attribute X_i.
In step 4, the weighted overall error rate E of the missing-value handlers is computed to judge whether the handlers need to be updated, thereby solving the problem of handlers becoming outdated.
Further, according to the attribute metric used when splitting decision tree nodes, the missing-value handler of each attribute X_i is assigned a weight ω_i:

\omega_i = \frac{\bar{G}(X_i)}{\sum_{j=1}^{n} \bar{G}(X_j)},

and the error rates e_i of the handlers are combined by these weights into the overall error rate E:

E = \sum_{i=1}^{n} \omega_i e_i.

When the overall error rate E exceeds the predefined threshold β, the missing-value handlers need to be updated. The handler with the largest weight among those with error rate e_i > β* is selected and updated in turn until E ≤ β, where β* is the threshold that guarantees a handler is not outdated and β* < β.
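A minimal Python sketch of this update loop follows, assuming each handler exposes its weight, its current error rate, and a retrain operation; the interfaces, the halving of the error on retraining, and the default thresholds are illustrative, not prescribed by the patent.

from dataclasses import dataclass

@dataclass
class HandlerState:
    weight: float          # omega_i, derived from the attribute's split metric G(X_i)
    error: float           # e_i, the handler's observed error rate
    updated: bool = False  # whether the handler has been refreshed in this pass

    def retrain(self):
        # Placeholder: a real handler is rebuilt from the sliding window;
        # here the error rate is simply assumed to drop after the refresh.
        self.error *= 0.5
        self.updated = True

def update_handlers(handlers, beta=0.2, beta_star=0.1):
    """Refresh stale handlers until the weighted overall error rate
    E = sum_i omega_i * e_i no longer exceeds beta."""
    def weighted_error():
        return sum(h.weight * h.error for h in handlers)

    E = weighted_error()
    while E > beta:
        stale = [h for h in handlers if h.error > beta_star and not h.updated]
        if not stale:
            break                                     # nothing left to refresh
        max(stale, key=lambda h: h.weight).retrain()  # update the heaviest stale handler
        E = weighted_error()
    return E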
Compared with the prior art, the beneficial effect of the present invention is as follows: compared with existing methods for handling missing values in data stream decision tree classification, the proposed method has better time performance while sufficiently guaranteeing the classification accuracy of the decision-tree model, thereby reducing time overhead, improving time performance, increasing the processing speed of data stream classification, and meeting the requirements of practical data stream processing applications.
Brief description of the drawings
Fig. 1 is the main flow chart of the method of the invention;
Fig. 2 is the flow chart for establishing a missing-value handler;
Fig. 3 is the flow chart for updating a missing-value handler.
Embodiment
The specific embodiment of the present invention is described in detail below with reference to the accompanying drawings.
The main flow chart of the method of the invention is shown in Fig. 1.
(1) Adaptive selection and establishment of missing-value handlers
The specific flow for adaptively selecting and establishing a missing-value handler is shown in Fig. 2; the steps are as follows:
Step 1: detect that attribute X_i of the current data sample has a missing value;
Step 2: read all samples in the sliding window W that belong to the same class as the current data sample, and compute the standard deviation σ(X_i) of attribute X_i over these same-class samples;
Step 3: with σ_m preset as the maximum acceptable sample standard deviation, go to step 4 if σ(X_i) does not exceed σ_m, otherwise go to step 5;
Step 4: establish the missing-value handler with the mode/mean replacement method: when X_i is a categorical attribute, replace the missing value with the mode of the values of X_i in the same-class samples of the sliding window W; when X_i is a continuous attribute, replace the missing value with the mean of the values of X_i in the same-class samples of the sliding window W;
Step 5: establish the missing-value handler with the sub-classifier prediction method: when X_i is a categorical attribute, build an improved Bayesian classification model to predict the missing value; when X_i is a continuous attribute, build a regression prediction model to predict the missing value.
In step 3 above, for s data samples with data attributes X = {X_1, X_2, ..., X_n}, let x_{ij} denote the value of attribute X_i in the j-th sample. When X_i is a categorical attribute, the sample standard deviation σ(X_i) is

\sigma(X_i) = \frac{1}{s-1} \sum_{j=1}^{s} \mathrm{diff}(x_{ij}, M_i),

\mathrm{diff}(x_{ij}, M_i) = \begin{cases} 1, & x_{ij} \neq M_i \\ 0, & x_{ij} = M_i \end{cases},

where M_i denotes the mode of the values of attribute X_i over the samples used in the computation. When X_i is a continuous attribute, the sample standard deviation σ(X_i) is

\sigma(X_i) = \sqrt{\frac{1}{s-1} \sum_{j=1}^{s} (x_{ij} - \mu_i)^2}, \qquad \mu_i = \frac{1}{s} \sum_{j=1}^{s} x_{ij},

where μ_i denotes the mean of the values of attribute X_i over the samples used in the computation.
In step 5 above, an improved Bayesian classification model is adopted as the prediction model for categorical attributes. When a categorical attribute X_i of a data sample has a missing value, an improved Bayesian classification model NB_i(X') → X_i is built to predict the missing value of X_i, where X' = X − X_i + C and C is the class of the data sample. Suppose attribute X_i has v possible values; according to the Bayesian classification model NB_i, the posterior conditional probabilities P(x_{ij} | X'), j = 1, 2, ..., v, are obtained, where the x_{ij} are the v values of attribute X_i. After the posterior conditional probability P(x_{ij} | X') of each value is obtained, the predicted value of X_i is selected among the candidate values x_{ij} according to the magnitudes of these probabilities, rather than simply choosing the value with the maximum probability.
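The following Python sketch shows one way such a sub-classifier could work, assuming a frequency-count naive Bayes over the remaining attributes plus the class label, with add-one smoothing and probability-proportional sampling standing in for the selection step; all names and the smoothing choice are illustrative, not the patent's exact model.

import random
from collections import defaultdict

class BayesianImputer:
    """Sub-classifier NB_i(X') -> X_i for one categorical attribute X_i, trained on
    sliding-window samples whose features X' are the remaining attributes plus the
    class label."""

    def __init__(self):
        self.value_counts = defaultdict(int)                      # count(X_i = v)
        self.cond_counts = defaultdict(lambda: defaultdict(int))  # count(feature pair | X_i = v)

    def update(self, features, value):
        """features: iterable of (attribute_name, attribute_value) pairs drawn from X'."""
        self.value_counts[value] += 1
        for f in features:
            self.cond_counts[value][f] += 1

    def predict(self, features):
        """Sample a value of X_i in proportion to its posterior probability instead of
        always returning the most probable value."""
        values = list(self.value_counts)
        if not values:
            return None                                 # nothing learned yet
        features = list(features)
        total = sum(self.value_counts.values())
        weights = []
        for v in values:
            p = self.value_counts[v] / total            # prior P(X_i = v)
            for f in features:                          # add-one smoothed likelihoods
                p *= (self.cond_counts[v][f] + 1) / (self.value_counts[v] + len(values))
            weights.append(p)
        return random.choices(values, weights=weights, k=1)[0]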
(2) Updating missing-value handlers
The specific flow for updating a missing-value handler is shown in Fig. 3; the steps are as follows:
Step 1: let e_i be the error rate of the missing-value handler of attribute X_i and ω_i its weight; combine the error rates of all handlers by their weights to compute the weighted overall error rate E;
Step 2: with β preset as the maximum weighted overall error rate, go to step 3 if E exceeds β and there are handlers that have not yet been updated, otherwise go to step 4;
Step 3: with β* preset as the error rate that guarantees a handler is not outdated, β* < β, select the handler with the largest weight whose error rate exceeds β* and update it, then execute step 1 again;
Step 4: exit the update flow.
In step 1 above, the weight ω_i of a missing-value handler is computed from the attribute metric used when splitting decision tree nodes: with Ḡ(X_i) denoting the attribute metric of attribute X_i, the weight ω_i of the handler of attribute X_i is

\omega_i = \frac{\bar{G}(X_i)}{\sum_{j=1}^{n} \bar{G}(X_j)},

and the weighted overall error rate E of all missing-value handlers is

E = \sum_{i=1}^{n} \omega_i e_i.
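As an illustrative example (the numbers are not from the patent): suppose three attributes have averaged split metrics Ḡ = 0.5, 0.3 and 0.2, giving weights ω = 0.5, 0.3 and 0.2, and their handlers currently have error rates e = 0.3, 0.1 and 0.05, so that E = 0.5·0.3 + 0.3·0.1 + 0.2·0.05 = 0.19. With β = 0.15 and β* = 0.1, E exceeds β and only the first handler satisfies e_i > β*, so it is retrained; if its error then falls to 0.1, E becomes 0.5·0.1 + 0.3·0.1 + 0.2·0.05 = 0.09 ≤ β and the update flow exits.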
(3) Overall design of the method of the invention
The specific implementation steps of the present invention are as follows (a code sketch of the overall per-sample flow is given after the step list):
Step 1: read a data sample from the data stream, and use a sliding window W of fixed capacity to keep the most recently arrived data samples;
Step 2: when an attribute X_i of the data sample has a missing value, establish or update the missing-value handler for X_i. If the handler already exists, jump to step 4 to update it; otherwise go to step 3 to establish it;
Step 3: establish the missing-value handler according to the flow and steps shown in Fig. 2, then jump to step 5;
Step 4: update the missing-value handler according to the flow and steps shown in Fig. 3;
Step 5: use the missing-value handler to fill in the missing value of attribute X_i, obtaining a complete data sample;
Step 6: train on the complete data sample according to the Hoeffding decision tree method to construct the decision-tree model dynamically, and update the weight of each attribute X_i according to the attribute metric used when splitting leaf nodes;
Step 7: return the decision tree classification result for the data stream.
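Putting the steps together, the following Python skeleton sketches the per-sample processing loop; the tree, handler factory, and update hook are passed in as parameters because the patent does not prescribe their concrete interfaces (learn_one/predict_one, create_handler, maybe_update, and fill are illustrative names).

from collections import deque

def classify_stream(stream, tree, create_handler, maybe_update, window_size=1000):
    """stream yields (attribute_values, class_label) pairs; tree is any incremental
    Hoeffding-tree learner offering learn_one/predict_one; create_handler(window, i)
    returns an object with a fill(values) method; maybe_update(handlers, i, window)
    refreshes a stale handler."""
    window = deque(maxlen=window_size)        # step 1: fixed-capacity sliding window W
    handlers = {}                             # attribute index -> missing-value handler
    for values, label in stream:
        window.append((values, label))
        for i, x in enumerate(values):
            if x is None:                     # step 2: attribute X_i is missing
                if i in handlers:
                    maybe_update(handlers, i, window)        # step 4: refresh if stale
                else:
                    handlers[i] = create_handler(window, i)  # step 3: adaptive creation
                values[i] = handlers[i].fill(values)         # step 5: fill in the value
        tree.learn_one(values, label)         # step 6: train the Hoeffding tree
        yield tree.predict_one(values)        # step 7: per-sample classification result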

Claims (5)

1. A method for handling missing values in data stream decision tree classification, characterized by the following implementation steps:
Step 1: read a data sample from the data stream, and use a sliding window W of fixed capacity to keep the most recently arrived data samples;
Step 2: when an attribute X_i of the current data sample has a missing value, establish or update the missing-value handler for X_i; if a handler for X_i already exists, jump to step 4 to update it, otherwise go to step 3 to establish one;
Step 3: compute the standard deviation σ(X_i) of attribute X_i over the same-class samples in the sliding window W; if σ(X_i) does not exceed the threshold σ_m, replace the missing value with the mode or mean, otherwise build a sub-classifier to predict the missing value; establish the missing-value handler in this way and jump to step 5;
Step 4: compute the weighted overall error rate E of the missing-value handlers; if E exceeds the threshold β, select the handler with the largest weight among those with error rate e_i > β* and update it, repeating until E no longer exceeds β;
Step 5: use the missing-value handler to fill in the missing value of attribute X_i, obtaining a complete data sample;
Step 6: train on the complete data sample according to the Hoeffding decision tree method to construct the decision-tree model dynamically, and update the weight of each attribute X_i according to the attribute metric used when splitting leaf nodes;
Step 7: return the decision tree classification result for the data stream.
2. The method for handling missing values in data stream decision tree classification according to claim 1, characterized in that the standard deviation σ(X_i) of attribute X_i over the same-class samples in the sliding window W in step 3 is computed as follows:
for s data samples with data attributes X = {X_1, X_2, ..., X_n}, let x_{ij} denote the value of attribute X_i in the j-th sample; when X_i is a categorical attribute, the sample standard deviation σ(X_i) is

\sigma(X_i) = \frac{1}{s-1} \sum_{j=1}^{s} \mathrm{diff}(x_{ij}, M_i),

\mathrm{diff}(x_{ij}, M_i) = \begin{cases} 1, & x_{ij} \neq M_i \\ 0, & x_{ij} = M_i \end{cases},

where M_i denotes the mode of the values of attribute X_i over the samples used in the computation; when X_i is a continuous attribute, the sample standard deviation σ(X_i) is

\sigma(X_i) = \sqrt{\frac{1}{s-1} \sum_{j=1}^{s} (x_{ij} - \mu_i)^2}, \qquad \mu_i = \frac{1}{s} \sum_{j=1}^{s} x_{ij},

where μ_i denotes the mean of the values of attribute X_i over the samples used in the computation.
3. The method for handling missing values in data stream decision tree classification according to claim 1, characterized in that the missing-value handler in step 3 is established as follows: σ_m is preset as the maximum acceptable sample standard deviation; when σ(X_i) ≤ σ_m, the missing value is replaced with the mode M_i or mean μ_i of the values of attribute X_i in the same-class samples; when σ(X_i) > σ_m, a sub-classifier is built from all the samples in the sliding window W to predict the missing value, where an improved Bayesian classification model is built as the sub-classifier when X_i is a categorical attribute, and a regression prediction model is built as the sub-classifier when X_i is a continuous attribute.
4. The method for handling missing values in data stream decision tree classification according to claim 3, characterized in that the improved Bayesian classification model is built as the sub-classifier as follows: when building the sub-classifier for a categorical attribute, an improved Bayesian classification method is adopted as the prediction model; when attribute X_i of a data sample is missing, the posterior conditional probability of each value x_{ij} of X_i is obtained according to the Bayesian classification method, and the improved Bayesian classification method selects among the candidate values x_{ij} according to the magnitudes of these probabilities, rather than choosing the value with the maximum posterior conditional probability, as the predicted value of the missing attribute X_i.
5. The method for handling missing values in data stream decision tree classification according to claim 1, characterized in that step 4 is implemented as follows:
according to the attribute metric used when splitting decision tree nodes, the missing-value handler of each attribute X_i is assigned a weight ω_i:

\omega_i = \frac{\bar{G}(X_i)}{\sum_{j=1}^{n} \bar{G}(X_j)},

and the error rates e_i of the handlers are combined by these weights into the overall error rate E:

E = \sum_{i=1}^{n} \omega_i e_i;

when the overall error rate E exceeds the predefined threshold β, the missing-value handlers need to be updated; the handler with the largest weight among those with error rate e_i > β* is selected and updated in turn until E ≤ β, where β* is the threshold that guarantees a handler is not outdated and β* < β.
CN201410295212.7A 2014-06-25 2014-06-25 Method for handling missing values during data stream decision tree classification Pending CN104035779A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410295212.7A CN104035779A (en) 2014-06-25 2014-06-25 Method for handling missing values during data stream decision tree classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410295212.7A CN104035779A (en) 2014-06-25 2014-06-25 Method for handling missing values during data stream decision tree classification

Publications (1)

Publication Number Publication Date
CN104035779A true CN104035779A (en) 2014-09-10

Family

ID=51466554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410295212.7A Pending CN104035779A (en) 2014-06-25 2014-06-25 Method for handling missing values during data stream decision tree classification

Country Status (1)

Country Link
CN (1) CN104035779A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6871201B2 (en) * 2001-07-31 2005-03-22 International Business Machines Corporation Method for building space-splitting decision tree
CN101719147A (en) * 2009-11-23 2010-06-02 合肥兆尹信息科技有限责任公司 Rochester model-naive Bayesian model-based data classification system
CN102750286A (en) * 2011-04-21 2012-10-24 常州蓝城信息科技有限公司 Novel decision tree classifier method for processing missing data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
侯旭珊 et al., 2014 International Conference on Artificial Intelligence and Software Engineering (AISE 2014), 6 February 2014 *
杨航 et al., "Aerial root classifiers for predicting missing values in data stream decision tree classification", 2011 SIAM International Conference on Data Mining *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107249434A (en) * 2015-02-12 2017-10-13 皇家飞利浦有限公司 Robust classification device
US10929774B2 (en) 2015-02-12 2021-02-23 Koninklijke Philips N.V. Robust classifier
CN106156260B (en) * 2015-04-28 2020-01-21 阿里巴巴集团控股有限公司 Method and device for repairing missing data
CN106156260A (en) * 2015-04-28 2016-11-23 阿里巴巴集团控股有限公司 The method and apparatus that a kind of shortage of data is repaired
CN105930303A (en) * 2016-04-11 2016-09-07 中国石油大学(华东) Robust estimation method for estimating equation containing non-ignorable missing data
CN106354753A (en) * 2016-07-31 2017-01-25 信阳师范学院 Bayes classifier based on pattern discovery in data flow
CN106919706A (en) * 2017-03-10 2017-07-04 广州视源电子科技股份有限公司 Data updating method and device
CN108650065A (en) * 2018-03-15 2018-10-12 西安电子科技大学 Stream data based on window lacks processing method
CN108650065B (en) * 2018-03-15 2021-09-10 西安电子科技大学 Window-based streaming data missing processing method
CN109558436B (en) * 2018-11-03 2023-03-14 北京交通大学 Airport flight delay cause and effect relationship mining method based on transfer entropy
CN109558436A (en) * 2018-11-03 2019-04-02 北京交通大学 Air station flight delay causality method for digging based on entropy of transition
CN110647519A (en) * 2019-08-30 2020-01-03 中国平安人寿保险股份有限公司 Method and device for predicting missing attribute value in test sample
CN110647519B (en) * 2019-08-30 2023-10-03 中国平安人寿保险股份有限公司 Method and device for predicting missing attribute value in test sample
CN110926524A (en) * 2019-10-29 2020-03-27 中国电子科技集团公司第七研究所 Method for predicting coupling relation between network resources and environment
CN110926524B (en) * 2019-10-29 2022-01-04 中国电子科技集团公司第七研究所 Method for predicting coupling relation between network resources and environment
CN113760484A (en) * 2020-06-29 2021-12-07 北京沃东天骏信息技术有限公司 Data processing method and device
CN113780666A (en) * 2021-09-15 2021-12-10 湖北天天数链技术有限公司 Missing value prediction method and device and readable storage medium
CN113780666B (en) * 2021-09-15 2024-03-22 湖北天天数链技术有限公司 Missing value prediction method and device and readable storage medium

Similar Documents

Publication Publication Date Title
CN104035779A (en) Method for handling missing values during data stream decision tree classification
US10366095B2 (en) Processing time series
Babazadeh et al. Application of particle swarm optimization to transportation network design problem
US11741373B2 (en) Turbulence field update method and apparatus, and related device thereof
CN110705607B (en) Industry multi-label noise reduction method based on cyclic re-labeling self-service method
CN111860989B (en) LSTM neural network short-time traffic flow prediction method based on ant colony optimization
CN112348519A (en) Method and device for identifying fraudulent user and electronic equipment
CN102469103A (en) Trojan event prediction method based on BP (Back Propagation) neural network
CN116307215A (en) Load prediction method, device, equipment and storage medium of power system
CN113379059B (en) Model training method for quantum data classification and quantum data classification method
CN105809280A (en) Prediction method for airport capacity demands
EP4040421B1 (en) Method and apparatus for predicting traffic data and electronic device
CN106100922A (en) The Forecasting Methodology of the network traffics of TCN and device
US20210081800A1 (en) Method, device and medium for diagnosing and optimizing data analysis system
CN110059938B (en) Power distribution network planning method based on association rule driving
CN114119110A (en) Project cost list collection system and method thereof
CN115359799A (en) Speech recognition method, training method, device, electronic equipment and storage medium
CN116933946A (en) Rail transit OD passenger flow prediction method and system based on passenger flow destination structure
US20230410474A1 (en) Method and apparatus for training relationship recognition model and method and apparatus for analyzing image
CN112347776B (en) Medical data processing method and device, storage medium and electronic equipment
CN116739650A (en) Method, device, equipment and storage medium for updating behavior data prediction model
CN116385059A (en) Method, device, equipment and storage medium for updating behavior data prediction model
CN116502649A (en) Training method and device for text generation model, electronic equipment and storage medium
CN109961085A (en) The method for building up and device of flight delay prediction model based on Bayesian Estimation
CN107404120B (en) Equipment action frequency mining method in reactive power optimization online control

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20140910