CN105260794A - Load predicting method of cloud data center - Google Patents

Load predicting method of cloud data center

Info

Publication number
CN105260794A
CN105260794A
Authority
CN
China
Prior art keywords
historical data
cpu
prediction
data center
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510658479.2A
Other languages
Chinese (zh)
Inventor
乔梁 (Qiao Liang)
付周望 (Fu Zhouwang)
戚正伟 (Qi Zhengwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201510658479.2A priority Critical patent/CN105260794A/en
Publication of CN105260794A publication Critical patent/CN105260794A/en
Pending legal-status Critical Current


Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a load prediction method for a cloud data center. The method comprises the steps of: acquiring historical data of the cloud data center and normalizing them; computing the correlation between the CPU historical data and the other performance indexes; extracting time windows; extracting features; fusing the features by splicing the extracted performance-index features and inputting them into an autoencoder neural network for further compression, finally obtaining a common compressed feature; applying manual intervention; performing supervised learning; and predicting the result. The method can discover latent change signals and thus grasp the direction of change accurately, fitting actual demand closely. In practical applications it can improve prediction accuracy by 5 to 10 percent.

Description

Load prediction method for a cloud data center
Technical field
The present invention relates to the field of data-center performance monitoring and prediction, and mainly involves techniques from machine learning and deep learning. Specifically, it proposes a method that fuses multiple performance features of a cloud computing data center and adds manual intervention, so as to predict the data-center load accurately; it is better suited to the complex and changeable conditions of a cloud data center.
Background technology
The present era is the era of big data. According to statistics, by 2013 the world's Internet was producing about 1 EB (one billion GB) of data every day, and the growth rate of data only keeps accelerating. These data contain enormous value, but to exploit them they must first be stored, and traditional data centers can no longer meet the corresponding requirements. Compared with traditional data centers, cloud data centers offer a high degree of virtualization, larger scale, automated management, and better energy efficiency. Correspondingly, however, these same characteristics make the state of a cloud data center far more changeable. The requirements on a cloud data center are no longer limited to state monitoring; predicting its state has also become a hot topic. With a good prediction method, material resources can be planned better: if future load is expected to be low, resources can be consolidated onto certain servers through scheduling, and the remaining idle servers can be shut down to save resources.
Predicting data-center hotspots is in essence predicting the trend of the time series of data-center server load. There is already much research on load prediction at home and abroad, seeking to improve accuracy in various ways. Most previous prediction work concentrates on algorithms such as regression, moving averages, and noise filtering; for traditional data centers these have been proven to predict well, but most are no longer applicable to the changeable cloud data center.
Machine learning, which studies how computers can simulate human learning behavior, has in recent years brought new developments to this field. By the early 21st century the concept of "big data" had emerged, and as data volumes grew, many traditional methods revealed drawbacks: either the analysis results were wrong, or convergence was too slow. In 2006, Professor Hinton of the University of Toronto and his team published an article proposing a fast learning algorithm based on deep belief networks, opening the gate to deep learning. Compared with traditional machine learning, deep learning focuses on extracting feature representations from data and is therefore better suited to data without labels; this style of learning is called unsupervised learning. Because large quantities of unlabeled data are much easier to obtain than labeled data, extracting as much effective information as possible from unlabeled data, which is the core of deep learning, makes deep learning a mechanism closer to the human brain. Applying the ideas of deep learning to cloud data-center load prediction can therefore improve precision considerably.
Deep learning cannot, however, solve every problem; in particular, a cloud data center faces many situations that are inherently hard to foresee. For example, if a website will start presale of a singer's concert tickets at 10 p.m. on a given day, people can naturally anticipate that server load will surge when the ticket channel opens, but a computer cannot predict this. Prediction with human-machine interaction may therefore better fit future prediction patterns.
Summary of the invention
Aimed at the architecture of most existing cloud data centers, the present invention creatively proposes a feature-fusion prediction algorithm with manual intervention, overcoming the difficulty of handling the variability of current cloud data centers. The algorithm uses deep learning to extract features, fuses multiple performance indexes, and adds manual intervention to assist prediction, which greatly improves the precision of prediction. At the same time, the dependencies between algorithm modules are small, so the computation can be distributed, substantially reducing the running time required.
The object of the invention is achieved through the following technical solution:
A load prediction method for a cloud data center, characterized in that the method comprises the following steps:
Step 1: collect the historical data of the cloud data center to be predicted, and normalize them;
Step 2: compute the correlation between the CPU historical data and each of the other performance indexes; performance indexes whose correlation exceeds a threshold φ are added to a set A;
Step 3: time-window extraction: randomly draw time windows, with the window length set according to actual conditions; the front part of each window serves as the prediction input and, during training, the rear part serves as the prediction output;
Step 4: feature extraction: compress the CPU historical data and each performance index in set A with a three-layer autoencoder neural network to obtain a feature for each performance index;
Here the autoencoder is restricted to a three-layer neural network with a single hidden layer, trained so that the vector output by the network reproduces its input vector, and the number of units in the middle hidden layer is limited to 60% of the number of input-layer units.
Step 5: feature fusion: splice the per-index features obtained in step 4, input them into an autoencoder neural network, and compress further to obtain a common compressed feature;
Step 6: add manual intervention: on the same time window, a person judges where a hotspot will occur in the prediction period and how strong it will be, i.e. artificial weights are added to the time series; this can usually be done with a simple click operation. The added weight is obtained from the following formula:
I_tv = I_tv + r · exp(−(x − time)² · σ1²)   for x ≤ time (left of the peak)
I_tv = I_tv + r · exp(−(x − time)² · σ2²)   for x > time (right of the peak)
where x, σ1 and σ2 are manually set parameters, representing the peak and the speed of convergence on the left and right sides respectively;
Step 7: supervised learning. The CPU's own feature vector after feature extraction, the shared feature vector, and the manual-intervention value vector of the output time period are spliced together as the input; the rear part of the moving window serves as the output, and a neural network is trained. During training, the influence of the manual intervention must be controlled, so a sparsity factor is added and the neural-network cost function is modified as follows:
J(W, b) = (1/2) · ‖h_{W,b}(x) − x‖² + (λ/2) · Σ_{i = t+s+1}^{s1} Σ_{j = 1}^{s2} (W_{ji}^{(1)})²
where t is the length of the CPU feature vector, s is the length of the shared feature vector, and s_i is the number of units in layer i.
Step 8: prediction. Using the parameters obtained by training the networks above, monitor the running state of the CPU for a period of time during actual operation, perform the same feature extraction and fusion, and finally input the result into the final model to obtain the prediction.
The features of the data, rather than the raw values, are used for prediction; the historical data comprise CPU historical data, Memory historical data, Disk historical data and network-I/O historical data. The prediction also incorporates the manual-intervention model.
Compared with the prior art, the beneficial effect of the invention is that feature extraction can discover latent change signals and thereby grasp the direction of change more accurately, while manual intervention fits actual demand more closely; in practical applications the method can improve prediction accuracy by about 5-10%.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of performance-index feature fusion;
Fig. 2 is a schematic diagram of neural-network training and prediction;
Fig. 3 is a flow chart of the method of the invention.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
The CPU of the data center is taken as the prediction target; the other performance indexes (Memory, Disk, network I/O, etc.) are handled similarly. The load prediction method for a cloud data center mainly comprises deep-learning feature extraction, multi-performance-index fusion, and manual intervention. The concrete steps are as follows:
Step 1: data collection. Because the service content of each data center differs, predicting the load of a particular data center requires collecting its own historical data. The collection techniques involved are not detailed here; the objects collected include CPU (possibly multiple CPUs), Memory, Disk, network I/O, etc. The longer the monitoring period the better, preferably no less than one month; the monitoring granularity can be set according to actual requirements, here one sample every 30 seconds. Each index must be normalized.
1.1 Sample acquisition. Operating systems generally ship with performance-monitoring tools, such as Performance Monitor under Windows and top, iostat and vmstat under Linux (top and vmstat report the running data of CPU and memory, while iostat monitors disks). Some distributed-system clusters also have corresponding tools, such as Ganglia for Hadoop. In a cluster, all nodes periodically report their data to the master node, which performs collection and prediction.
1.2 Sample preprocessing. All samples are normalized to eliminate differences in numerical scale:
x′_i = (x_i − x_min) / (x_max − x_min)
All sample data thus fall between 0 and 1. x′_i and x_i denote the data after and before normalization respectively, and x_max and x_min denote the maximum and minimum of the raw data. After prediction finishes, the normalized data are converted back to true values:
x_i = x′_i × (x_max − x_min) + x_min
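The normalization and its inverse can be sketched in Python as follows; the helper names `normalize` and `denormalize` are illustrative, not from the patent:

```python
def normalize(xs):
    """Min-max scale samples into [0, 1]; also return (min, max) so the
    prediction can be converted back to true values afterwards."""
    x_min, x_max = min(xs), max(xs)
    scaled = [(x - x_min) / (x_max - x_min) for x in xs]
    return scaled, x_min, x_max

def denormalize(scaled, x_min, x_max):
    """Invert the min-max scaling to recover raw load values."""
    return [x * (x_max - x_min) + x_min for x in scaled]

cpu = [12.0, 55.0, 98.0, 40.0]            # e.g. raw CPU utilisation samples
scaled, lo, hi = normalize(cpu)
restored = denormalize(scaled, lo, hi)
assert min(scaled) == 0.0 and max(scaled) == 1.0
assert all(abs(a - b) < 1e-9 for a, b in zip(restored, cpu))
```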
Step 2: compute correlations. Compute the correlation between the CPU historical data and each of the other performance indexes; performance indexes whose correlation exceeds the threshold φ are added to set A.
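The patent does not specify which correlation measure is used; assuming the Pearson coefficient, step 2 can be sketched as:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def correlated_indexes(cpu, others, phi=0.8):
    """Build set A: performance indexes whose |correlation| with the CPU
    series exceeds the threshold phi (phi=0.8 is an assumed value)."""
    return {name for name, series in others.items()
            if abs(pearson(cpu, series)) > phi}

cpu    = [0.1, 0.4, 0.5, 0.9]
others = {"memory": [0.2, 0.5, 0.6, 1.0],   # tracks the CPU closely
          "disk":   [0.9, 0.1, 0.8, 0.2]}   # essentially unrelated
A = correlated_indexes(cpu, others, phi=0.8)
assert A == {"memory"}
```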
Step 3: time-window extraction. Randomly draw a large number of time windows, with the window length set according to actual conditions; the front part of each window serves as the prediction input and, during training, the rear part serves as the prediction output.
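A sketch of the window-drawing step; the 80/20 split between the input (front) and target (rear) parts is an assumed setting, since the patent leaves the proportions to actual conditions:

```python
import random

def draw_windows(series, n_windows, win_len, in_frac=0.8, seed=0):
    """Randomly draw time windows from a load series; the front of each
    window is the model input, the rear is the training target."""
    rng = random.Random(seed)
    split = int(win_len * in_frac)
    samples = []
    for _ in range(n_windows):
        start = rng.randrange(len(series) - win_len + 1)
        window = series[start:start + win_len]
        samples.append((window[:split], window[split:]))
    return samples

series = list(range(100))                  # stand-in for a normalized load series
pairs = draw_windows(series, n_windows=5, win_len=10)
assert len(pairs) == 5
assert all(len(x) == 8 and len(y) == 2 for x, y in pairs)
```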
Step 4: feature extraction. A three-layer autoencoder neural network is used to compress the CPU historical record and each performance index in set A. An autoencoder is simply a neural network whose output layer reproduces its input layer; by giving the hidden layer fewer units than the input and output layers, the goal of feature compression and extraction is achieved. The feature finally extracted is precisely the middle layer of this three-layer network, a vector of lower dimension than the input layer. The network trained for feature extraction does not need repeated retraining: the parameters obtained after training once on a data set can be reused, so the scale of network training is very small most of the time.
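A minimal NumPy sketch of such a three-layer autoencoder, with the hidden layer fixed at 60% of the input width as the method prescribes; the learning rate, epoch count, and sigmoid activation are illustrative assumptions:

```python
import numpy as np

def train_autoencoder(X, hidden_ratio=0.6, lr=0.1, epochs=500, seed=0):
    """Three-layer autoencoder: the hidden layer (60% of the input width)
    learns a compressed feature of each performance index."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    n_hid = max(1, int(n_in * hidden_ratio))
    W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)           # compressed features
        Y = sigmoid(H @ W2 + b2)           # reconstruction of the input
        dY = (Y - X) * Y * (1 - Y)         # squared-error + sigmoid gradient
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
    return lambda Z: sigmoid(Z @ W1 + b1)  # encoder: input -> middle layer

X = np.random.default_rng(1).uniform(0.2, 0.8, (50, 10))  # normalized windows
encode = train_autoencoder(X)
features = encode(X)
assert features.shape == (50, 6)           # hidden layer is 60% of 10 inputs
```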
Step 5: feature fusion. The per-index features obtained in the previous step are spliced and then input into another autoencoder for further compression, finally yielding a common compressed feature that captures as much as possible of what the indexes have in common. A schematic of feature fusion and extraction is shown in Fig. 1. Steps 4 and 5 require determining the parameters of the sparse autoencoder networks, namely the sparsity value and the number of hidden-layer units; these are chosen by a two-dimensional traversal, keeping the sparsity value and hidden-unit count that minimize the reconstruction error after autoencoding.
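The two-dimensional traversal over sparsity value and hidden-unit count can be sketched generically; `train_fn` is a hypothetical stand-in for training the autoencoder and returning its reconstruction error:

```python
def grid_search(train_fn, sparsity_values, hidden_sizes):
    """Two-dimensional traversal over (sparsity, hidden units); keep the
    pair with the lowest reconstruction error, as in steps 4 and 5."""
    best = None
    for rho in sparsity_values:
        for h in hidden_sizes:
            err = train_fn(rho, h)         # reconstruction error after training
            if best is None or err < best[0]:
                best = (err, rho, h)
    return best[1], best[2]

# Toy stand-in for training: pretend error is minimized at rho=0.05, h=6.
fake_train = lambda rho, h: abs(rho - 0.05) + abs(h - 6)
best_pair = grid_search(fake_train, [0.01, 0.05, 0.1], [4, 6, 8])
assert best_pair == (0.05, 6)
```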
Step 6: manual intervention. On the same time window, a person judges where a hotspot will occur in the prediction period and how strong it will be, i.e. artificial weights are added to the time series. This can usually be done with a simple click operation. The added weight is obtained from the following formula:
I_tv = I_tv + r · exp(−(x − time)² · σ1²)   for x ≤ time (left of the peak)
I_tv = I_tv + r · exp(−(x − time)² · σ2²)   for x > time (right of the peak)
where x, σ1 and σ2 are manually set parameters, representing the peak and the speed of convergence on the left and right sides respectively. Depending on the services the server runs, the ranges of influence on the left and right sides are set according to the concrete situation.
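Interpreting the formula as a Gaussian bump centred at the flagged instant, with separate decay rates σ1 and σ2 on the two sides (this piecewise reading is an assumption based on the description), the intervention can be sketched as:

```python
import math

def add_intervention(series, time, r, sigma_left, sigma_right):
    """Superimpose an operator-specified bump of height r at index `time`;
    sigma_left / sigma_right control how fast it decays on each side."""
    out = []
    for x, v in enumerate(series):
        sigma = sigma_left if x <= time else sigma_right
        out.append(v + r * math.exp(-((x - time) ** 2) * sigma ** 2))
    return out

base = [0.3] * 9                           # flat baseline load
boosted = add_intervention(base, time=4, r=0.5, sigma_left=0.8, sigma_right=0.4)
assert abs(boosted[4] - 0.8) < 1e-9        # full bump at the flagged instant
assert boosted[3] > boosted[2]             # weight decays away from the peak
```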
Step 7: supervised learning. The CPU's own feature vector after feature extraction, the shared feature vector, and the manual-intervention value vector of the output time period are spliced together as the input; the rear part of the moving window serves as the output, and a neural network is trained. During training, the influence of the manual intervention must be controlled, so a sparsity factor is added and the neural-network cost function is modified as follows:
J(W, b) = (1/2) · ‖h_{W,b}(x) − x‖² + (λ/2) · Σ_{i = t+s+1}^{s1} Σ_{j = 1}^{s2} (W_{ji}^{(1)})²
where t is the length of the CPU feature vector, s is the length of the shared feature vector, and s_i is the number of units in layer i. A schematic of the supervised learning is shown in Fig. 2. For larger training data sets, a fast-convergence algorithm such as L-BFGS should be adopted here. The network parameters can further be adjusted dynamically in an online fashion.
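A sketch of evaluating the modified cost, assuming the first-layer weight matrix is laid out as (hidden units × inputs) with the manual-intervention entries occupying the input positions after the CPU block (length t) and the shared block (length s); this layout is an assumption for illustration:

```python
import numpy as np

def modified_cost(recon, target, W1, lam, t, s):
    """Squared reconstruction error plus a lambda/2 penalty on the
    first-layer weights attached to inputs beyond the CPU (t) and shared
    (s) feature blocks, i.e. the manual-intervention entries, so the
    intervention signal cannot dominate training."""
    recon_err = 0.5 * np.sum((recon - target) ** 2)
    penalty = 0.5 * lam * np.sum(W1[:, t + s:] ** 2)  # intervention columns only
    return recon_err + penalty

W1 = np.ones((3, 4))     # 4 inputs: t=1 CPU, s=1 shared, 2 intervention entries
J = modified_cost(np.array([1.0, 2.0]), np.array([1.0, 0.0]), W1,
                  lam=1.0, t=1, s=1)
assert J == 5.0          # 0.5*4 reconstruction error + 0.5*6 weight penalty
```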
Step 8: prediction. Using the parameters obtained by training the networks above, monitor the running state of the CPU for a period of time during actual operation, perform the same feature extraction and fusion, and finally input the result into the final model to obtain the prediction.
Finally, it should be noted that the above embodiment merely illustrates, and does not restrict, the technical solution of the invention. Although the invention has been described in detail with reference to a preferred embodiment, those of ordinary skill in the art should understand that modifications or equivalent replacements of the technical solution, without departing from its spirit and scope, are all encompassed by the claims of the present invention.

Claims (2)

1. A load prediction method for a cloud data center, characterized in that the method comprises the following steps:
Step 1: collect the historical data of the cloud data center to be predicted, and normalize them;
Step 2: compute the correlation between the CPU historical data and each of the other performance indexes; performance indexes whose correlation exceeds a threshold φ are added to a set A;
Step 3: time-window extraction: randomly draw time windows, with the window length set according to actual conditions; the front part of each window serves as the prediction input and, during training, the rear part serves as the prediction output;
Step 4: feature extraction: compress the CPU historical data and each performance index in set A with a three-layer autoencoder neural network to obtain a feature for each performance index;
Step 5: feature fusion: splice the per-index features obtained in step 4, input them into an autoencoder neural network, and compress further to obtain a common compressed feature;
Step 6: manual intervention: on the same time window, a person judges where a hotspot will occur in the prediction period and how strong it will be, i.e. artificial weights are added to the time series; the added weight is obtained from the following formula:
I_tv = I_tv + r · exp(−(x − time)² · σ1²)   for x ≤ time (left of the peak)
I_tv = I_tv + r · exp(−(x − time)² · σ2²)   for x > time (right of the peak)
where x, σ1 and σ2 are manually set parameters, representing the peak and the speed of convergence on the left and right sides respectively;
Step 7: supervised learning: the CPU's own feature vector after feature extraction, the shared feature vector, and the manual-intervention value vector of the output time period are spliced together as the input; the rear part of the moving window serves as the output, and a neural network is trained; during training, the influence of the manual intervention is controlled by adding a sparsity factor, and the neural-network cost function is modified as follows:
J(W, b) = (1/2) · ‖h_{W,b}(x) − x‖² + (λ/2) · Σ_{i = t+s+1}^{s1} Σ_{j = 1}^{s2} (W_{ji}^{(1)})²
where t is the length of the CPU feature vector, s is the length of the shared feature vector, and s_i is the number of units in layer i;
Step 8: prediction: using the parameters obtained by training the networks above, monitor the running state of the CPU for a period of time, perform feature extraction and fusion, and finally input the result into the final model to obtain the prediction.
2. The load prediction method of a cloud data center according to claim 1, characterized in that the features of the data, rather than the raw values, are used for prediction; the historical data comprise CPU historical data, Memory historical data, Disk historical data and network-I/O historical data; and the prediction incorporates the manual-intervention model.
CN201510658479.2A 2015-10-12 2015-10-12 Load predicting method of cloud data center Pending CN105260794A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510658479.2A CN105260794A (en) 2015-10-12 2015-10-12 Load predicting method of cloud data center


Publications (1)

Publication Number Publication Date
CN105260794A true CN105260794A (en) 2016-01-20

Family

ID=55100473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510658479.2A Pending CN105260794A (en) 2015-10-12 2015-10-12 Load predicting method of cloud data center

Country Status (1)

Country Link
CN (1) CN105260794A (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678004A (en) * 2013-12-19 2014-03-26 南京大学 Host load prediction method based on unsupervised feature learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李傲雷 (Li Aolei): "Simulation and application of a load-balancing strategy for distributed Web servers", 《上海交通大学学报》 (Journal of Shanghai Jiao Tong University) *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017140248A1 (en) * 2016-02-17 2017-08-24 第四范式(北京)技术有限公司 Data exchange method, data exchange device and computing device
US11663460B2 (en) 2016-02-17 2023-05-30 The Fourth Paradigm (Beijing) Tech Co Ltd Data exchange method, data exchange device and computing device
CN105760932B (en) * 2016-02-17 2018-04-06 第四范式(北京)技术有限公司 Method for interchanging data, DEU data exchange unit and computing device
CN105760932A (en) * 2016-02-17 2016-07-13 北京物思创想科技有限公司 Data exchange method, data exchange device and calculating device
CN107239825A (en) * 2016-08-22 2017-10-10 北京深鉴智能科技有限公司 Consider the deep neural network compression method of load balancing
CN107239825B (en) * 2016-08-22 2021-04-09 赛灵思电子科技(北京)有限公司 Deep neural network compression method considering load balance
CN107784372A (en) * 2016-08-24 2018-03-09 阿里巴巴集团控股有限公司 Forecasting Methodology, the device and system of destination object attribute
CN106447039A (en) * 2016-09-28 2017-02-22 西安交通大学 Non-supervision feature extraction method based on self-coding neural network
CN106713021B (en) * 2016-12-09 2020-02-11 北京奇虎科技有限公司 Method and device for judging whether server in cluster needs to be recycled
CN106713021A (en) * 2016-12-09 2017-05-24 北京奇虎科技有限公司 Method and apparatus of determining whether server in cluster needs recycling
CN109117352A (en) * 2017-06-23 2019-01-01 华为技术有限公司 Server performance prediction technique and device
CN109117352B (en) * 2017-06-23 2020-08-07 华为技术有限公司 Server performance prediction method and device
CN107832913A (en) * 2017-10-11 2018-03-23 微梦创科网络科技(中国)有限公司 The Forecasting Methodology and system to monitoring data trend based on deep learning
CN110445629A (en) * 2018-05-03 2019-11-12 佛山市顺德区美的电热电器制造有限公司 A kind of server concurrency prediction technique and device
CN109039831A (en) * 2018-09-21 2018-12-18 浪潮电子信息产业股份有限公司 A kind of load detection method and device
CN111325310A (en) * 2018-12-13 2020-06-23 中国移动通信集团有限公司 Data prediction method, device and storage medium
CN111401972A (en) * 2020-04-13 2020-07-10 支付宝(杭州)信息技术有限公司 Data processing and advertisement scoring method, device and equipment
CN111614520A (en) * 2020-05-25 2020-09-01 杭州东方通信软件技术有限公司 IDC flow data prediction method and device based on machine learning algorithm
CN111614520B (en) * 2020-05-25 2021-12-14 杭州东方通信软件技术有限公司 IDC flow data prediction method and device based on machine learning algorithm
CN111638958A (en) * 2020-06-02 2020-09-08 中国联合网络通信集团有限公司 Cloud host load processing method and device, control equipment and storage medium
CN111638958B (en) * 2020-06-02 2024-04-05 中国联合网络通信集团有限公司 Cloud host load processing method and device, control equipment and storage medium
CN112073239A (en) * 2020-09-04 2020-12-11 天津大学 Distributed application performance prediction method for cloud computing environment
CN112565378A (en) * 2020-11-30 2021-03-26 中国科学院深圳先进技术研究院 Cloud native resource dynamic prediction method and device, computer equipment and storage medium
CN113052271A (en) * 2021-05-14 2021-06-29 江南大学 Biological fermentation data prediction method based on deep neural network

Similar Documents

Publication Publication Date Title
CN105260794A (en) Load predicting method of cloud data center
CN112380426B (en) Interest point recommendation method and system based on fusion of graph embedding and long-term interest of user
CN108009674A (en) Air PM2.5 concentration prediction methods based on CNN and LSTM fused neural networks
CN107704970A (en) A kind of Demand-side load forecasting method based on Spark
CN109471698B (en) System and method for detecting abnormal behavior of virtual machine in cloud environment
CN113204921B (en) Method and system for predicting remaining service life of airplane turbofan engine
CN112907970B (en) Variable lane steering control method based on vehicle queuing length change rate
CN113762338B (en) Traffic flow prediction method, equipment and medium based on multiple graph attention mechanism
CN105760649A (en) Big-data-oriented creditability measuring method
CN109614896A (en) A method of the video content semantic understanding based on recursive convolution neural network
Wu et al. Complexity to forecast flood: Problem definition and spatiotemporal attention LSTM solution
Kumar et al. Wind speed prediction using deep learning-LSTM and GRU
Liu Language database construction method based on big data and deep learning
Sun et al. Ada-STNet: A Dynamic AdaBoost Spatio-Temporal Network for Traffic Flow Prediction
Ma et al. Short-term subway passenger flow prediction based on gcn-bilstm
CN117131979A (en) Traffic flow speed prediction method and system based on directed hypergraph and attention mechanism
CN116824851A (en) Path-based urban expressway corridor traffic jam tracing method
Qu et al. Improving parking occupancy prediction in poor data conditions through customization and learning to learn
Miao et al. A queue hybrid neural network with weather weighted factor for traffic flow prediction
CN115330085A (en) Wind speed prediction method based on deep neural network and without future information leakage
CN116259172A (en) Urban road speed prediction method considering space-time characteristics of traffic network
CN114997464A (en) Popularity prediction method based on graph time sequence information learning
Lu et al. Physics guided neural network: Remaining useful life prediction of rolling bearings using long short-term memory network through dynamic weighting of degradation process
Xu et al. Special issue on emergence in human-like intelligence toward cyber-physical systems
CN112270123A (en) Basin reservoir group runoff random generation method based on convolution generation countermeasure network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160120