CN104239194A - Task completion time prediction method based on BP (Back Propagation) neural network - Google Patents
- Publication number
- CN104239194A CN104239194A CN201410464900.1A CN201410464900A CN104239194A CN 104239194 A CN104239194 A CN 104239194A CN 201410464900 A CN201410464900 A CN 201410464900A CN 104239194 A CN104239194 A CN 104239194A
- Authority
- CN
- China
- Prior art keywords
- neural network
- task
- completion time
- node
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a task completion time prediction method based on a BP (Back Propagation) neural network. The method comprises the following steps: establishing a BP neural network model, and, after each successful task completion, saving the corresponding completion time, data volume, and system information of the executing node to train the model; at fixed intervals, incrementally training the established model with the data collected during that interval; when a task needs backup execution, collecting the system information of all currently available nodes, predicting each node's completion time with the established model in combination with the task's data volume, comparing the predicted values of all available nodes, and selecting the node with the smallest predicted value for backup execution. The method makes the prediction of task completion time more accurate, thereby improving the efficiency of speculative execution, shortening the response time of the whole system, and increasing throughput.
Description
Technical field
The present invention relates to the field of distributed-system job optimization and data analysis, and in particular to a method for predicting task completion time in a distributed system with a BP neural network, thereby realizing speculative execution.
Background technology
In a distributed system, many computer nodes usually work or compute simultaneously so that a task can be completed within an acceptable time. Specifically, the system typically splits a task into multiple subtasks, assigns them to different nodes for execution, and finally aggregates the results to obtain the final result. During subtask execution, a node may perform poorly or fail, so that the subtask assigned to it progresses slowly, and the whole task must wait for the slowest subtask to finish. Therefore, judging the execution speed of a subtask on a given node, i.e., predicting the subtask's completion time, is the key to identifying slow nodes and plays a crucial role in improving the efficiency of a distributed system.
Take the currently popular MapReduce parallel computing framework as an example: for a traditional MapReduce job, the master node divides the data of the input file into many tasks, then assigns Map tasks and Reduce tasks to the different nodes in the cluster. When a node takes abnormally long to finish a task assigned to it, that node is called a straggler; it prolongs the execution time of the whole job and reduces the throughput of the cluster. To address this problem, the master node can reassign a slowly executing task to another node, a process called backup execution. Identifying stragglers through some mechanism and executing backups of their slow tasks is referred to as speculative execution.
Hadoop is a distributed system architecture composed of HDFS, MapReduce, and HBase, and is the most widely used open-source implementation of MapReduce. Speculative execution in the current version of Hadoop is mainly based on the LATE (Longest Approximate Time to End) algorithm proposed by Zaharia et al. (Zaharia, Matei, et al. "Improving MapReduce Performance in Heterogeneous Environments." OSDI, Vol. 8, No. 4, 2008). This algorithm computes a task's remaining time as remaining progress divided by progress rate (progress divided by elapsed time), and from that infers the task's completion moment. It then computes, for each task, the difference between the completion moment after backup execution and the current completion moment, and backs up the several tasks with the largest differences, until the cap on concurrent backup tasks is reached.
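LATE's estimate can be illustrated in a few lines (function and variable names are ours, not Hadoop's):

```python
def late_time_to_end(progress: float, elapsed: float) -> float:
    """Estimate a task's remaining time the way LATE does:
    progress rate = progress / elapsed time,
    remaining time = remaining progress / progress rate."""
    if progress <= 0.0:
        return float("inf")  # no progress yet: cannot estimate
    rate = progress / elapsed
    return (1.0 - progress) / rate

# A task 25% done after 30 s is estimated to need 90 more seconds.
remaining = late_time_to_end(0.25, 30.0)
```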
However, this algorithm uses the average time of all successfully completed tasks as the completion time of a backup task. Not only does this fail to reflect the system conditions of the node executing the task, so that the prediction becomes inaccurate when system performance changes, but it also ignores the completion-time differences caused by the differing data volumes that tasks process.
A BP (Back Propagation) neural network is a multi-layer feed-forward network trained by the back-propagation algorithm, and is one of the most widely used neural network models today. A BP network can learn and store a large number of input-output mappings without requiring the mathematical equations describing those mappings to be specified in advance. Using this property, predictions can be made once the factors affecting task completion time are determined, which simplifies the process of establishing the relevant mapping.
Summary of the invention
To address the above deficiencies of the prior art, the present invention provides a task completion time prediction method based on a BP neural network. By establishing a BP neural network model related to system information and task data volume, the method makes the prediction of task completion time more accurate, thereby improving the efficiency of speculative execution, shortening the response time of the whole system, and increasing throughput.
To achieve the above object, the present invention is realized through the following technical solution:
A task completion time prediction method based on a BP neural network comprises the following steps:
Step 1: establish a BP neural network model; after a task completes successfully, save the corresponding completion time, data volume, and system information of the executing node for training the model;
Step 2: at fixed intervals, incrementally train the established BP neural network model with the relevant data from that interval;
Step 3: when a task needs backup execution, collect the system information of all currently available nodes, predict each node's completion time with the established BP neural network model in combination with the task's data volume, compare the predicted values of all available nodes, and choose the node with the smallest predicted value for backup execution.
The BP neural network model of step 1 comprises an input layer, a hidden layer, and an output layer, wherein the input layer has 4 units and the output layer has 1 unit. The input-layer units represent the task's data volume and the system information of the executing node; the output-layer unit represents the task completion time after normalization. The number of hidden-layer units is determined by an empirical formula plus trial-and-error adjustment, which completes the establishment of the BP neural network model.
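A minimal sketch of the 4-input, 1-output network just described; the hidden-layer size of 8, the sigmoid activation, and the random initialization are our assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """Sigmoid activation; the patent only says 'excitation function'."""
    return 1.0 / (1.0 + np.exp(-x))

class BPNet:
    """4 inputs (task data volume, CPU, I/O, memory usage) -> 1 output
    (normalized completion time). The hidden size 8 is an assumption."""
    def __init__(self, n_in=4, n_hidden=8, n_out=1):
        self.w1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.w2 = rng.normal(0.0, 0.5, (n_out, n_hidden))

    def forward(self, x):
        a1 = sigmoid(self.w1 @ x)     # hidden-layer activations
        return sigmoid(self.w2 @ a1)  # predicted normalized time

net = BPNet()
pred = net.forward(np.array([0.5, 0.3, 0.1, 0.6]))  # one input vector
```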
The system information of the executing node consists of parameters that reflect the node's current condition and determine its processing capacity, comprising CPU usage, I/O occupancy, and memory usage.
The described normalization transforms raw values into the interval [0, 1] using a fixed mapping function.
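The source does not reproduce the normalization formula itself; min-max scaling is a standard mapping into [0, 1] and is sketched here purely as an assumption:

```python
def normalize(t: float, t_min: float, t_max: float) -> float:
    """Map a raw completion time into [0, 1] by min-max scaling.
    (Assumed form; the patent's exact formula is not shown here.)"""
    if t_max == t_min:
        return 0.0  # degenerate range: all observed times equal
    return (t - t_min) / (t_max - t_min)
```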
The incremental training of step 2 works as follows: given input data are passed from the input layer to the hidden layer; the hidden layer processes them via its weights and activation function and passes the result to the output layer; the output-layer result is compared with the correct result to obtain an error, which is then propagated backwards to correct the connection weights in the network. Repeatedly training the BP neural network on the data produced during each interval refines the model's accuracy, realizing the learning process.
The feedback correction works as follows: suppose the input data consist of m groups; the task data volumes and the CPU usages, I/O occupancies, and memory usages of the executing nodes in these m groups form a 4*m matrix, which is used to train the currently established BP neural network. Specifically, the formula a_(j+1) = g(θ_j · a_j) computes the value of each unit in each layer, where a_j denotes the values of the units in layer j, θ_j denotes the weights of layer j, and g(x) is the activation function. After the established BP neural network computes a predicted value, it is compared with the true completion time, and the weights in the model are corrected repeatedly until convergence, which completes this round of training.
The weights in the model are corrected until convergence as follows: the error function's partial derivative δ_j with respect to each output-layer neuron is computed, where a_3 denotes the output-layer values and y the true result; the computed δ_j is used to correct the weights w, until either the global error falls below a threshold or the number of learning iterations reaches a set maximum, where m is the number of training samples.
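The loop just described (forward pass, output error, δ back-propagation, weight update, stop on small global error or a maximum iteration count) can be sketched as follows; the sigmoid activation, squared-error measure, learning rate, and hidden size are our assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, y, n_hidden=6, lr=0.5, threshold=1e-3, max_iters=5000):
    """X: (4, m) input matrix, y: (1, m) normalized completion times.
    Repeats forward pass + delta-based weight updates until the mean
    squared error drops below `threshold` or `max_iters` is reached."""
    m = X.shape[1]
    w1 = rng.normal(0.0, 0.5, (n_hidden, 4))
    w2 = rng.normal(0.0, 0.5, (1, n_hidden))
    E = float("inf")
    for _ in range(max_iters):
        a1 = sigmoid(w1 @ X)            # hidden layer, (n_hidden, m)
        a2 = sigmoid(w2 @ a1)           # output layer, (1, m)
        err = a2 - y                    # prediction minus truth
        E = float(np.mean(err ** 2))    # global error over m samples
        if E < threshold:
            break
        d2 = err * a2 * (1 - a2)            # output-layer deltas
        d1 = (w2.T @ d2) * a1 * (1 - a1)    # hidden-layer deltas
        w2 -= lr * (d2 @ a1.T) / m          # gradient steps
        w1 -= lr * (d1 @ X.T) / m
    return w1, w2, E

# Toy data: target is the mean of the four inputs, already in [0, 1].
X = rng.random((4, 20))
y = X.mean(axis=0, keepdims=True)
w1, w2, E = train(X, y)
```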
The prediction by the BP neural network model in step 3 works as follows: given the corresponding input data, i.e., the task's data volume and the CPU usage, I/O occupancy, and memory usage of the executing node, the model performs its computation, and the value of the output layer is the prediction.
Specifically: when a task needs backup execution, first determine the nodes currently available for backup; let their number be n. Collect the CPU usage, I/O occupancy, and memory usage of these n nodes and combine them with the task's data volume into a 4*n matrix. Feed this matrix into the established BP neural network to obtain an n*1 vector as the predicted completion time of this task on each node. The node with the smallest predicted value is chosen to execute the backup of this task, completing the prediction and execution process.
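The node selection in step 3 reduces to an argmin over per-node predictions; a sketch in which `predict` stands in for the trained network:

```python
import numpy as np

def pick_backup_node(predict, data_volume, node_stats):
    """node_stats: list of (cpu, io, mem) usage triples, one per
    available node. Builds one 4-element input column per node and
    returns the index of the node with the smallest predicted time."""
    cols = [np.array([data_volume, cpu, io, mem])
            for cpu, io, mem in node_stats]
    preds = [float(predict(c)) for c in cols]  # n predicted times
    return int(np.argmin(preds))

# Toy predictor for illustration: busier nodes finish later.
toy = lambda x: x[0] + x[1:].sum()
best = pick_backup_node(toy, 0.4, [(0.9, 0.8, 0.7), (0.2, 0.1, 0.3)])
# best picks the lightly loaded second node (index 1)
```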
The method of the present invention predicts task completion time more accurately and makes mechanisms such as speculative execution in distributed systems more effective, thereby shortening the response time of the whole system and increasing throughput. Because parameters such as the node's system information and the task's data volume are used, the prediction fully accounts for their influence on completion time, so the results are more accurate. Moreover, the self-learning and adaptive abilities of the BP neural network further improve the accuracy of the prediction.
Accompanying drawing explanation
Other features, objects, and advantages of the present invention will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawing.
Fig. 1 is the BP neural network model of the present invention.
Embodiment
Embodiments of the invention are elaborated below. The embodiments are implemented on the premise of the technical solution of the present invention, with detailed implementations and concrete operating procedures given, but the protection scope of the present invention is not limited to the following embodiments.
The implementation of the task completion time prediction method based on a BP neural network provided by the present invention proceeds as follows:
1. Establish the BP neural network model. Build a BP neural network model whose input layer has 4 units and whose output layer has 1 unit. The input-layer units represent the task's data volume and the executing node's current CPU usage, I/O occupancy, and memory usage. The output-layer unit represents the task completion time after normalization by the mapping function. The number of hidden-layer units is determined by an empirical formula plus trial-and-error adjustment, completing the establishment of the BP neural network model, as shown in Fig. 1.
2. When a task completes successfully, record the task's data volume and completion time, and save the executing node's CPU usage, I/O occupancy, and memory usage.
3. At fixed intervals, collect all data from the interval and normalize the task completion times with the mapping function. Suppose the data consist of m groups; the task data volumes and the CPU usages, I/O occupancies, and memory usages of the executing nodes in these m groups form a 4*m matrix, which is used to train the currently established BP neural network. Specifically, the formula a_(j+1) = g(θ_j · a_j) computes the value of each unit in each layer, where a_j denotes the values of the units in layer j, θ_j denotes the weights of layer j, and g(x) is the activation function. After the established BP neural network computes a predicted value, it is compared with the true completion time, and the weights in the model are corrected repeatedly until convergence, which completes this round of training. Specifically, the error function's partial derivative δ_j with respect to each output-layer neuron is computed, where a_3 denotes the output-layer values and y the true result; the computed δ_j is used to correct the weights w, until either the global error falls below a threshold or the number of learning iterations reaches a set maximum, where m is the number of training samples.
4. When a task needs backup execution, first determine the nodes currently available for backup; let their number be n. Collect the CPU usage, I/O occupancy, and memory usage of these n nodes and combine them with the task's data volume into a 4*n matrix. Feed this matrix into the established neural network to obtain an n*1 vector as the predicted completion time of this task on each node. The node with the smallest predicted value is chosen to execute the backup of this task, completing the prediction and execution process.
Although the content of the present invention has been described in detail through the preferred embodiments above, the above description should not be considered a limitation of the present invention. Various modifications and substitutions of the present invention will be apparent to those skilled in the art after reading the foregoing. Therefore, the protection scope of the present invention should be defined by the appended claims.
Claims (9)
1. A task completion time prediction method based on a BP neural network, characterized by comprising the following steps:
Step 1: establish a BP neural network model; after a task completes successfully, save the corresponding completion time, data volume, and system information of the executing node for training the model;
Step 2: at fixed intervals, incrementally train the established BP neural network model with the relevant data from that interval;
Step 3: when a task needs backup execution, collect the system information of all currently available nodes, predict each node's completion time with the established BP neural network model in combination with the task's data volume, compare the predicted values of all available nodes, and choose the node with the smallest predicted value for backup execution.
2. The task completion time prediction method based on a BP neural network according to claim 1, characterized in that the BP neural network model of step 1 comprises an input layer, a hidden layer, and an output layer, wherein the input layer has 4 units and the output layer has 1 unit; the input-layer units represent the task's data volume and the system information of the executing node; the output-layer unit represents the task completion time after normalization; and the number of hidden-layer units is determined by an empirical formula plus trial-and-error adjustment, which completes the establishment of the BP neural network model.
3. The task completion time prediction method based on a BP neural network according to claim 2, characterized in that the system information of the executing node comprises CPU usage, I/O occupancy, and memory usage.
4. The task completion time prediction method based on a BP neural network according to claim 3, characterized in that the normalization transforms raw values into the interval [0, 1] using a fixed mapping function.
5. The task completion time prediction method based on a BP neural network according to claim 4, characterized in that the incremental training of step 2 works as follows: given input data are passed from the input layer to the hidden layer; the hidden layer processes them via its weights and activation function and passes the result to the output layer; the output-layer result is compared with the correct result to obtain an error, which is then propagated backwards to correct the connection weights in the network; and repeatedly training the BP neural network on the data produced during each interval refines the model's accuracy, realizing the learning process.
6. The task completion time prediction method based on a BP neural network according to claim 5, characterized in that the feedback correction works as follows: suppose the input data consist of m groups; the task data volumes and the CPU usages, I/O occupancies, and memory usages of the executing nodes in these m groups form a 4*m matrix, which is used to train the currently established BP neural network; specifically, the formula a_(j+1) = g(θ_j · a_j) computes the value of each unit in each layer, where a_j denotes the values of the units in layer j, θ_j denotes the weights of layer j, and g(x) is the activation function; and after the established BP neural network computes a predicted value, it is compared with the true completion time, and the weights in the model are corrected repeatedly until convergence, which completes this round of training.
7. The task completion time prediction method based on a BP neural network according to claim 6, characterized in that the weights in the model are corrected until convergence as follows: the error function's partial derivative δ_j with respect to each output-layer neuron is computed, where a_3 denotes the output-layer values and y the true result; the computed δ_j is used to correct the weights w, until either the global error falls below a threshold or the number of learning iterations reaches a set maximum, where m is the number of training samples.
8. The task completion time prediction method based on a BP neural network according to claim 3, characterized in that the prediction by the BP neural network model in step 3 works as follows: given the corresponding input data, i.e., the task's data volume and the CPU usage, I/O occupancy, and memory usage of the executing node, the model performs its computation, and the value of the output layer is the prediction.
9. The task completion time prediction method based on a BP neural network according to claim 8, characterized in that the specific prediction method is: when a task needs backup execution, first determine the nodes currently available for backup; let their number be n; collect the CPU usage, I/O occupancy, and memory usage of these n nodes and combine them with the task's data volume into a 4*n matrix; feed this matrix into the established BP neural network to obtain an n*1 vector as the predicted completion time of this task on each node; and choose the node with the smallest predicted value to execute the backup of this task, completing the prediction and execution process.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410464900.1A CN104239194A (en) | 2014-09-12 | 2014-09-12 | Task completion time prediction method based on BP (Back Propagation) neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104239194A true CN104239194A (en) | 2014-12-24 |
Family
ID=52227310
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410464900.1A Pending CN104239194A (en) | 2014-09-12 | 2014-09-12 | Task completion time prediction method based on BP (Back Propagation) neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104239194A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070288410A1 (en) * | 2006-06-12 | 2007-12-13 | Benjamin Tomkins | System and method of using genetic programming and neural network technologies to enhance spectral data |
CN103345656A (en) * | 2013-07-17 | 2013-10-09 | 中国科学院自动化研究所 | Method and device for data identification based on multitask deep neural network |
CN103544528A (en) * | 2013-11-15 | 2014-01-29 | 南京大学 | BP neural-network classification method based on Hadoop |
CN103645795A (en) * | 2013-12-13 | 2014-03-19 | 浪潮电子信息产业股份有限公司 | Cloud computing data center energy saving method based on ANN (artificial neural network) |
CN103699440A (en) * | 2012-09-27 | 2014-04-02 | 北京搜狐新媒体信息技术有限公司 | Method and device for cloud computing platform system to distribute resources to task |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105550323B (en) * | 2015-12-15 | 2020-04-28 | 北京中电普华信息技术有限公司 | Load balance prediction method and prediction analyzer for distributed database |
CN105550323A (en) * | 2015-12-15 | 2016-05-04 | 北京国电通网络技术有限公司 | Load balancing prediction method of distributed database, and predictive analyzer |
CN107959692A (en) * | 2016-10-14 | 2018-04-24 | 中国电信股份有限公司 | Method and system for the equivalent load for obtaining secure resources |
CN107273273A (en) * | 2017-06-27 | 2017-10-20 | 郑州云海信息技术有限公司 | A kind of distributed type assemblies hardware fault method for early warning and system |
WO2019062409A1 (en) * | 2017-09-30 | 2019-04-04 | Oppo广东移动通信有限公司 | Method for managing and controlling background application program, storage medium, and electronic device |
CN108009023A (en) * | 2017-11-29 | 2018-05-08 | 武汉理工大学 | Method for scheduling task based on BP neural network time prediction in mixed cloud |
CN108009023B (en) * | 2017-11-29 | 2022-06-03 | 武汉理工大学 | Task scheduling method based on BP neural network time prediction in hybrid cloud |
US11341407B2 (en) | 2019-02-07 | 2022-05-24 | International Business Machines Corporation | Selecting a disconnect from different types of channel disconnects by training a machine learning module |
US10929057B2 (en) | 2019-02-07 | 2021-02-23 | International Business Machines Corporation | Selecting a disconnect from different types of channel disconnects using a machine learning module |
CN111930476B (en) * | 2019-05-13 | 2024-02-27 | 百度(中国)有限公司 | Task scheduling method and device and electronic equipment |
CN111930476A (en) * | 2019-05-13 | 2020-11-13 | 百度(中国)有限公司 | Task scheduling method and device and electronic equipment |
CN110221909B (en) * | 2019-06-13 | 2023-01-17 | 东北大学 | Hadoop calculation task speculative execution method based on load prediction |
CN110221909A (en) * | 2019-06-13 | 2019-09-10 | 东北大学 | A kind of Hadoop calculating task supposition execution method based on load estimation |
CN110580385A (en) * | 2019-08-21 | 2019-12-17 | 南京博阳科技有限公司 | data processing method, device and equipment for underground pipeline and computer storage medium |
WO2021068617A1 (en) * | 2019-10-12 | 2021-04-15 | 平安国际智慧城市科技股份有限公司 | Method and apparatus for automatically predicting task processing time, electronic device and medium |
CN112749041A (en) * | 2019-10-29 | 2021-05-04 | 中国移动通信集团浙江有限公司 | Virtualized network function backup strategy self-decision method and device and computing equipment |
CN112749041B (en) * | 2019-10-29 | 2023-12-26 | 中国移动通信集团浙江有限公司 | Virtualized network function backup strategy self-decision method, device and computing equipment |
CN111445125B (en) * | 2020-03-25 | 2023-04-07 | 仲恺农业工程学院 | Agricultural robot computing task cooperation method, system, medium and equipment |
CN111445125A (en) * | 2020-03-25 | 2020-07-24 | 仲恺农业工程学院 | Agricultural robot computing task cooperation method, system, medium and equipment |
CN111475298B (en) * | 2020-04-03 | 2023-05-02 | 北京字节跳动网络技术有限公司 | Task processing method, device, equipment and storage medium |
CN111475298A (en) * | 2020-04-03 | 2020-07-31 | 北京字节跳动网络技术有限公司 | Task processing method, device, equipment and storage medium |
CN112286658A (en) * | 2020-10-28 | 2021-01-29 | 北京字节跳动网络技术有限公司 | Cluster task scheduling method and device, computer equipment and storage medium |
CN113687938B (en) * | 2021-10-27 | 2022-02-22 | 之江实验室 | Intelligent scheduling method and system for medical data calculation tasks |
CN113687938A (en) * | 2021-10-27 | 2021-11-23 | 之江实验室 | Intelligent scheduling method and system for medical data calculation tasks |
CN114066089A (en) * | 2021-11-25 | 2022-02-18 | 中国工商银行股份有限公司 | Batch job operation time-consuming interval determining method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104239194A (en) | Task completion time prediction method based on BP (Back Propagation) neural network | |
US11281969B1 (en) | Artificial intelligence system combining state space models and neural networks for time series forecasting | |
Chen et al. | Failure prediction of jobs in compute clouds: A google cluster case study | |
CN110858973B (en) | Cell network flow prediction method and device | |
CN111325416A (en) | Method and device for predicting user loss of taxi calling platform | |
Dias et al. | Parallel computing applied to the stochastic dynamic programming for long term operation planning of hydrothermal power systems | |
CN106796533B (en) | System and method for adaptively selecting execution mode | |
US20210019189A1 (en) | Methods and systems to determine and optimize reservoir simulator performance in a cloud computing environment | |
US7058550B2 (en) | Selectively resampling particle filter | |
CN109039428B (en) | Relay satellite single-address antenna scheduling random search method based on conflict resolution | |
CN107247651A (en) | Cloud computing platform monitoring and pre-warning method and system | |
CN106933649A (en) | Virtual machine load predicting method and system based on rolling average and neutral net | |
WO2024087512A1 (en) | Graph neural network compression method and apparatus, and electronic device and storage medium | |
Seneviratne et al. | Task profiling model for load profile prediction | |
Kamthe et al. | A stochastic approach to estimating earliest start times of nodes for scheduling DAGs on heterogeneous distributed computing systems | |
CN105760213A (en) | Early warning system and method of resource utilization rate of virtual machine in cloud environment | |
CN110990135A (en) | Spark operation time prediction method and device based on deep migration learning | |
CN100552574C (en) | Machine group loading forecast control method based on flow model | |
CN109840308B (en) | Regional wind power probability forecasting method and system | |
JP2020181318A (en) | Optimization device, optimization method, and program | |
CN114581220A (en) | Data processing method and device and distributed computing system | |
Guegan et al. | Chapter 8 Alternative Methods for Forecasting GDP | |
CN113469341A (en) | Assembly line parallel training node weight distribution method based on version difference | |
Sandholm et al. | Utility-based termination of anytime algorithms | |
RU48085U1 (en) | SYSTEM OF ACCOUNTING, PLANNING AND MONITORING AT PERFORMANCE OF ACTIONS WITH RESOURCES |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20141224 |