CN110231976A - Load prediction-based edge computing platform container deployment method and system - Google Patents

Load prediction-based edge computing platform container deployment method and system

Info

Publication number
CN110231976A
CN110231976A (application CN201910420328.1A; granted publication CN110231976B)
Authority
CN
China
Prior art keywords
node
container
load
original
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910420328.1A
Other languages
Chinese (zh)
Other versions
CN110231976B (en)
Inventor
伍卫国
康益菲
徐一轩
杨傲
崔舜�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201910420328.1A priority Critical patent/CN110231976B/en
Publication of CN110231976A publication Critical patent/CN110231976A/en
Application granted granted Critical
Publication of CN110231976B publication Critical patent/CN110231976B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing

Abstract

The invention discloses a load prediction-based edge computing platform container deployment method and system, comprising multiple compute nodes and one central node. Each compute node carries an original load monitoring system; each original load monitoring system is connected to the node load prediction system and uploads its data through that system to the central server. The central node carries the node load prediction system and the computing task management system. The node load prediction system maintains one LSTM model per compute node; it receives each node's original load information and sends the prediction results to the computing task management system. The computing task management system is responsible for container deployment: according to the received information it feeds the node number and task time back to the node load prediction system, and issues containers to the available compute nodes. The invention deploys containers reasonably onto the compute nodes and reduces the cost of computing tasks.

Description

Load prediction-based edge computing platform container deployment method and system
Technical field
The invention belongs to the technical field of edge computing platforms, and in particular relates to a load prediction-based edge computing platform container deployment method and system.
Background art
With the recent development of the mobile Internet, the volume of Internet data has grown explosively, and more and more Internet services are built on big-data analysis. This has rapidly raised the demand for computing resources, and the computing power of a single machine can no longer satisfy it; cloud computing therefore emerged. Cloud computing is the product of the convergence of distributed computing, parallel computing, virtualization, load balancing, and other traditional computer and network technologies. It uses virtual machine technology to virtualize large numbers of servers into individual computing resource nodes; users need not care about hardware implementation and maintenance, and can quickly obtain the resources they need simply by purchasing computing resources in the cloud. Traditional cloud computing platforms are built from high-performance servers. Such servers are powerful but expensive, power-hungry, and thermally demanding, requiring specially designed machine rooms, so operation and maintenance expenses have become a major part of cloud cluster cost. Meanwhile, cloud computing is centralized: wherever users are, the computing resources they access are concentrated in cloud data centers, which makes users' network cost high. Edge computing solves these problems well. Edge computing is geographically distributed cloud computing. Unlike traditional cloud computing, which concentrates all computing resources in one machine room, it moves cloud computing nodes to the user's side, and its nodes are often not dedicated high-performance servers but equipment the user already has nearby, such as mobile network base stations, smart routers, and smart phones. The geographically distributed design of edge computing not only reduces users' network cost but also reduces the operation and maintenance cost of the computing resource provider.
A container is a new kind of standard software unit that packages the software itself together with all the dependencies it needs to run, so it can be deployed quickly and reliably in various computing environments. Because a container does not virtualize hardware, it incurs no virtualization loss, making it better suited than a virtual machine to nodes whose performance is not high, which is the case for most edge computing nodes. Since edge computing is distributed, and in order to make full use of computing resources, a computing task can be divided into suitable subtasks, each subtask packaged into a container and deployed to run on a node. How to deploy the group of containers that make up one computing task onto the compute nodes, while guaranteeing both normal operation over the same period and a reasonable occupancy of node resources, is therefore very important. Industry has proposed various algorithms for this. Most current container deployment algorithms balance the load across nodes under the premise of meeting the containers' resource requirements; some also take further metrics into account, such as node energy consumption and network latency, in order to save energy and improve quality of service. However, existing container deployment algorithms mostly target dedicated nodes, and container deployment for non-dedicated nodes remains largely a blank. Moreover, existing algorithms usually consider only the current load of a node, while a container runs for a period of time; node resources may therefore run short during execution, forcing container migration through a feedback mechanism, which in turn causes unnecessary network and storage overhead and increases cost.
Summary of the invention
In view of the above deficiencies in the prior art, the technical problem to be solved by the present invention is to provide a load prediction-based edge computing platform container deployment method and system that addresses the feedback lag caused by current container deployment algorithms using only a node's current load information. By predicting each node's resource occupancy over the near future, containers are deployed reasonably onto the compute nodes while guaranteeing that the nodes' original tasks continue to run well.
The invention adopts the following technical scheme:
A load prediction-based edge computing platform container deployment system, comprising:
multiple compute nodes and one central node; each compute node runs a long-lived task besides the tasks distributed by the edge computing platform, called the original task, whose load is called the original load; the central node is dedicated;
an original load monitoring system carried on each compute node; each original load monitoring system is connected to the node load prediction system, collects the resource occupancy of the node's original task, and uploads it through the node load prediction system to the central server for prediction model training and node load prediction;
a node load prediction system and a computing task management system carried on the central node; the node load prediction system contains one LSTM model per compute node, receives each node's original load information, and sends the prediction results to the computing task management system;
the computing task management system is responsible for container deployment; according to the received information it feeds the node number and task time back to the node load prediction system, and issues containers to the available compute nodes.
Specifically, the original load monitoring system runs an original load information collection daemon on each node on which no computing task is running; every 5 minutes this daemon collects the node's original load information, forming a 4-dimensional original load vector L(t) = (φ1, φ2, φ3, φ4) that is stored in a database. An original load information upload daemon also runs on each such node; every 60 minutes it reads the original load vectors of the past 60 minutes from the database and uploads them to the central server.
Further, the original load information comprises CPU usage, GPU usage, remaining memory capacity, and network usage.
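The two daemons described above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the names `LoadVector`, `collect_load_vector`, and `vectors_to_upload` are hypothetical, the sampling callback is assumed to return four fractions in [0, 1], and the database is simulated by a list of (timestamp, vector) pairs.

```python
from collections import namedtuple

# 4-dimensional original load vector L(t) = (phi1, phi2, phi3, phi4):
# CPU usage, GPU usage, remaining memory capacity, network usage.
LoadVector = namedtuple("LoadVector", ["cpu", "gpu", "mem_free", "net"])

COLLECT_INTERVAL_S = 5 * 60   # collection daemon samples every 5 minutes
UPLOAD_INTERVAL_S = 60 * 60   # upload daemon uploads every 60 minutes

def collect_load_vector(sample_fn):
    """Build one load vector from a sampling callback (hypothetical)."""
    cpu, gpu, mem_free, net = sample_fn()
    vec = LoadVector(cpu, gpu, mem_free, net)
    # in this sketch all four components are fractions in [0, 1]
    assert all(0.0 <= v <= 1.0 for v in vec)
    return vec

def vectors_to_upload(db, now, window_s=UPLOAD_INTERVAL_S):
    """Select the vectors of the past hour from a (timestamp, vector) log."""
    return [v for (ts, v) in db if now - ts < window_s]
```

In a real deployment `sample_fn` would query the operating system (e.g. `/proc` on Linux) and `db` would be the node-local database the patent mentions.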
Specifically, the input layer of the LSTM model takes three-dimensional data [samples, time_steps, features]: samples is the number of training samples; time_steps is the time step, i.e., how many preceding time points each datum's input depends on; features is the feature value, i.e., the load vector L(t).
Further, the R most recent original load information records are used to predict the original load k time units ahead; the prediction span k is a multiple of the time interval s = 5 minutes. Let f denote the model to be solved and L(t+k) the original load information of the node at time t+k; the prediction behaves as follows:
L(t+k) = f(L(t-R+1), L(t-R+2), …, L(t-1), L(t))
where the input-output pairs of the function f constitute the inputs and outputs of the required data set; one LSTM input-output data pair is:
<(L(t-R+1), L(t-R+2), …, L(t-1), L(t)), L(t+k)>.
Specifically, the computing task management system is responsible for container deployment. On receiving a task, it divides the task according to the task input size into s subtasks suitable for running on nodes and packs them into s containers; it estimates the maximum load information L'max during a single container's execution and the running time T; it then predicts each node's original load information Li(t) over the coming time T with that node's LSTM model. If there are n nodes in total and the current time is t1, then 0 < i ≤ n and t1 < t < t1 + T. It then selects as enabled nodes the s nodes that both satisfy the containers' load demand, i.e. L'max + Li(t) < 1, and maximize resource utilization, i.e. max(L'max + Li(t)). Finally the s containers are deployed onto the enabled nodes to run, and the subtask results obtained after they finish are merged into the final result.
Another technical solution of the invention is a working method of the load prediction-based edge computing platform container deployment system, comprising the following steps:
S1, the computing task management system divides a computing task into S subtasks and packs each into a container; it estimates the maximum load information Lmax and the running time T of a single container's execution from the task configuration file submitted by the user;
S2, the computing task management system initializes the node index, then, using the node load prediction system, obtains the current node's original load information L over the time T and judges whether the current node's remaining resources meet the need within T and whether resources are maximally utilized; if both hold, the node is added to the enabled-node list;
S3, when within T the current node's remaining resources do not meet the need, or resources are not maximally utilized, or the enabled-node list does not yet contain S nodes, increment the node index by 1 and return to step S2;
S4, once all nodes have been traversed, distribute the containers onto the enabled nodes and finish the deployment.
Specifically, in step S2, if the enabled-node list already contains S nodes, the containers are distributed onto the enabled nodes and the deployment ends.
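Steps S2 through S4 amount to a single greedy pass over the nodes with an early exit once S nodes have been found. The sketch below illustrates that control flow under simplifying assumptions not made in the patent: each node's predicted load over T is reduced to a scalar utilization per time point, and the maximum-utilization criterion is folded into the single feasibility check l_max + max(Li) < 1. The function name `deploy` is hypothetical.

```python
def deploy(nodes_pred_load, l_max, s_needed):
    """Greedy traversal of nodes (steps S2-S4).

    nodes_pred_load: per-node list of predicted scalar utilizations in [0, 1]
    over the running time T; l_max: estimated peak container load; s_needed:
    number of containers (and hence enabled nodes) required.
    """
    enabled = []
    for node_id, pred in enumerate(nodes_pred_load):   # S2/S3: walk the nodes
        if l_max + max(pred) < 1.0:                    # resources suffice over all of T
            enabled.append(node_id)
        if len(enabled) == s_needed:                   # early exit of step S2
            break
    # S4: if the whole list was traversed without S nodes, deployment fails
    return enabled if len(enabled) == s_needed else None
```

With four nodes and l_max = 0.4, `deploy([[0.2, 0.3], [0.9], [0.4, 0.1], [0.5]], 0.4, 2)` enables nodes 0 and 2 and stops without examining node 3.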
Compared with the prior art, the present invention has at least the following beneficial effects:
The load prediction-based edge computing platform container deployment method proposed by the invention fully considers that edge compute nodes are non-dedicated and guarantees the normal operation of the original tasks. By predicting the resource occupancy of the original tasks, containers are deployed reasonably while the original tasks keep running normally. Moreover, whereas conventional container deployment methods rely on current information, deployment based on load prediction relies on future information, avoiding the feedback-lag problem of traditional methods and reducing container migrations caused by insufficient node resources.
Further, the original load monitoring system fully accounts for the non-dedicated nature of the compute nodes by collecting the load information of the original tasks specifically, which is more reasonable than directly monitoring overall node load.
Further, deploying containers based on predicted load, rather than only the node's current load, fully considers that a computing task executes over a period of time, avoiding task migration caused by resource shortage during execution and the resource consumption migration brings.
Further, judging both whether the current node's remaining resources meet the need within T and whether resources are maximally utilized makes fuller use of node resources than judging resource sufficiency alone, reducing waste.
In conclusion, the proposed load prediction-based edge computing platform container deployment method considers the non-dedicated status of edge compute nodes, while load prediction avoids the lag of feedback, so containers can be deployed onto the compute nodes more reasonably and the cost of computing tasks is reduced.
The technical scheme of the present invention is described in further detail below through the drawings and embodiments.
Brief description of the drawings
Fig. 1 is a structural diagram of the system of the present invention;
Fig. 2 shows the basic structure of the LSTM used in the invention;
Fig. 3 is a flow chart of the container deployment method of the invention.
Specific embodiment
Because the compute nodes of edge computing are often non-dedicated, a long-running task already exists on a node before any computing task is issued to it. This task is called the original task, and the resource occupancy it causes is called the original load.
Referring to Fig. 1, the load prediction-based edge computing platform container deployment system provided by the invention comprises the original load monitoring system, the node load prediction system, and the computing task management system. The original load monitoring system runs long-term on the compute nodes; the node load prediction system and the computing task management system run long-term on the central node. The compute nodes are each connected to the node load prediction system and send their original load information to it; the node load prediction system receives this information and returns the prediction results needed by the computing task management system, which selects available compute nodes according to the received predictions and issues containers to them.
A. Original load monitoring system
The original load monitoring system collects the resource occupancy of each node's original task and uploads it to the central server for prediction model training and node load prediction.
An original load information collection daemon (collectd for short) runs on each node on which no computing task is running; every 5 minutes collectd collects the node's original load information, comprising CPU usage, GPU usage, remaining memory capacity, and network usage. The original load information forms a 4-dimensional load vector L(t) = (φ1, φ2, φ3, φ4) that is stored in a database.
An original load information upload daemon (uploadd for short) also runs on each such node; every 60 minutes uploadd reads the aforementioned load vectors of the past 60 minutes from the database and uploads them to the central server.
B. Node load prediction system
Because an original task runs long-term on each node, the node's original load exhibits periodic characteristics. For this reason a node original load prediction method based on long short-term memory neural networks (LSTM) is proposed, with each node owning its own LSTM model.
The LSTM network input layer takes three-dimensional data [samples, time_steps, features], specifically:
samples is the number of training samples;
time_steps is the time step, i.e., how many preceding time points each datum's input depends on;
features is the feature value, i.e., the load vector L(t).
The R most recent original load information records are used to predict the original load at time t+k, i.e. time_steps = R; the prediction span is k, which must be a multiple of the time interval s = 5 minutes. That is, the 12 most recent original load records up to time point t are used to predict the original load at time point t+k.
Let f denote the model to be solved and L(t+k) the original load information of the node at time t+k; the prediction behaves as follows:
L(t+k) = f(L(t-R+1), L(t-R+2), …, L(t-1), L(t))
where the input-output pairs of f constitute the inputs and outputs of the required data set.
Thus <(L(t-R+1), L(t-R+2), …, L(t-1), L(t)), L(t+k)> forms one LSTM input-output data pair.
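Building the data set from a load series is a sliding-window construction. The following sketch shows one way to generate the <window, target> pairs above; the function name `build_lstm_pairs` is an assumption, and the series is any time-ordered sequence of load vectors sampled once per interval s.

```python
def build_lstm_pairs(series, R, k_steps):
    """Build <(L(t-R+1) ... L(t)), L(t+k)> input-output pairs.

    series: load vectors ordered by time, one per interval s;
    R: window length (time_steps); k_steps: prediction span in intervals (k/s).
    """
    pairs = []
    # t is the index of the newest record in the window; the target must exist
    for t in range(R - 1, len(series) - k_steps):
        window = series[t - R + 1 : t + 1]   # the R most recent vectors up to t
        target = series[t + k_steps]         # the load k intervals ahead
        pairs.append((window, target))
    return pairs
```

For example, a 10-point series with R = 3 and k_steps = 1 yields 7 pairs, the first being ([L(0), L(1), L(2)], L(3)).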
Referring to Fig. 2, each basic unit of the LSTM has three gates that regulate how much of the previous network memory state enters the current network's computation.
The three gates are the forget gate, the input gate, and the output gate.
The forget gate controls how much of the previous hidden state h_{n-1} is retained into the current state h_n.
The input gate controls how much of the current network input X_n is saved into the hidden state h_n.
The output gate controls how much of the hidden state h_n is emitted into the current output value Y_n.
The three gates compose the basic unit of the LSTM, called a cell, in which f_n denotes the forget gate, i_n the input gate, o_n the output gate, and h_n the current cell state. The basic structure of an LSTM neural network cell is shown in Fig. 2.
The forget gate f_n is expressed as:
f_n = δ(W_{f,x} X_n + W_{f,y} Y_{n-1} + b_f)
The input gate i_n is expressed as:
i_n = δ(W_{i,x} X_n + W_{i,y} Y_{n-1} + b_i)
The output gate o_n is expressed as:
o_n = δ(W_{o,x} X_n + W_{o,y} Y_{n-1} + b_o)
The current cell state h_n is expressed as:
h_n = h_{n-1} f_n + i_n tanh(W_c X_n + U_c Y_{n-1} + b_c)
The current cell output is:
y_n = o_n tanh(h_n)
Here δ is the sigmoid function, applied in the forget gate, input gate, and output gate; its output lies in [0, 1], each value indicating whether the corresponding piece of information should pass: 0 means no information passes, and 1 means all information passes. The tanh function is used for the state and the output. W denotes a weight, e.g. W_{f,x} is the weight the forget gate applies to the current input X_n; b denotes a bias.
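The five equations above can be executed directly. The sketch below implements one cell step for a scalar input, following the document's own convention that h_n is the cell state and y_n the output; the weight-dictionary keys (Wf_x, Wf_y, bf, ...) are hypothetical names mirroring the symbols in the equations, not a real library API.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x_n, y_prev, h_prev, w):
    """One LSTM cell step per the equations above (scalar sketch)."""
    f_n = sigmoid(w["Wf_x"] * x_n + w["Wf_y"] * y_prev + w["bf"])  # forget gate
    i_n = sigmoid(w["Wi_x"] * x_n + w["Wi_y"] * y_prev + w["bi"])  # input gate
    o_n = sigmoid(w["Wo_x"] * x_n + w["Wo_y"] * y_prev + w["bo"])  # output gate
    # cell state: retained previous state plus gated candidate
    h_n = h_prev * f_n + i_n * math.tanh(w["Wc"] * x_n + w["Uc"] * y_prev + w["bc"])
    y_n = o_n * math.tanh(h_n)                                     # cell output
    return y_n, h_n
```

With all weights and biases zero, every gate evaluates to δ(0) = 0.5, the candidate tanh term vanishes, and the cell simply halves its previous state, which makes the gating behaviour easy to verify by hand.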
The LSTM network architecture parameters are set as follows:
time_steps is set to 12, i.e., each datum is associated with the data of the 12 preceding time points;
the prediction span is set to 5, i.e., the original load vector five minutes ahead is predicted;
the number of hidden-layer nodes is set to 10; activation, the activation function, is set to 'tanh';
recurrent_activation, the activation function applied at the recurrent step, is set to 'hard_sigmoid';
dropout is set to 0.2;
batch_size is set to 128;
optimizer is set to the Adam optimizer;
the loss function is set to MAE.
C. Computing task management system
The computing task management system is responsible for container deployment. On receiving a task, the management system divides it according to the task input size into s subtasks suitable for running on nodes and packs them into s containers. It then estimates the maximum load information L'max during a single container's execution and the running time T.
It then predicts each node's original load information Li(t) over the coming time T with that node's LSTM model. Suppose there are n nodes in total and the current time is t1; then 0 < i ≤ n and t1 < t < t1 + T. It then selects the s nodes that both satisfy the containers' load demand, i.e. L'max + Li(t) < 1, and maximize resource utilization, i.e. max(L'max + Li(t)). Finally the s containers are deployed onto these nodes to run, and the subtask results obtained after they finish are merged into the final result.
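The selection rule just stated can be sketched as a filter-then-rank procedure: keep the nodes whose predicted peak load plus L'max stays below 1 throughout T, then prefer the nodes whose resulting utilization is highest. This is an illustrative reading under the assumption that the per-node load Li(t) is reduced to a scalar utilization per predicted time point; the function name `select_nodes` is hypothetical.

```python
def select_nodes(pred_loads, l_max, s):
    """Pick s nodes: feasible (l_max + max Li(t) < 1 over T) and, among the
    feasible ones, those with the highest resulting utilization."""
    feasible = []
    for node_id, pred in enumerate(pred_loads):
        peak = l_max + max(pred)      # worst case over t1 < t < t1 + T
        if peak < 1.0:                # container load demand is satisfied
            feasible.append((peak, node_id))
    feasible.sort(reverse=True)       # maximize resource utilization first
    if len(feasible) < s:
        return None                   # not enough enabled nodes
    return [node_id for peak, node_id in feasible[:s]]
```

For instance, with predicted loads [0.1], [0.5], [0.3], [0.8] and l_max = 0.3, node 3 is infeasible (peak 1.1), and for s = 2 the highest-utilization feasible nodes are nodes 1 and 2.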
Referring to Fig. 3, the container deployment method is as follows:
S1, the computing task management system divides a computing task into S subtasks and packs each into a container; it estimates the maximum load information Lmax and the running time T of a single container's execution from the task configuration file submitted by the user;
S2, the computing task management system initializes the node index, then, using the node load prediction system, obtains the current node's original load information L over the time T and judges whether the current node's remaining resources meet the need within T and whether resources are maximally utilized; if both hold, the node is added to the enabled-node list; if the enabled-node list already contains S nodes, the containers are distributed onto the enabled nodes and the deployment ends;
S3, when within T the current node's remaining resources do not meet the need, or resources are not maximally utilized, or the enabled-node list does not yet contain S nodes, increment the node index by 1 and return to step S2;
S4, once all nodes have been traversed, distribute the containers onto the enabled nodes and finish the deployment.
The above content merely illustrates the technical idea of the present invention and does not limit its scope of protection. Any change made on the basis of the technical scheme according to the technical idea proposed by the present invention falls within the protection scope of the claims of the present invention.

Claims (8)

1. A load prediction-based edge computing platform container deployment system, characterized by comprising:
multiple compute nodes and one central node; each compute node runs a long-lived task besides the tasks distributed by the edge computing platform, called the original task, whose load is called the original load; the central node is dedicated;
an original load monitoring system carried on each compute node; each original load monitoring system is connected to the node load prediction system, collects the resource occupancy of the node's original task, and uploads it through the node load prediction system to the central server for prediction model training and node load prediction;
a node load prediction system and a computing task management system carried on the central node; the node load prediction system contains one LSTM model per compute node, receives each node's original load information, and sends the prediction results to the computing task management system;
the computing task management system is responsible for container deployment; according to the received information it feeds the node number and task time back to the node load prediction system, and issues containers to the available compute nodes.
2. The system according to claim 1, characterized in that the original load monitoring system runs an original load information collection daemon on each node on which no computing task is running; every 5 minutes this daemon collects the node's original load information, forming a 4-dimensional original load vector L(t) = (φ1, φ2, φ3, φ4) that is stored in a database; an original load information upload daemon also runs on each such node, and every 60 minutes it reads the original load vectors of the past 60 minutes from the database and uploads them to the central server.
3. The system according to claim 2, characterized in that the original load information comprises CPU usage, GPU usage, remaining memory capacity, and network usage.
4. The system according to claim 1, characterized in that the input layer of the LSTM model takes three-dimensional data [samples, time_steps, features], where samples is the number of training samples; time_steps is the time step, i.e., how many preceding time points each datum's input depends on; and features is the feature value, i.e., the load vector L(t).
5. The system according to claim 4, characterized in that the R most recent original load information records are used to predict the original load k time units ahead; the prediction span k is a multiple of the time interval s = 5 minutes; let f denote the model to be solved and L(t+k) the original load information of the node at time t+k; the prediction behaves as follows:
L(t+k) = f(L(t-R+1), L(t-R+2), …, L(t-1), L(t))
where the input-output pairs of the function f constitute the inputs and outputs of the required data set; one LSTM input-output data pair is:
<(L(t-R+1), L(t-R+2), …, L(t-1), L(t)), L(t+k)>.
6. The system according to claim 1, characterized in that the computing task management system is responsible for container deployment; after receiving a task, it divides the task according to the task input size into s subtasks suitable for running on nodes and packs them into s containers; it estimates the maximum load information L'max during a single container's execution and the running time T; it then predicts each node's original load information Li(t) over the coming time T with that node's LSTM model; if there are n nodes in total and the current time is t1, then 0 < i ≤ n and t1 < t < t1 + T; it then selects as enabled nodes the s nodes that both satisfy the containers' load demand, i.e. L'max + Li(t) < 1, and maximize resource utilization, i.e. max(L'max + Li(t)); finally the s containers are deployed onto the enabled nodes to run, and the subtask results obtained after they finish are merged into the final result.
7. A working method of the load-prediction-based edge computing platform container deployment system according to any one of claims 1 to 6, characterized by comprising the following steps:
S1: the computing task management system divides the computing task into S subtasks and packs them into containers respectively, and estimates the maximum load Lmax and the running time T of a single container during execution from the task configuration file submitted by the user;
S2: the computing task management system initializes the node index and then, using the original load L of the current node over time T predicted by the node load prediction system, judges whether the remaining resources of the current node meet the demand within time T and whether resource utilization is maximized; if the demand is met and resource utilization is maximized, the current node is added to the available node list;
S3: if, within time T, the remaining resources of the current node do not meet the demand, or resource utilization is not maximized, or the available node list contains fewer than S nodes, increment the node index by 1 and return to step S2;
S4: once all nodes have been traversed, distribute the containers to the available nodes and finish the deployment.
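Step S2 consumes the per-node forecast produced by the node load prediction system, which the claims name as an LSTM model. A minimal single-cell LSTM forward pass in NumPy illustrates the shape of such a predictor; the weights below are untrained random placeholders, and reading the prediction off the first hidden unit is an assumption of this sketch, not the patent's trained model.

```python
import numpy as np

def lstm_cell(x, h, c, W, U, b):
    """One LSTM step: input x, previous hidden state h, previous cell state c."""
    z = W @ x + U @ h + b                     # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)               # input, forget, output, candidate
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c_new = sig(f) * c + sig(i) * np.tanh(g)  # forget old state, write new
    h_new = sig(o) * np.tanh(c_new)           # gated output
    return h_new, c_new

def forecast(history, steps, W, U, b):
    """Warm up on the observed load history, then roll the cell forward
    `steps` times, feeding each prediction back in as the next input."""
    hidden = len(b) // 4
    h, c = np.zeros(hidden), np.zeros(hidden)
    for x in history:
        h, c = lstm_cell(np.array([x]), h, c, W, U, b)
    preds = []
    for _ in range(steps):
        x = float(h[0])                       # 1-d prediction from the hidden state
        preds.append(x)
        h, c = lstm_cell(np.array([x]), h, c, W, U, b)
    return preds

rng = np.random.default_rng(0)
hidden = 4
W = rng.normal(0, 0.1, (4 * hidden, 1))       # input weights (placeholder)
U = rng.normal(0, 0.1, (4 * hidden, hidden))  # recurrent weights (placeholder)
b = np.zeros(4 * hidden)
print(forecast([0.2, 0.25, 0.3], steps=3, W=W, U=U, b=b))
```

In the deployed system these weights would be trained per node on its historical load trace, so each node carries its own predictor, as claim 6 describes.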
8. The load-prediction-based edge computing platform container deployment method according to claim 7, wherein, in step S2, if the available node list already contains S nodes, the containers are distributed to the available nodes and the deployment is finished.
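Steps S1–S4, combined with the early exit added by claim 8, amount to one ordered scan over the nodes. A minimal sketch of that control flow (the `fits` predicate is a hypothetical stand-in for the remaining-resource and utilization checks of step S2, and the container naming is illustrative):

```python
def deploy(nodes, fits, s):
    """Traverse nodes in order (S2/S3), collect those passing the S2
    checks into the available list, and stop as soon as s nodes are
    found (claim 8) or all nodes are exhausted (S4)."""
    available = []
    for node in nodes:            # S3: advance the node index by 1
        if fits(node):            # S2: remaining-resource and utilization check
            available.append(node)
        if len(available) == s:   # claim 8: early termination
            break
    # S4: distribute one container per selected node
    return {node: f"container-{i}" for i, node in enumerate(available)}

placements = deploy(range(6), lambda n: n % 2 == 0, s=2)
print(placements)  # → {0: 'container-0', 2: 'container-1'}
```

The loop stops as soon as S suitable nodes are found; failing that, it terminates after all nodes have been traversed.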
CN201910420328.1A 2019-05-20 2019-05-20 Load prediction-based edge computing platform container deployment method and system Active CN110231976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910420328.1A CN110231976B (en) 2019-05-20 2019-05-20 Load prediction-based edge computing platform container deployment method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910420328.1A CN110231976B (en) 2019-05-20 2019-05-20 Load prediction-based edge computing platform container deployment method and system

Publications (2)

Publication Number Publication Date
CN110231976A true CN110231976A (en) 2019-09-13
CN110231976B CN110231976B (en) 2021-04-20

Family

ID=67861431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910420328.1A Active CN110231976B (en) 2019-05-20 2019-05-20 Load prediction-based edge computing platform container deployment method and system

Country Status (1)

Country Link
CN (1) CN110231976B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110618870A (en) * 2019-09-20 2019-12-27 广东浪潮大数据研究有限公司 Working method and device for deep learning training task
CN111459617A (en) * 2020-04-03 2020-07-28 南方电网科学研究院有限责任公司 Containerized application automatic allocation optimization system and method based on cloud platform
CN111488213A (en) * 2020-04-16 2020-08-04 中国工商银行股份有限公司 Container deployment method and device, electronic equipment and computer-readable storage medium
CN112969157A (en) * 2021-02-22 2021-06-15 重庆邮电大学 Network load balancing method for unmanned aerial vehicle
CN112995636A (en) * 2021-03-09 2021-06-18 浙江大学 360-degree virtual reality video transmission system based on edge calculation and active cache and parameter optimization method
WO2022067557A1 (en) * 2020-09-29 2022-04-07 西门子股份公司 Method and apparatus for designing edge computing solution, and computer-readable medium
WO2022267724A1 (en) * 2021-06-22 2022-12-29 International Business Machines Corporation Cognitive scheduler for kubernetes
US11953972B2 (en) 2022-04-06 2024-04-09 International Business Machines Corporation Selective privileged container augmentation

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
US11868812B2 (en) 2021-08-12 2024-01-09 International Business Machines Corporation Predictive scaling of container orchestration platforms

Citations (9)

Publication number Priority date Publication date Assignee Title
CN101860481A (en) * 2010-05-25 2010-10-13 Beijing University of Posts and Telecommunications Service transport method and device for priority differentiation in MPLS-TP over OTN multi-layer networks
US20140095649A1 (en) * 2011-02-07 2014-04-03 Microsoft Corporation Proxy-based cache content distribution and affinity
CN106055379A (en) * 2015-04-09 2016-10-26 International Business Machines Corporation Method and system for scheduling computing tasks
CN106844051A (en) * 2017-01-19 2017-06-13 Hohai University Power-consumption-optimized load task migration algorithm in an edge computing environment
CN108062561A (en) * 2017-12-05 2018-05-22 South China University of Technology Short-term data stream prediction method based on a long short-term memory network model
CN108170529A (en) * 2017-12-26 2018-06-15 Beijing University of Technology Cloud data center load prediction method based on a long short-term memory network
CN108363478A (en) * 2018-01-09 2018-08-03 Peking University Deep learning application model load sharing system and method for wearable devices
CN108809695A (en) * 2018-04-28 2018-11-13 Electric Power Research Institute of State Grid Zhejiang Electric Power Co., Ltd. Distributed uplink offloading strategy for mobile edge computing
CN108809723A (en) * 2018-06-14 2018-11-13 Chongqing University of Posts and Telecommunications Edge server joint task offloading and convolutional neural network layer scheduling method

Non-Patent Citations (2)

Title
LIPING ZHANG, QIUHUA TANG, ZHENGJIA WU, FANG WANG: "Mathematical modeling and evolutionary generation of rule sets for", Energy *
FENG MINGXIA, WU WEIGUO, DI DEHAI: "Multi-center job scheduling algorithm based on load awareness and QoS", Computer Technology and Development *

Cited By (13)

Publication number Priority date Publication date Assignee Title
CN110618870B (en) * 2019-09-20 2021-11-19 广东浪潮大数据研究有限公司 Working method and device for deep learning training task
WO2021051713A1 (en) * 2019-09-20 2021-03-25 广东浪潮大数据研究有限公司 Working method and device for deep learning training task
CN110618870A (en) * 2019-09-20 2019-12-27 广东浪潮大数据研究有限公司 Working method and device for deep learning training task
CN111459617A (en) * 2020-04-03 2020-07-28 南方电网科学研究院有限责任公司 Containerized application automatic allocation optimization system and method based on cloud platform
CN111488213A (en) * 2020-04-16 2020-08-04 中国工商银行股份有限公司 Container deployment method and device, electronic equipment and computer-readable storage medium
CN111488213B (en) * 2020-04-16 2024-04-02 中国工商银行股份有限公司 Container deployment method and device, electronic equipment and computer readable storage medium
WO2022067557A1 (en) * 2020-09-29 2022-04-07 西门子股份公司 Method and apparatus for designing edge computing solution, and computer-readable medium
CN112969157A (en) * 2021-02-22 2021-06-15 重庆邮电大学 Network load balancing method for unmanned aerial vehicle
CN112995636A (en) * 2021-03-09 2021-06-18 浙江大学 360-degree virtual reality video transmission system based on edge calculation and active cache and parameter optimization method
CN112995636B (en) * 2021-03-09 2022-03-25 浙江大学 360-degree virtual reality video transmission system based on edge calculation and active cache and parameter optimization method
WO2022267724A1 (en) * 2021-06-22 2022-12-29 International Business Machines Corporation Cognitive scheduler for kubernetes
US11928503B2 (en) 2021-06-22 2024-03-12 International Business Machines Corporation Cognitive scheduler for Kubernetes
US11953972B2 (en) 2022-04-06 2024-04-09 International Business Machines Corporation Selective privileged container augmentation

Also Published As

Publication number Publication date
CN110231976B (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN110231976A (en) A kind of edge calculations platform container dispositions method and system based on load estimation
Zhang et al. A hierarchical game framework for resource management in fog computing
CN113282368B (en) Edge computing resource scheduling method for substation inspection
Shi et al. Mean field game guided deep reinforcement learning for task placement in cooperative multiaccess edge computing
CN108900358A (en) Virtual network function dynamic migration method based on deepness belief network resource requirement prediction
CN106502792A (en) A kind of multi-tenant priority scheduling of resource method towards dissimilar load
Alkhalaileh et al. Data-intensive application scheduling on mobile edge cloud computing
CN106600058A (en) Prediction method for combinations of cloud manufacturing service quality of service (QoS)
CN109831522A (en) A kind of vehicle connection cloud and mist system dynamic resource Optimal Management System and method based on SMDP
Wu et al. Multi-agent DRL for joint completion delay and energy consumption with queuing theory in MEC-based IIoT
CN107404409A (en) Towards the container cloud elastic supply number of containers Forecasting Methodology and system of mutation load
CN105491329B (en) A kind of extensive monitoring video flow assemblage method based on streaming computing
CN113098711A (en) Power distribution Internet of things CPS (control system) management and control method and system based on cloud edge cooperation
CN113347027B (en) Virtual instance placement method facing network virtual twin
Shafik et al. Internet of things-based energy efficiency optimization model in fog smart cities
Xu et al. Computation offloading for energy and delay trade-offs with traffic flow prediction in edge computing-enabled iov
Huang et al. Performance modelling and analysis for IoT services
CN108319501B (en) Elastic resource supply method and system based on micro-service gateway
Qin et al. Optimal workload allocation for edge computing network using application prediction
CN106844175B (en) A kind of cloud platform method for planning capacity based on machine learning
Zhang et al. An approximation of the customer waiting time for online restaurants owning delivery system
Liu et al. Surface roughness prediction method of titanium alloy milling based on CDH platform
Yan et al. Service caching for meteorological emergency decision-making in cloud-edge computing
Cui et al. Multi-Agent Reinforcement Learning Based Cooperative Multitype Task Offloading Strategy for Internet of Vehicles in B5G/6G Network
Hu et al. Effective cross-region courier-displacement for instant delivery via reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant