CN110850957A - Scheduling method for reducing system power consumption through dormancy in edge computing scene - Google Patents
- Publication number
- CN110850957A CN110850957A CN201911099109.4A CN201911099109A CN110850957A CN 110850957 A CN110850957 A CN 110850957A CN 201911099109 A CN201911099109 A CN 201911099109A CN 110850957 A CN110850957 A CN 110850957A
- Authority
- CN
- China
- Prior art keywords
- task
- server
- delay
- slave
- power consumption
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/329—Power saving characterised by the action undertaken by task scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/3296—Power saving characterised by the action undertaken by lowering the supply or operating voltage
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/4401—Bootstrapping
- G06F9/4418—Suspend and resume; Hibernate and awake
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Security & Cryptography (AREA)
- Power Sources (AREA)
Abstract
The invention provides a power-saving scheduling algorithm for edge scenarios that improves the endurance of devices at the edge. Unlike other edge computing resource schedulers, it adds a dormancy mechanism for edge servers and further reduces their power consumption through task scheduling. The method divides each edge computing node into a master server and a slave server: the master server receives and processes data, the slave server only processes data, and when the slave server has no data to process, it enters a dormant state. Two strategies are proposed: an aggressive strategy, which prevents delay growth when computing capacity has priority, and a conservative strategy, which achieves the lowest power consumption within a tolerable delay. Load balancing across multiple servers then achieves the goal of jointly optimizing latency and power consumption.
Description
Technical Field
The invention belongs to the field of edge computing and aims to significantly reduce the energy consumption of edge servers without significantly increasing task delay.
Background
Edge computing is currently the most promising low-latency computing paradigm. Traditional cloud computing processes massive amounts of information centrally on large-capacity clusters; concentrating all computation in the cloud makes data easy to collect and share and has enabled a range of big-data applications. Although cloud computing offers almost unlimited computing power, exploiting it incurs substantial communication cost and therefore unavoidable delay: centralized processing requires user data to be uploaded to the cloud, so latency is hard to reduce. Meanwhile, real-time applications have proliferated and their delay requirements keep tightening. The edge computing architecture was proposed to address the high latency of cloud computing: tasks with strict delay requirements are processed near the user, and services that previously could run only on a cloud platform are placed on edge servers. Compared with the cloud, however, an edge server has limited computing power and a limited energy budget, which is its biggest bottleneck.
In the field of edge computing, delay and power consumption are the two main optimization targets.
Disclosure of Invention
The main objective of this patent is to reduce the power consumption of edge computing. The method adds a dormancy mechanism to the edge server, performs resource scheduling around this mechanism, and further reduces edge server power consumption through task scheduling.
The patent mainly addresses scheduling among four kinds of objects in edge computing. As shown in fig. 1, a complete edge computing framework includes four objects: cloud, proxy, edge node, and sensor. Tasks are issued by the sensors and may be processed in the cloud or on edge servers within the edge nodes; the proxy servers transmit data between the edge servers and the cloud servers.
The traditional task scheduling method walks along the transmission link from the user side to the cloud, checking at each computing device whether it can process the task: if so, the task is processed there; if not, it is forwarded to the next device, until in the worst case it is processed in the cloud.
Unlike the traditional method, each edge node is split from one server into two servers in a master-slave mode. The master server is responsible for scheduling tasks and executing them; the slave server is responsible only for executing tasks. When no task needs to be executed, the slave server enters a dormant state (this is the root of the power savings), and when the master server is idle while the slave server is busy, the two switch roles. The sleep state is a very low-power state: a computing device in this state usually keeps powered only its memory and the devices needed for waking up, commonly a network card, mouse, keyboard, power button, or screen switch.
In summary, the goal is to reduce power consumption while adding as little delay as possible.
We will now describe the implementation of the method in more detail.
First, the power consumption model of the edge server is introduced, as shown in equation (1): P_j is the real-time power consumption of the server, P_a the active-state power consumption, and P_s the sleep-state power consumption.
We fit the relationship between power consumption and load with a linear function. As shown in equation (2), active-state power consumption is linear in the load M_c, where k and b are determined by the device's full-load and no-load power consumption.

P_a = k·M_c + b (2)
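As a concrete illustration, the power model of equations (1)–(2) can be sketched in a few lines. The helper names and sample values of k, b, and the sleep power are ours, not from the patent:

```python
def active_power(load_mips: float, k: float, b: float) -> float:
    """Active-state power P_a = k * M_c + b (equation (2))."""
    return k * load_mips + b

def server_power(is_active: bool, load_mips: float, k: float, b: float,
                 sleep_power: float) -> float:
    """Real-time power P_j: active power when running, sleep power when dormant."""
    return active_power(load_mips, k, b) if is_active else sleep_power
```

With a no-load power b = 30 W, a slope k fitted from full-load power, and the roughly 4.5 W sleep power mentioned later in the text, a dormant slave server draws an order of magnitude less than an idle active one.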
Next comes the delay model, since delay is the other indicator of primary concern in edge computing.
T_a denotes the total delay, comprising the transmission delay T_t and the computation delay T_c, as shown in equation (3).

T_a = T_t + T_c (3)
The transmission delay T_t is proportional to the number of node hops J_n participating in the task's transmission. The transmission delay of a single hop is the sum of the link delay T_l and the data transmission delay T_m; the link delay T_l depends on the transmission medium and distance, while the data transmission delay is proportional to the amount of data and inversely proportional to the link bandwidth.

T_t = J_n (T_l + T_m) (4)
The computation delay T_c depends on D_s, the number of instructions required to complete a single task, M, the processing capability (in MIPS) of the node where the task resides, and T_n, the number of tasks on the current node. Since the number of tasks running simultaneously on a node varies over time and the processing resources are divided equally among them, the delay must be accumulated over each interval during which T_n is constant, yielding the total service time T_s.
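The delay model of equations (3)–(4) can likewise be sketched as follows. Function names and units are our assumptions (data volume in bits, bandwidth in bits per second), and the computation-delay helper treats T_n as constant rather than accumulating over intervals:

```python
def transmission_delay(hops: int, link_delay_s: float, data_bits: float,
                       bandwidth_bps: float) -> float:
    """T_t = J_n * (T_l + T_m), where T_m = data volume / link bandwidth (eq. (4))."""
    return hops * (link_delay_s + data_bits / bandwidth_bps)

def computation_delay(task_mi: float, node_mips: float, n_tasks: int) -> float:
    """Computation delay when the node's MIPS is split equally among n_tasks."""
    return task_mi * n_tasks / node_mips

def total_delay(t_t: float, t_c: float) -> float:
    """T_a = T_t + T_c (equation (3))."""
    return t_t + t_c
```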
Each edge server is split into a master and a slave server: the master schedules and executes tasks, the slave only executes them, and when the slave has no task to execute it enters the dormant state (this is the root of the power savings). The goal is to reduce power consumption with as little added delay as possible.
We propose two strategies, aggressive and conservative. The aggressive strategy prevents delay growth while prioritizing computing capacity; the conservative strategy achieves the lowest power consumption within a tolerable delay. We describe each below.
To make better use of the dormant state, the edge server is divided into two independent servers, each of which can enter the dormant state on its own: the master server receives and processes tasks, the slave server only processes tasks, and when the slave server has no task to process it shuts down into the dormant state.
We assume that the computation amount of each task and the computing power of the master and slave servers are known in advance, so that computing power can be matched exactly to the number of tasks. This is the ideal state for load balancing.
The central idea of the aggressive strategy is to start the slave server as readily as possible so that delay does not grow. Since the system's overall computing power is unchanged, keeping the slave server running preserves computing performance, and preserving computing performance ensures that delay increases only slightly compared with the original scheme.
When the total task number is 0 — the master server is at idle power waiting to receive tasks and the slave server is at sleep power — and a task arrives, we first judge whether processing it on the master server (the preferred choice) would exceed the task's maximum tolerable delay. If not, the task is processed on the master server and the master's count of executing tasks is incremented by 1. If so, the slave server is started, and we further judge whether processing on the slave would exceed the maximum tolerable delay: if so, the task is uploaded to the cloud for processing; if not, the slave server processes it.
When the total task number is 1 — the master server is at full-load power — a newly arrived task is preferentially migrated to the slave server. We judge whether this exceeds the task's maximum tolerable delay; if not, the slave server enters the running state and its task count T_e is set to 1. If so, the task enters the master server instead, and we further judge whether that exceeds the maximum tolerable delay: if so, the task is uploaded to the cloud for processing; otherwise it is processed on the master server. For each subsequently arriving task, the final placement is decided in turn according to the device ordering.
The device ordering is computed as follows: from the master server's processing capability M_o, the slave server's processing capability M_e, the number of tasks being executed by the master T_o, and the number being executed by the slave T_e, a ranking index C is calculated as shown in formula (1). If C is larger than 1, the order is slave server, master server, cloud; otherwise it is master server, slave server, cloud.
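A minimal sketch of the aggressive placement logic described above. Since formula (1) for the ranking index C is not reproduced in this text, the ratio used here — per-task spare capacity of the slave versus the master — is only one plausible reading and should be treated as an assumption:

```python
from dataclasses import dataclass

@dataclass
class Server:
    mips: float        # processing capability (M_o for the master, M_e for the slave)
    n_tasks: int = 0   # tasks currently executing (T_o / T_e)
    asleep: bool = False

def est_delay(server: Server, task_mi: float) -> float:
    # Node MIPS is split equally among concurrent tasks (see the delay model).
    return task_mi * (server.n_tasks + 1) / server.mips

def place_task_aggressive(task_mi: float, max_delay: float,
                          master: Server, slave: Server) -> str:
    # Ranking index C (our reading of formula (1)): C > 1 prefers the slave.
    c = (slave.mips / (slave.n_tasks + 1)) / (master.mips / (master.n_tasks + 1))
    order = [slave, master] if c > 1 else [master, slave]
    for srv in order:
        if est_delay(srv, task_mi) <= max_delay:
            srv.asleep = False       # wake the server if it was dormant
            srv.n_tasks += 1
            return 'slave' if srv is slave else 'master'
    return 'cloud'  # neither server meets the tolerable delay
```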
The conservative strategy, by contrast, aims at the greatest power savings: guided by each task's maximum tolerable delay, tasks are migrated to the slave server as rarely as possible, so that power consumption is minimized while delays remain tolerable.
Regardless of how many tasks each server is currently executing, when a new task arrives we always first calculate whether, with all tasks running in parallel on the master server, any running task would reach its maximum tolerable delay. If none would, the new task is placed on the master server. If some would, the new task is tentatively placed on the slave server, and we further judge whether any task running in parallel on the slave would reach its maximum tolerable delay: if none would, the new task stays on the slave server; otherwise it is uploaded to the cloud for processing.
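The conservative placement rule can be sketched as follows. Each task is represented as a (workload, max tolerable delay) pair — a representation of our own choosing — and the final fall-back to the cloud mirrors the aggressive strategy, since the original sentence is truncated at that point:

```python
def fits(tasks, mips):
    """True if every (workload_mi, max_delay) task meets its deadline when the
    node's MIPS is split equally among all of them."""
    n = len(tasks)
    return all(mi * n / mips <= dmax for mi, dmax in tasks)

def place_task_conservative(new_task, master_tasks, slave_tasks,
                            master_mips, slave_mips):
    """Keep the slave dormant as long as the master can still meet every
    task's maximum tolerable delay; otherwise try the slave, then the cloud."""
    if fits(master_tasks + [new_task], master_mips):
        master_tasks.append(new_task)
        return 'master'
    if fits(slave_tasks + [new_task], slave_mips):
        slave_tasks.append(new_task)
        return 'slave'
    return 'cloud'
```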
If the master server finishes its tasks before the slave server, the master is idle while the slave is busy. The two should then switch roles: the original slave becomes the new master and receives tasks, while the original master enters the dormant state.
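The role switch can be expressed directly. Dictionaries stand in for server state here, and the field names are ours:

```python
def maybe_switch_roles(master, slave):
    """Swap roles when the master is idle while the slave is busy: the busy
    server becomes the new master (and receives tasks), and the now-idle
    original master enters the dormant state."""
    if master['n_tasks'] == 0 and slave['n_tasks'] > 0:
        master, slave = slave, master
        master['asleep'] = False
        slave['asleep'] = True
    return master, slave
```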
Advantageous effects
The invention achieves the goal of jointly optimizing delay and power consumption through the two strategies designed above.
Drawings
In order to make the purpose of the present invention more comprehensible, the present invention will be further described with reference to the accompanying drawings.
FIG. 1 is a diagram of an edge computing architecture according to the present invention;
FIG. 2 is a diagram of scheduling policy method steps;
FIG. 3 is parameters used in simulation;
FIG. 4 is a comparison of power consumption for a single node;
FIG. 5 is a comparison of total system power consumption;
FIG. 6 is a comparison of the average delay of the system;
Detailed Description
The following description takes the aggressive strategy as an example.
Step 1: start the master server and put the slave server to sleep; the master server begins receiving tasks.
Step 2: when the master server receives a task, decide where the task will execute according to the method's calculation.
Step 3: collect the number of tasks currently being executed by the master and slave servers, the tasks' maximum tolerable delays, and the processing capabilities: the master's task count T_o, the slave's task count T_e, the master's processing capability M_o, and the slave's processing capability M_e.
Step 4: when the total task number is 0, no task has been received; the master server is at idle power waiting to receive tasks, and the slave server is at sleep power.
When the total task number is 0 — the master server is at idle power waiting to receive tasks and the slave server is at sleep power — and a task arrives, we first judge whether processing it on the master server (the preferred choice) would exceed the task's maximum tolerable delay. If not, the task is processed on the master server and the master's count of executing tasks is incremented by 1. If so, the slave server is started, and we further judge whether processing on the slave would exceed the maximum tolerable delay: if so, the task is uploaded to the cloud for processing; if not, the slave server processes it.
When the total task number is 1 — the master server is at full-load power — a newly arrived task is preferentially migrated to the slave server. We judge whether this exceeds the task's maximum tolerable delay; if not, the slave server enters the running state and its task count T_e is set to 1. If so, the task enters the master server instead, and we further judge whether that exceeds the maximum tolerable delay: if so, the task is uploaded to the cloud for processing; otherwise it is processed on the master server. For each subsequently arriving task, the final placement is decided in turn according to the device ordering.
The device ordering is computed as follows: from the master server's processing capability M_o, the slave server's processing capability M_e, the number of tasks being executed by the master T_o, and the number being executed by the slave T_e, a ranking index C is calculated as shown in formula (1). If C is larger than 1, the order is slave server, master server, cloud; otherwise it is master server, slave server, cloud.
in particular, if a task is preferentially assigned to a slave server, the master server may be in an idle state and the slave server may be in a busy state. The master server and the slave server should switch roles, the original slave server serves as a new master server to receive tasks, and the original master server enters a dormant state.
In the following we use the iFogSim simulator to evaluate the effectiveness of the method.
As shown in fig. 3, we use three different edge-server configurations. To reduce the power consumption of an edge node, a server originally using Config 1 is split into a pair of servers, one with Config 2 and one with Config 3. To ensure fairness, the combined computing power of Config 2 and Config 3 equals that of Config 1.
We refer to a deployment with only a single Config 1 server as "single device", and the paired Config 2 / Config 3 deployment as "multiple devices". The master server uses Config 2 and the slave uses Config 3; the master participates in task scheduling and execution, the slave only executes tasks and enters the dormant state when idle. Because sleep power consumption is independent of the processor, it is the same for all three platforms; it is extremely low in general — about 4.5 W for an ordinary PC. The subsequent experiments use this configuration.
The experimental results are based on the aggressive strategy, as shown in figures 4, 5, and 6.
As shown in fig. 4, the method described in this specification yields a power-consumption improvement for each individual server.
As shown in fig. 5, we ran experiments at different task-generation densities, where mean denotes the mean interval between task arrivals and the vertical axis is the total power consumption of the edge nodes in the experimental system. A larger mean implies a larger interval and a smaller total number of tasks. With mean = 5, the power consumption of multiple devices is 3.3% lower than that of single devices; with mean = 6, 18.5% lower; with mean = 7, 26.3% lower; and with mean = 8, 27.9% lower. We conclude that the advantage of the multiple-devices scheme shrinks as task generation becomes denser: the method reduces power consumption markedly at low task density and less so at high task density — the smaller the task volume, the greater the savings.
As shown in fig. 6, with mean = 5 the delay of multiple devices is 0.04% higher than that of single devices; with mean = 6, 5.25% higher; with mean = 7, 16.69% higher; and with mean = 8, 17.07% higher. The method thus has some side effect on delay, but in every experiment group the power-saving percentage exceeds the delay-increase percentage, so the method retains clear value in settings where power consumption is the dominant concern.
Claims (1)
1. A scheduling method for reducing system power consumption through dormancy in an edge computing scenario, characterized in that: each edge node is split from one server into two servers in a master-slave mode; the master server is responsible for scheduling and executing tasks, and the slave server is responsible for executing tasks; when no task needs to be executed, the slave server enters a dormant state, and when the master server is idle while the slave server is busy, the master and slave servers switch roles; the method specifically comprises two strategies:
an aggressive strategy, used to prevent delay growth while preferentially guaranteeing computing capacity, specifically:
when the total task number is 0 — the master server is at idle power waiting to receive tasks and the slave server is at sleep power — and a task arrives, first judge whether processing it on the master server (the preferred choice) would exceed the task's maximum tolerable delay; if not, the task is processed on the master server and the master's count of executing tasks is incremented by 1; if so, the slave server is started, and it is further judged whether processing on the slave would exceed the maximum tolerable delay: if so, the task is uploaded to the cloud for processing, and if not, the slave server processes it;
when the total task number is 1 — the master server is at full-load power — a newly arrived task is preferentially migrated to the slave server, and it is judged whether this exceeds the task's maximum tolerable delay; if not, the slave server enters the running state and its task count T_e is set to 1; if so, the task enters the master server instead, and it is further judged whether that exceeds the maximum tolerable delay: if so, the task is uploaded to the cloud for processing, and otherwise it is processed on the master server; for each subsequently arriving task, the final placement is decided in turn according to the device ordering;
wherein the device ordering is computed as follows: from the master server's processing capability M_o, the slave server's processing capability M_e, the number of tasks being executed by the master T_o, and the number being executed by the slave T_e, a ranking index C is calculated as shown in formula (1); if C is larger than 1, the order is slave server, master server, cloud; otherwise it is master server, slave server, cloud;
a conservative strategy, used to achieve the lowest power consumption within a tolerable delay, specifically:
regardless of how many tasks each server is currently executing, when a new task arrives, first calculate whether, with all tasks running in parallel on the master server, any running task would reach its maximum tolerable delay; if none would, the new task is placed on the master server; if some would, the new task is tentatively placed on the slave server, and it is further judged whether any task running in parallel on the slave would reach its maximum tolerable delay: if none would, the new task stays on the slave server, and otherwise it is uploaded to the cloud for processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911099109.4A CN110850957B (en) | 2019-11-12 | 2019-11-12 | Scheduling method for reducing system power consumption through dormancy in edge computing scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911099109.4A CN110850957B (en) | 2019-11-12 | 2019-11-12 | Scheduling method for reducing system power consumption through dormancy in edge computing scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110850957A true CN110850957A (en) | 2020-02-28 |
CN110850957B CN110850957B (en) | 2021-04-30 |
Family
ID=69601480
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911099109.4A Active CN110850957B (en) | 2019-11-12 | 2019-11-12 | Scheduling method for reducing system power consumption through dormancy in edge computing scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110850957B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112543481A (en) * | 2020-11-23 | 2021-03-23 | 中国联合网络通信集团有限公司 | Method, device and system for balancing calculation force load of edge node |
CN113268135A (en) * | 2021-04-19 | 2021-08-17 | 瑞芯微电子股份有限公司 | Low-power-consumption standby method and device |
WO2024021888A1 (en) * | 2022-07-26 | 2024-02-01 | 中兴通讯股份有限公司 | Processing method for computing task, and first device and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109121151A (en) * | 2018-11-01 | 2019-01-01 | 南京邮电大学 | Distributed discharging method under the integrated mobile edge calculations of cellulor |
CN109495929A (en) * | 2017-09-12 | 2019-03-19 | 华为技术有限公司 | A kind of method for processing business, mobile edge calculations equipment and the network equipment |
CN110109745A (en) * | 2019-05-15 | 2019-08-09 | 华南理工大学 | A kind of task cooperation on-line scheduling method for edge calculations environment |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109495929A (en) * | 2017-09-12 | 2019-03-19 | 华为技术有限公司 | A kind of method for processing business, mobile edge calculations equipment and the network equipment |
CN109121151A (en) * | 2018-11-01 | 2019-01-01 | 南京邮电大学 | Distributed discharging method under the integrated mobile edge calculations of cellulor |
CN110109745A (en) * | 2019-05-15 | 2019-08-09 | 华南理工大学 | A kind of task cooperation on-line scheduling method for edge calculations environment |
Non-Patent Citations (4)
Title |
---|
CONSTANDINOS X. MAVROMOUSTAKIS等: "Socially Oriented Edge Computing for Energy Awareness in IoT Architectures", 《IEEE COMMUNICATIONS MAGAZINE》 * |
HUY TRINH等: "Energy-Aware Mobile Edge Computing and Routing for Low-Latency Visual Data Processing", 《IEEE TRANSACTIONS ON MULTIMEDIA》 * |
JUAN FANG ET AL: "Latency aware online tasks scheduling policy for edge computing", 《JOURNAL OF PHYSICS: CONFERENCE SERIES》 * |
LINGFANG GAO: "Joint Computation Offloading and Prioritized Scheduling in MEC", 《HTTPS://SCHOLARWORKS.SJSU.EDU/ETD_PROJECTS/615》 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112543481A (en) * | 2020-11-23 | 2021-03-23 | 中国联合网络通信集团有限公司 | Method, device and system for balancing calculation force load of edge node |
CN112543481B (en) * | 2020-11-23 | 2023-09-15 | 中国联合网络通信集团有限公司 | Method, device and system for balancing computing force load of edge node |
CN113268135A (en) * | 2021-04-19 | 2021-08-17 | 瑞芯微电子股份有限公司 | Low-power-consumption standby method and device |
WO2024021888A1 (en) * | 2022-07-26 | 2024-02-01 | 中兴通讯股份有限公司 | Processing method for computing task, and first device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110850957B (en) | 2021-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110850957B (en) | Scheduling method for reducing system power consumption through dormancy in edge computing scene | |
Changtian et al. | Energy-aware genetic algorithms for task scheduling in cloud computing | |
CN102622273B (en) | Self-learning load prediction based cluster on-demand starting method | |
CN111611062B (en) | Cloud-edge collaborative hierarchical computing method and cloud-edge collaborative hierarchical computing system | |
CN105868004B (en) | Scheduling method and scheduling device of service system based on cloud computing | |
CN113993218A (en) | Multi-agent DRL-based cooperative unloading and resource allocation method under MEC architecture | |
CN106775949B (en) | Virtual machine online migration optimization method capable of sensing composite application characteristics and network bandwidth | |
CN112214301B (en) | Smart city-oriented dynamic calculation migration method and device based on user preference | |
CN110266512A (en) | A kind of fast resource configuration method of mobile edge calculations migratory system | |
Wu et al. | Meccas: Collaborative storage algorithm based on alternating direction method of multipliers on mobile edge cloud | |
CN105847385B (en) | A kind of cloud computing platform dispatching method of virtual machine based on operation duration | |
CN107132903B (en) | Energy-saving management implementation method, device and network equipment | |
He et al. | DROI: Energy-efficient virtual network embedding algorithm based on dynamic regions of interest | |
Xu et al. | Computation offloading algorithm for cloud robot based on improved game theory | |
Jin et al. | A virtual machine scheduling strategy with a speed switch and a multi-sleep mode in cloud data centers | |
Wang et al. | Energy-efficient collaborative optimization for VM scheduling in cloud computing | |
Aiwen et al. | Energy-optimal task offloading algorithm of resources cooperation in mobile edge computing | |
CN110308991B (en) | Data center energy-saving optimization method and system based on random tasks | |
CN113296953B (en) | Distributed computing architecture, method and device of cloud side heterogeneous edge computing network | |
CN114301911B (en) | Task management method and system based on edge-to-edge coordination | |
CN113342462B (en) | Cloud computing optimization method, system and medium integrating limitation periodic quasi-dormancy | |
Kliazovich et al. | Simulating communication processes in energy-efficient cloud computing systems | |
CN112764883A (en) | Energy management method of cloud desktop system based on software definition | |
Wang et al. | Batch arrival based performance evaluation of a VM scheduling strategy in cloud computing | |
Jin et al. | A hybrid energy saving strategy with LPI and ALR for energy-efficient Ethernet |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||