CN106550036B - An energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method - Google Patents

An energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method

Info

Publication number
CN106550036B
CN106550036B (application CN201610966411.5A)
Authority
CN
China
Prior art keywords
scheduling
server
schedule
task
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610966411.5A
Other languages
Chinese (zh)
Other versions
CN106550036A (en)
Inventor
陈铭松
吴庭明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Normal University filed Critical East China Normal University
Priority to CN201610966411.5A priority Critical patent/CN106550036B/en
Publication of CN106550036A publication Critical patent/CN106550036A/en
Application granted granted Critical
Publication of CN106550036B publication Critical patent/CN106550036B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 - Hypervisors; Virtual machine monitors
    • G06F9/45558 - Hypervisor-specific management and integration aspects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 - Hypervisors; Virtual machine monitors
    • G06F9/45558 - Hypervisor-specific management and integration aspects
    • G06F2009/4557 - Distribution of virtual machine instances; Migration and load balancing

Abstract

The invention discloses an energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method. The method comprises: first, generating an initial task schedule for the task requests submitted by all users using the first-fit algorithm (FF) or the modified best-fit decreasing algorithm (MBFD); second, proposing a heuristic evolutionary optimization method that iteratively optimizes the initial schedule, finally obtaining a schedule that significantly reduces data center energy consumption and improves the user request acceptance rate. The invention can effectively find near-optimal resource allocation and scheduling solutions for the task requests issued by cloud computing users and effectively reduce data center energy consumption.

Description

An energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method
Technical field
The invention belongs to the field of computer science and is concerned with cloud computing resource allocation and task scheduling problems; it particularly relates to an energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method.
Background technique
A number of cloud computing resource allocation and scheduling methods currently exist, as follows:
Using dynamic voltage and frequency scaling (Dynamic Voltage and Frequency Scaling, DVFS) and dynamic power management (Dynamic Power Management, DPM). Both are hardware techniques. The processors of some servers currently support DVFS, which means that when processor utilization is low, the processor voltage or frequency is lowered, reducing processor performance and keeping the processor in a low-power state; when the processor load is high, the voltage or frequency is raised, improving processor performance, at which point the processor power consumption is also relatively high. By dynamically adjusting processor voltage or frequency according to the processor load, a reduction in energy consumption is achieved. DPM means that when a server component carries no load, it is switched to a sleep or off state, thereby saving energy. However, to guarantee server stability and performance, many vendors currently do not equip their servers with these technologies; therefore, in private clouds inside enterprises or organizations built from commodity servers, these technologies cannot be applied.
Reducing energy consumption through dynamic server consolidation. Since a server still consumes considerable energy when its resource utilization is low, energy efficiency can be improved and energy consumption reduced by raising server resource utilization. The emergence of virtualization technology allows virtual machines to be migrated dynamically; using this technology, server consolidation can be achieved by migrating the virtual machines on lightly loaded servers onto other lightly loaded servers, so that idle servers can be shut down and the resource utilization of the remaining servers is raised. Although this dynamic server consolidation technique can improve server resource utilization, virtual machine migration inevitably affects the user experience, for example by prolonging task execution time; therefore this method is not suitable for scenarios with tight time constraints.
Summary of the invention
The object of the present invention is to provide an energy-efficiency-oriented heuristic cloud resource allocation and scheduling method that can effectively find near-optimal resource allocation and scheduling solutions for the task requests issued by cloud computing users and effectively reduce data center energy consumption.
The specific technical solution for realizing the object of the invention is as follows:
An energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method, characterized in that the method comprises the following specific steps:
Step 1: initialization phase
To accelerate the generation of the task schedule, use the modified best-fit decreasing method (MBFD) or the first-fit method (FF) to generate an initial schedule for all user tasks received by the cloud computing data center.
Step 2: optimization phase
Instantiate several mutually independent schedule optimizers; each schedule optimizer takes the initial schedule generated in Step 1 as input and independently performs local optimization, and all the independent schedule optimizers cooperate to produce a near-optimal schedule in the global search space. Specifically, this includes:
(1) Single schedule optimizer optimization: based on the input initial schedule, a single schedule optimizer iteratively applies the three operations of task selection, server grouping, and schedule evolution to perform local optimization on it;
(2) Multi-optimizer cooperative optimization: after each schedule optimizer produces its locally optimal schedule, the schedules produced by all the schedule optimizers are compared and the schedule with the lowest energy consumption and highest task acceptance rate is selected; taking this schedule as the new initial schedule, the operation of step (1) is performed again. This cycle repeats, and when the operation of step (1) has been performed 10-15 times, a near-optimal schedule in the global search space is obtained.
Generating the initial schedule means generating an initial allocation mapping from user tasks to physical servers.
The single schedule optimizer, after receiving the initial schedule, determines according to the initial schedule the server resources that the data center allocates to each user task; meanwhile, each server together with the tasks assigned to it constitutes the local schedule on that server.
The task selection operation selects a target task for each server using the initial schedule: on each server, compute for each task the sum of the overlapped execution times between it and all the other tasks on that server; the task with the minimum sum of overlapped execution times is selected as the target task of that server.
The server grouping operation randomly pairs the data center servers, so that each group contains two different servers; using the initial schedule input in Step 1, the local schedule information of the two servers in each server group constitutes the local schedule of that server group.
The schedule evolution operation: in each server group, the evolution operation is performed to generate 4 different offspring schedules; each offspring schedule is generated as follows:
(1) The initial schedule of Step 1 is taken as the first offspring schedule;
(2) The target task of the first server is migrated onto the second server, and the local schedule information corresponding to this operation in the initial schedule is modified, yielding the second offspring schedule;
(3) The target task of the second server is migrated onto the first server, and the local schedule information corresponding to this operation in the initial schedule is modified, yielding the third offspring schedule;
(4) The target tasks of the two servers are exchanged, and the local schedule information corresponding to this operation in the initial schedule is modified, yielding the fourth offspring schedule;
After the 4 offspring schedules have been generated, the single schedule optimizer simulates each offspring schedule, calculates its energy consumption and task acceptance rate, and selects the offspring schedule with the lowest energy consumption and highest task acceptance rate to replace the original initial schedule.
The number of iterations of the task selection, server grouping, and schedule evolution operations is 30-50.
The single schedule optimizer of the present invention is an iterative procedure: after each iteration it selects the offspring schedule with the lowest energy consumption and highest task acceptance rate as the new initial schedule and repeats the task selection, server grouping, and schedule evolution operations until a given number of iterations has been reached.
In the multi-optimizer cooperative optimization of the present invention, each schedule optimizer must perform its random grouping operation independently; only then is it guaranteed that a near-optimal schedule can be found in the global search space.
The beneficial effects of the present invention are: under the constraint of fixed physical server resources in a cloud computing data center, the present invention can find a near-optimal schedule, greatly reducing energy consumption and improving the user request acceptance rate.
Detailed description of the invention
Fig. 1 is the flowchart of the present invention;
Fig. 2 is a schematic diagram of the server grouping of the present invention;
Fig. 3 is the flowchart of the schedule evolution operation of the present invention;
Fig. 4 shows the optimization execution process of a single schedule optimizer of the present invention.
Specific embodiment
The present invention is described in further detail below in conjunction with specific embodiments and the accompanying drawings.
The present invention is an energy-efficiency-oriented heuristic cloud resource allocation and scheduling method that generates an energy-efficient schedule for the task requests submitted by users. First, an initial schedule is generated by an initialization method; the initial schedule is then optimized by an evolutionary optimization process: a target task is selected for each server, the servers are randomly grouped, offspring local schedules are generated for each group, and the optimal local schedule, selected by simulation, replaces the schedule of the original group. Multiple optimization executors work independently to find the globally optimal schedule.
The present invention includes the following steps; the detailed process is shown in Fig. 1:
Step 1: initialization phase
To accelerate the generation of the task schedule, an existing method, such as the modified best-fit decreasing method (MBFD) or the first-fit method (FF), is used to generate an initial schedule for all user tasks received by the cloud computing data center.
Step 2: optimization phase
In this phase, multiple mutually independent schedule optimizers are instantiated; each schedule optimizer takes the initial schedule generated in Step 1 as input and independently performs local optimization, and all the schedule optimizers cooperate to produce a near-optimal schedule in the global search space. Specifically, this includes:
(1) Single schedule optimizer optimization: based on the initial schedule input in this phase, a single schedule optimizer iteratively applies the three operations of task selection, server grouping, and schedule evolution to perform local optimization on it.
(2) Multi-optimizer cooperative optimization: after each schedule optimizer produces its locally optimal schedule, the schedules produced by all the schedule optimizers are compared and the schedule with the lowest energy consumption and highest task acceptance rate is selected; taking this schedule as the new initial schedule, the operation of step (1) is performed again. This cycle repeats, and when the operation of step (1) has been performed 10-15 times, a near-optimal schedule in the global search space is obtained.
The initialization phase in Step 1: the data center scheduler needs to generate a schedule for all received user requests, that is, to generate mapping pairs from user requests to servers, mapping each user request to one server. Enumerating all combinations of user-request-to-server mappings would be meaningless, so a method is needed to generate an energy-saving allocation as quickly as possible. Using an existing method, such as the best-fit method MBFD or the first-fit method FF, an initial schedule is generated for all user requests. However, the schedules produced by these two initialization methods usually still leave considerable room for optimization; for example, the energy consumption may still be high, or not all user requests can be accepted. Therefore, further methods can be applied to optimize the initial schedule and achieve these optimization targets.
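To make the initialization phase concrete, the following is a minimal first-fit (FF) style sketch, not the patented implementation itself; the Task and Server structures, the single-resource capacity model, and all identifiers are assumptions introduced here purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    tid: int
    start: int      # requested start time of the task (assumed integer time steps)
    end: int        # requested end time of the task
    demand: float   # normalized resource demand (assumed single-resource model)

@dataclass
class Server:
    sid: int
    capacity: float = 1.0
    tasks: list = field(default_factory=list)

    def load_at(self, t):
        # total demand of the tasks running on this server at time t
        return sum(x.demand for x in self.tasks if x.start <= t < x.end)

    def fits(self, task):
        # the task fits if the capacity is never exceeded during its interval
        return all(self.load_at(t) + task.demand <= self.capacity
                   for t in range(task.start, task.end))

def first_fit_initial_schedule(tasks, servers):
    """Assign each task to the first server on which it fits (FF-style initial schedule)."""
    mapping = {}                       # task id -> server id
    for task in tasks:
        for server in servers:
            if server.fits(task):
                server.tasks.append(task)
                mapping[task.tid] = server.sid
                break                  # a task that fits nowhere is simply left unaccepted
    return mapping
```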
The task selection in (1) of Step 2: a current server group contains two servers S1 and S2, each of which has initially been allocated several tasks. A target task is selected for each server as follows: on each server, compute for each task the sum of its parallel execution times with all the other tasks. For example, if the execution interval of task 1 is [s1, e1] and the execution interval of task 2 is [s2, e2], and the two intervals overlap (i.e. s2 < e1 and s1 < e2), then the two tasks share a parallel execution interval [max(s1, s2), min(e1, e2)] and their parallel execution time is min(e1, e2) - max(s1, s2). Other parallel execution relationships between two tasks also exist and are not enumerated here. After the sum of parallel execution times with all the other tasks has been computed for each task, the task with the smallest sum is selected as the target task. The idea behind this is that if a task on a server has a short parallel execution time with the other tasks, then the server is very likely to have low resource utilization while that task executes, so migrating this task away may allow the server to reach a sleep state during that period. Meanwhile, migrating the task with the shortest parallel execution time onto another server is least likely to overload that server. Through the above procedure a target task can be selected for each server. As shown in Fig. 3, server S1 selects task k and server S2 selects task m.
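Under the interval notation just introduced, the overlap computation and the target-task choice could be sketched as follows, building on the Task/Server sketch above; this is a simplified illustration rather than the exact procedure of the patent.

```python
def overlap_time(a, b):
    """Parallel execution time of two tasks with intervals [a.start, a.end) and [b.start, b.end)."""
    return max(0, min(a.end, b.end) - max(a.start, b.start))

def select_target_task(server):
    """Return the task on this server whose total overlap with all other tasks is minimal."""
    best, best_sum = None, float("inf")
    for t in server.tasks:
        total = sum(overlap_time(t, other) for other in server.tasks if other is not t)
        if total < best_sum:
            best, best_sum = t, total
    return best    # None if the server currently holds no tasks
```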
The server grouping in (1) of Step 2: enumerating all combinations of user-request-to-server mappings is a very time-consuming task, and it is difficult to simultaneously achieve optimal energy consumption and optimal user request acceptance rate. A heuristic method is therefore adopted that focuses on the local schedules within the initial schedule, optimizing the overall schedule by optimizing the local schedules. The specific approach is as follows: the servers in the initial schedule are randomly grouped in pairs, with two servers in each group, and each server has several tasks allocated to it. As shown in Fig. 2, server 1 and server n-1 form one server group, and servers k and m form another server group. In each server group, the two servers and the task sets on them constitute a local schedule.
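A possible sketch of the random pairwise grouping is given below; treating an odd leftover server as ungrouped for the round is an assumption made here for illustration.

```python
import random

def random_pairing(servers):
    """Randomly pair up the servers; each pair forms one server group for this round."""
    shuffled = servers[:]
    random.shuffle(shuffled)
    # if the number of servers is odd, the last server stays ungrouped this round
    return [(shuffled[i], shuffled[i + 1]) for i in range(0, len(shuffled) - 1, 2)]
```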
The schedule evolution in (1) of Step 2: once all servers have been randomly formed into pairs, local optimization of the local schedule within each server group can be carried out using the evolution idea from evolutionary algorithms. The specific approach is shown in Fig. 3: after the target tasks of a server group have been selected, the evolution operation is performed, and the two servers in the group generate 4 offspring schedules by exchanging or migrating tasks. To guarantee that the local schedule does not degrade after evolution, the original schedule is kept as one offspring. Then one server migrates its target task onto the other server, producing the second offspring schedule. Similarly, the second server migrates its target task onto the first server, producing the third offspring schedule. Finally, the two servers exchange their target tasks, producing the fourth offspring schedule. As shown in Fig. 3, the first offspring schedule is a copy of the original schedule; server S1 migrates task k onto server S2, producing the second offspring schedule; server S2 migrates task m onto S1, producing another offspring; finally, S1 and S2 exchange tasks k and m, producing the fourth schedule. After the server group has generated the 4 offspring schedules, each offspring local schedule is simulated, its energy consumption and task acceptance rate are calculated, and the offspring schedule with the lowest energy consumption and highest task acceptance rate is selected to replace the original schedule.
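A sketch of this four-offspring evolution step for one server group follows, building on the sketches above. The simulate callback, assumed here to return an (energy, acceptance_rate) pair for a candidate local schedule, stands in for the simulation the patent uses to score each offspring.

```python
import copy

def move(src, dst, tid):
    """Move the task with id tid from server src to server dst (assumed helper)."""
    task = next(t for t in src.tasks if t.tid == tid)
    src.tasks.remove(task)
    dst.tasks.append(task)

def evolve_group(s1, s2, simulate):
    """Generate the four offspring local schedules for a server pair and return the best one."""
    k = select_target_task(s1)
    m = select_target_task(s2)

    offspring = [copy.deepcopy((s1, s2))]          # 1st offspring: the original local schedule
    if k is not None:                              # 2nd offspring: migrate k from S1 to S2
        c1, c2 = copy.deepcopy((s1, s2))
        move(c1, c2, k.tid)
        offspring.append((c1, c2))
    if m is not None:                              # 3rd offspring: migrate m from S2 to S1
        c1, c2 = copy.deepcopy((s1, s2))
        move(c2, c1, m.tid)
        offspring.append((c1, c2))
    if k is not None and m is not None:            # 4th offspring: exchange k and m
        c1, c2 = copy.deepcopy((s1, s2))
        move(c1, c2, k.tid)
        move(c2, c1, m.tid)
        offspring.append((c1, c2))

    # lowest energy wins; ties are broken by the highest acceptance rate
    def score(pair):
        energy, acceptance = simulate(pair)
        return (energy, -acceptance)
    return min(offspring, key=score)
```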
The single schedule optimizer optimization in (1) of Step 2: since a single schedule optimizer groups all the servers in the initial schedule, it can generate an evolved local schedule on every pair of servers by performing the evolution operation on each group; merging all the local schedules then constitutes a new schedule of the data center for all user requests. To further optimize this schedule, the single schedule optimizer takes the newly generated schedule as its initial schedule and repeats the grouping and evolution operations a fixed number of times, finally producing a schedule with high energy efficiency. The algorithm shown in Fig. 4 describes the process of a single schedule optimizer in detail, where servers denotes all the servers of the data center, tasks denotes all the user requests received by the data center, mapping is the initial schedule produced by the initialization algorithm, i.e. the mapping pairs from user requests to servers, and niter is the number of optimization iterations performed by the single schedule optimizer.
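The single-optimizer loop of Fig. 4 could look roughly like the sketch below; the names servers and niter follow the description of the figure, while representing the schedule directly by the per-server task lists from the FF sketch above is an assumption of this illustration.

```python
def single_optimizer(servers, niter, simulate):
    """One schedule optimizer: repeatedly pair the servers, evolve each pair, adopt the best offspring.

    The current schedule is carried implicitly in the per-server task lists."""
    for _ in range(niter):                          # typically 30-50 iterations per the patent
        for s1, s2 in random_pairing(servers):
            best_s1, best_s2 = evolve_group(s1, s2, simulate)
            # adopt the winning local schedule for this server pair
            s1.tasks, s2.tasks = best_s1.tasks, best_s2.tasks
    return servers
```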
The multi-optimizer cooperative optimization in (2) of Step 2: since a single schedule optimizer iteratively optimizes its own schedule, it easily falls into a local optimum. To avoid this, multiple executors are started at the same time and allowed to optimize the initial schedule independently; because of the randomness in the grouping, the cooperation of multiple schedule optimizers avoids local optima as far as possible.
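The cooperative use of several independent optimizers might then be organized as in the sketch below; the sequential execution (rather than truly parallel executors), the score callback over a whole schedule, and the parameter names are all assumptions of this illustration.

```python
import copy

def cooperative_optimize(servers, n_optimizers, rounds, niter, simulate, score):
    """Run several independent optimizers on copies of the schedule and keep the best result.

    score(servers) is assumed to return (total_energy, -acceptance_rate) so that min()
    prefers low energy and, on ties, high acceptance."""
    best = copy.deepcopy(servers)
    for _ in range(rounds):                         # 10-15 outer rounds per the patent
        candidates = [single_optimizer(copy.deepcopy(best), niter, simulate)
                      for _ in range(n_optimizers)]
        best = min(candidates, key=score)           # the winner seeds the next round
    return best
```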
The present invention can find a near-optimal schedule under the constraint of fixed server resources in a cloud computing data center, greatly reducing energy consumption and improving the user request acceptance rate.

Claims (3)

1. An energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method, characterized in that the method comprises the following specific steps:
Step 1: initialization phase
Use the modified best-fit decreasing method MBFD or the first-fit method FF to generate an initial schedule for all user tasks received by the cloud computing data center;
Step 2: optimization phase
Instantiate several mutually independent schedule optimizers; each schedule optimizer takes the initial schedule generated in Step 1 as input and independently performs local optimization, and all the independent schedule optimizers cooperate to produce a near-optimal schedule in the global search space; this specifically includes:
(1) Single schedule optimizer optimization: based on the input initial schedule, a single schedule optimizer iteratively applies the three operations of task selection, server grouping, and schedule evolution to perform local optimization on it;
(2) Multi-optimizer cooperative optimization: after each schedule optimizer produces its locally optimal schedule, the schedules produced by all the schedule optimizers are compared and the schedule with the lowest energy consumption and highest task acceptance rate is selected; taking this schedule as the new initial schedule, the operation of step (1) is performed again; this cycle repeats, and when the operation of step (1) has been performed 10-15 times, a near-optimal schedule in the global search space is obtained;
Wherein:
the task selection operation selects a target task for each server using the initial schedule: on each server, compute for each task the sum of the overlapped execution times between it and all the other tasks on that server; the task with the minimum sum of overlapped execution times is selected as the target task of that server;
the server grouping operation randomly pairs the data center servers, so that each group contains two different servers; using the initial schedule input in Step 1, the local schedule information of the two servers in each server group constitutes the local schedule of that server group;
the schedule evolution operation: in each server group, the evolution operation is performed to generate 4 different offspring schedules; each offspring schedule is generated as follows:
(1) The initial schedule of Step 1 is taken as the first offspring schedule;
(2) The target task of the first server is migrated onto the second server, and the local schedule information corresponding to this operation in the initial schedule is modified, yielding the second offspring schedule;
(3) The target task of the second server is migrated onto the first server, and the local schedule information corresponding to this operation in the initial schedule is modified, yielding the third offspring schedule;
(4) The target tasks of the two servers are exchanged, and the local schedule information corresponding to this operation in the initial schedule is modified, yielding the fourth offspring schedule;
After the 4 offspring schedules have been generated, the single schedule optimizer simulates each offspring schedule, calculates its energy consumption and task acceptance rate, and selects the offspring schedule with the lowest energy consumption and highest task acceptance rate to replace the original initial schedule.
2. The method according to claim 1, characterized in that generating the initial schedule means generating an initial allocation mapping from user tasks to physical servers.
3. The method according to claim 1, characterized in that the single schedule optimizer, after receiving the initial schedule, determines according to the initial schedule the server resources that the data center allocates to each user task; meanwhile, each server together with the tasks assigned to it constitutes the local schedule on that server.
CN201610966411.5A 2016-10-28 2016-10-28 An energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method Active CN106550036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610966411.5A CN106550036B (en) 2016-10-28 2016-10-28 An energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610966411.5A CN106550036B (en) 2016-10-28 2016-10-28 An energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method

Publications (2)

Publication Number Publication Date
CN106550036A CN106550036A (en) 2017-03-29
CN106550036B true CN106550036B (en) 2019-05-17

Family

ID=58394511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610966411.5A Active CN106550036B (en) An energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method

Country Status (1)

Country Link
CN (1) CN106550036B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107436811A (en) * 2017-07-07 2017-12-05 Guangdong University of Technology Task migration method relating to task scheduling in mobile cloud problems
CN110008015B (en) * 2019-04-09 2022-09-30 中国科学技术大学 Online task dispatching and scheduling method with bandwidth limitation in edge computing system


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103677957A (en) * 2013-12-13 2014-03-26 重庆邮电大学 Cloud-data-center high-energy-efficiency virtual machine placement method based on multiple resources
WO2015099701A1 (en) * 2013-12-24 2015-07-02 Intel Corporation Cloud compute scheduling using a heuristic contention model
CN103957261A (en) * 2014-05-06 2014-07-30 湖南体运通信息技术有限公司 Cloud computing resource distributing method based on energy consumption optimization
CN105159762A (en) * 2015-08-03 2015-12-16 冷明 Greedy strategy based heuristic cloud computing task scheduling method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Energy-efficient virtual machine placement algorithm for cloud data centers; 豆育升 et al.; Journal of Chinese Computer Systems (《小型微型计算机系统》); 30 Nov. 2014; vol. 35, no. 11; pp. 81-84
Data migration strategy based on heuristic algorithms; 孙阳 et al.; Journal of Jilin Jianzhu University (《吉林建筑大学学报》); 30 Jun. 2016; vol. 33, no. 3; pp. 2543-2547

Also Published As

Publication number Publication date
CN106550036A (en) 2017-03-29

Similar Documents

Publication Publication Date Title
Hussain et al. Energy and performance-efficient task scheduling in heterogeneous virtualized cloud computing
Meshkati et al. Energy-aware resource utilization based on particle swarm optimization and artificial bee colony algorithms in cloud computing
Luo et al. Hybrid shuffled frog leaping algorithm for energy-efficient dynamic consolidation of virtual machines in cloud data centers
Guan et al. Energy efficient virtual network embedding for green data centers using data center topology and future migration
Ghetas A multi-objective Monarch Butterfly Algorithm for virtual machine placement in cloud computing
Gao et al. An energy-aware ant colony algorithm for network-aware virtual machine placement in cloud computing
Xu et al. Migration cost and energy-aware virtual machine consolidation under cloud environments considering remaining runtime
Chen et al. EONS: minimizing energy consumption for executing real-time workflows in virtualized cloud data centers
CN104679594A (en) Middleware distributed calculating method
Hanani et al. A multi-parameter scheduling method of dynamic workloads for big data calculation in cloud computing
Rasouli et al. EPBLA: energy-efficient consolidation of virtual machines using learning automata in cloud data centers
Peng et al. A reinforcement learning-based mixed job scheduler scheme for cloud computing under SLA constraint
CN106550036B (en) An energy-efficiency-oriented heuristic cloud computing resource allocation and scheduling method
Lin et al. A heuristic task scheduling algorithm for heterogeneous virtual clusters
CN107908464A (en) A kind of cloud computing workflow energy-saving scheduling method for considering reliability
Lv et al. TBTOA: A dag-based task offloading scheme for mobile edge computing
Bui et al. Energy efficiency in cloud computing based on mixture power spectral density prediction
Guo et al. Hydrozoa: Dynamic hybrid-parallel dnn training on serverless containers
Hu et al. Research of scheduling strategy on OpenStack
Jiang et al. Network-aware virtual machine migration based on gene aggregation genetic algorithm
Rasouli et al. Virtual machine placement in cloud systems using learning automata
Badr et al. Task consolidation based power consumption minimization in cloud computing environment
Liang et al. A high-applicability heterogeneous cloud data centers resource management algorithm based on trusted virtual machine migration
CN104683480A (en) Distribution type calculation method based on applications
Su et al. A power-aware virtual machine mapper using firefly optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant