CN106550036A - Energy-saving heuristic cloud computing resource allocation and scheduling method - Google Patents
Energy-saving heuristic cloud computing resource allocation and scheduling method
- Publication number
- CN106550036A CN106550036A CN201610966411.5A CN201610966411A CN106550036A CN 106550036 A CN106550036 A CN 106550036A CN 201610966411 A CN201610966411 A CN 201610966411A CN 106550036 A CN106550036 A CN 106550036A
- Authority
- CN
- China
- Prior art keywords
- scheduling
- server
- schedule
- task
- initial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
Abstract
The invention discloses an energy-saving heuristic cloud computing resource allocation and scheduling method. The method first produces an initial task schedule for all user-submitted task requests using the first-fit algorithm FF or the modified best-fit decreasing algorithm MBFD; it then applies a heuristic evolutionary optimization procedure that iteratively refines the initial schedule, finally obtaining a schedule that significantly reduces data center energy consumption and improves the user request acceptance rate. For the task requests submitted by cloud computing users, the invention efficiently finds a near-optimal resource allocation schedule and effectively reduces data center energy consumption.
Description
Technical field
The invention belongs to the field of computer science and concerns cloud computing resource allocation and task scheduling; in particular, it relates to an energy-saving heuristic cloud computing resource allocation and scheduling method.
Background technology
Many cloud computing resource allocation and scheduling methods already exist. The main ones are as follows:
Methods using Dynamic Voltage and Frequency Scaling (DVFS) and Dynamic Power Management (DPM). Both are hardware techniques. The processors of most current servers support DVFS: when processor utilization is low, the processor's voltage or frequency is lowered, reducing its performance so that it stays in a low-power state; when the processor's load is high, the voltage or frequency is raised to restore performance, and the processor's power consumption is then relatively high. By dynamically adjusting the processor's voltage or frequency according to its load, energy consumption is reduced. DPM switches server components to a sleep or off state when they carry no load, thereby saving energy. However, many vendors do not equip their servers with these technologies, in order to guarantee stable server performance; consequently, in the private clouds of enterprises or organizations built from commodity servers, these techniques cannot be applied.
Methods that reduce energy consumption by dynamic server consolidation. Because a server still consumes relatively high energy when its resource utilization is low, raising server resource utilization improves energy efficiency and reduces consumption. The advent of virtualization allows virtual machines to be migrated dynamically, and servers can be consolidated with this technique: the virtual machines on lightly loaded servers are migrated onto other lightly loaded servers, so that the idle servers can be shut down and the resource utilization of part of the remaining servers is raised. Although dynamic server consolidation improves resource utilization, virtual machine migration inevitably affects the user experience, for example by prolonging task execution time, so this method is unsuitable for scenarios with tight time constraints.
Summary of the invention
The object of the invention is to provide an energy-saving heuristic cloud resource allocation and scheduling method that can efficiently find a near-optimal resource allocation schedule for the task requests submitted by cloud computing users, effectively reducing data center energy consumption.
The specific technical scheme realizing the object of the invention is:
An energy-saving heuristic cloud computing resource allocation and scheduling method, characterized in that the method comprises the following specific steps:
Step 1: Initialization phase
To speed up the generation of a task schedule, the modified best-fit decreasing method MBFD or the first-fit method FF is used to produce an initial schedule for all user tasks received by the cloud computing data center.
Step 2: Optimization phase
Several independent schedule optimizers are instantiated. Each optimizer independently performs local optimization, taking the initial schedule produced in Step 1 as input, and all the independent optimizers cooperate to produce a near-optimal schedule over the global search space. Specifically:
(1) Single-optimizer optimization: starting from the input initial schedule, each schedule optimizer iteratively applies the three operations of task selection, server grouping, and schedule evolution to optimize it locally.
(2) Multi-optimizer cooperative optimization: after every schedule optimizer has produced its locally optimal schedule, the schedule with the lowest energy consumption and highest task acceptance rate is selected by comparison from among the schedules produced by all the optimizers, taken as the new initial schedule, and the operation of step (1) is carried out again. This cycle repeats; when the operation of step (1) has been performed 10-15 times, the near-optimal schedule over the global search space is obtained.
Generating the initial schedule means producing an initial mapping from user tasks to physical servers.
After receiving the initial schedule, a single schedule optimizer uses it to determine the server resources the data center has allocated to each user task; meanwhile, each server together with the tasks assigned to it constitutes a local schedule on that server.
The task selection operation uses the initial schedule to select a target task for each server: on each server, for every task the sum of its overlapping execution times with all the other tasks on that server is computed, and the task with the smallest sum of overlapping execution times is selected as that server's target task.
The server grouping operation randomly partitions the data center's servers into pairs, obtaining server groups of two distinct servers; using the initial schedule from Step 1, the local scheduling information of the two servers in each group constitutes the group's local schedule.
The schedule evolution operation, within each server group, performs an evolution operation that produces four different offspring schedules. Each offspring schedule is produced as follows:
(1) the initial schedule of Step 1 serves as the first offspring schedule;
(2) the target task of the first server is migrated to the second server, and the local scheduling information corresponding to this operation in the initial schedule is modified, yielding the second offspring schedule;
(3) the target task of the second server is migrated to the first server, and the local scheduling information corresponding to this operation in the initial schedule is modified, yielding the third offspring schedule;
(4) the two servers exchange their target tasks, and the local scheduling information corresponding to this operation in the initial schedule is modified, yielding the fourth offspring schedule.
After the four offspring schedules are produced, the single schedule optimizer simulates each one, computes its energy consumption and task acceptance rate, and selects the offspring schedule with the lowest energy consumption and highest task acceptance rate to replace the original initial schedule.
The iterative application of the three operations of task selection, server grouping, and schedule evolution is repeated 30-50 times.
The single schedule optimizer of the invention operates iteratively: after each iteration it selects the offspring schedule with the lowest energy consumption and highest task acceptance rate as the new initial schedule, then repeats the task selection, server grouping, and schedule evolution operations until a fixed number of iterations is reached.
The multi-optimizer cooperative optimization of the invention requires each schedule optimizer to perform its random grouping operation independently; only then is a near-optimal schedule searched for over the global space.
The beneficial effect of the invention is that, under the constraint of the fixed physical server resources of a cloud computing data center, it finds a near-optimal schedule, greatly reducing energy consumption and improving the user request acceptance rate.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 is a schematic diagram of the server grouping of the invention;
Fig. 3 is the flow chart of the schedule evolution operation of the invention;
Fig. 4 illustrates the execution procedure of single-optimizer optimization in the invention.
Specific embodiments
The invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
The invention is an energy-saving heuristic cloud resource allocation and scheduling method that produces an energy-saving schedule for the task requests submitted by users. An initial schedule is first produced by an initialization method and then optimized by an evolutionary optimization process: a target task is selected for each server, the servers are grouped at random, each group produces offspring local schedules, and simulation is used to select the best local schedule to replace the one in the original group. Multiple optimizers work independently to find a near-optimal schedule over the global search space.
The invention comprises the following steps; the overall flow is shown in Fig. 1:
Step 1: Initialization phase
To speed up the generation of a task schedule, an existing method such as the modified best-fit decreasing method (MBFD) or the first-fit method (FF) is used to produce an initial schedule for all user tasks received by the cloud computing data center.
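As a concrete illustration of the initialization phase, the following sketch shows first-fit (FF) placement. The task and server model (one scalar resource demand per task, one capacity per server) is an assumption for illustration; the patent does not fix these data structures.

```python
# Minimal first-fit (FF) initial-schedule sketch. Modeling each task by a
# single scalar demand and each server by a single capacity is an
# illustrative assumption, not the patent's definition.

def first_fit_schedule(task_demands, server_capacity, n_servers):
    """Map each task index to the first server with enough free capacity.

    Returns a dict {task_index: server_index}; tasks that fit nowhere
    are left out of the mapping (i.e., rejected).
    """
    free = [server_capacity] * n_servers
    mapping = {}
    for t, demand in enumerate(task_demands):
        for s in range(n_servers):
            if free[s] >= demand:
                free[s] -= demand
                mapping[t] = s
                break
    return mapping

# Example: four tasks, three servers of capacity 10.
print(first_fit_schedule([4, 7, 5, 6], 10, 3))  # → {0: 0, 1: 1, 2: 0, 3: 2}
```

As the text notes, a schedule produced this way is only a starting point: it typically still wastes energy and may reject requests, which is what the optimization phase addresses.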
Step 2: Optimization phase
In this phase multiple independent schedule optimizers are instantiated. Each optimizer independently performs local optimization with the initial schedule produced in Step 1 as input, and all the optimizers cooperate to produce a near-optimal schedule over the global search space. Specifically:
(1) Single-optimizer optimization: starting from the initial schedule input in this phase, each schedule optimizer iteratively applies the task selection, server grouping, and schedule evolution operations to optimize it locally.
(2) Multi-optimizer cooperative optimization: after every schedule optimizer has produced its locally optimal schedule, the schedule with the lowest energy consumption and highest task acceptance rate is selected by comparison from among the schedules produced by all the optimizers and taken as the new initial schedule, and the operation of step (1) is carried out again. This cycle repeats; when the operation of step (1) has been performed 10-15 times, the near-optimal schedule over the global search space is obtained.
Regarding the initialization phase of Step 1: the data center scheduler must produce a schedule for all the user requests it receives, that is, a set of user-request-to-server mapping pairs in which every user request is mapped onto one server. Enumerating all combinations of user-request-to-server mappings is infeasible, so some method is needed to produce an energy-saving schedule as quickly as possible. Existing methods such as the best-fit method MBFD or the first-fit method FF can produce an initial schedule for all user requests, but the schedules they produce usually still leave a large optimization margin: for example, energy consumption remains high and not all user requests can be accepted. Further methods can therefore be applied to optimize the initial schedule toward these objectives.
Regarding the task selection in (1) of Step 2: suppose a server group currently contains two servers S1 and S2, each of which was initially assigned several tasks. A target task is selected for each server as follows. On each server, for every task the sum of its parallel execution times with all the other tasks is computed. For example, if the execution interval of task 1 is [s1, e1] and the execution interval of task 2 is [s2, e2], and they satisfy s1 < e2 and s2 < e1, then the two tasks execute in parallel during the interval [max(s1, s2), min(e1, e2)], and their parallel execution time is min(e1, e2) - max(s1, s2). Other parallel execution relations between two tasks exist and are not enumerated here. After the sum of parallel execution times with all other tasks has been computed for each task, the task with the smallest sum is selected as the target task. The idea is this: if a task on a server has a short parallel execution time with the other tasks, then the server has a relatively high probability of low resource utilization while that task runs, so migrating the task away may let the server reach a sleep state during that period. At the same time, migrating the task with the shortest parallel execution time onto another server gives a relatively low probability of overloading that server. Through the above process a target task is selected for each server. As shown in Fig. 3, server S1 selects task k and server S2 selects task m.
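The target-task computation above can be sketched as follows. Tasks are modeled as (start, end) execution intervals, matching the intervals [s, e] used in the text; representing a server's task list as a plain list of such tuples is otherwise an assumption.

```python
# Task selection sketch: on one server, pick the task whose summed parallel
# (overlapping) execution time with all other tasks is smallest.

def overlap(a, b):
    """Parallel execution time of two tasks given as (start, end) intervals."""
    s = max(a[0], b[0])
    e = min(a[1], b[1])
    return max(0, e - s)  # 0 when the intervals do not overlap

def target_task(intervals):
    """Return the index of the task with the minimum summed overlap."""
    sums = []
    for i, a in enumerate(intervals):
        total = sum(overlap(a, b) for j, b in enumerate(intervals) if j != i)
        sums.append(total)
    return sums.index(min(sums))

tasks_on_s1 = [(0, 10), (2, 8), (9, 15)]
print(target_task(tasks_on_s1))  # → 2 (the (9, 15) task barely overlaps)
```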
Regarding the server grouping in (1) of Step 2: enumerating all combinations of user-request-to-server mappings is a very time-consuming task, and it is hard to optimize energy consumption and the user request acceptance rate simultaneously. A heuristic method is therefore adopted that focuses on the local schedules within the initial schedule and optimizes the overall schedule by optimizing the local ones. The specific practice is as follows: the servers in the initial schedule are randomly grouped in pairs; each group contains two servers, each of which has several tasks assigned to it. As shown in Fig. 2, servers 1 and n-1 form one server group, and servers k and m form another. Within each server group, the two servers and their task sets constitute one local schedule.
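A minimal sketch of the random pairwise grouping, assuming servers are identified by indices; the patent does not say how an odd number of servers is handled, so leaving one server unpaired is an assumption.

```python
# Random pairwise server grouping sketch: shuffle the servers and take
# consecutive pairs. With an odd count, the last server is left unpaired
# (an assumption; the patent does not specify this case).
import random

def random_pairs(servers, rng=random):
    shuffled = list(servers)
    rng.shuffle(shuffled)
    pairs = [tuple(shuffled[i:i + 2]) for i in range(0, len(shuffled) - 1, 2)]
    leftover = shuffled[-1] if len(shuffled) % 2 else None
    return pairs, leftover

pairs, leftover = random_pairs(range(6))
print(pairs, leftover)  # e.g. [(3, 0), (5, 2), (1, 4)] None
```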
Regarding the schedule evolution in (1) of Step 2: once all servers have been randomly paired into server groups, the idea of evolution from evolutionary algorithms is applied to locally optimize the local schedule within each server group. The specific practice is shown in Fig. 3: after a server group has selected its target tasks, the evolution operation is performed, and the two servers in the group produce four offspring schedules by exchanging or migrating tasks. To guarantee that the local schedule does not degrade after evolution, the original schedule is preserved as one offspring. Then one server migrates its target task to the other server, producing the second offspring schedule; similarly, the second server migrates its target task to the first, producing the third offspring schedule; finally, the two servers exchange target tasks, producing the fourth offspring schedule. As shown in Fig. 3, the first offspring schedule is a copy of the original schedule; server S1 migrates task k onto server S2 to produce the second offspring; server S2 migrates task m onto S1 to produce another offspring; finally, S1 and S2 exchange tasks k and m to produce the fourth. After the server group has produced the four offspring schedules, each offspring local schedule is simulated, its energy consumption and task acceptance rate are computed, and the offspring schedule with the lowest energy consumption and highest task acceptance rate replaces the original schedule.
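The four-offspring construction can be sketched as follows, modeling a schedule as a task-to-server mapping. The energy/acceptance simulator is abstracted behind a caller-supplied scoring function, since the patent leaves the simulation model unspecified.

```python
# Schedule evolution sketch for one server pair: build the four offspring
# (keep, migrate k to S2, migrate m to S1, swap). A schedule is modeled as
# {task: server}; the scoring function stands in for the simulator.

def offspring_schedules(schedule, s1, s2, target1, target2):
    """Return the four offspring of `schedule` for servers s1, s2 whose
    target tasks are target1 (on s1) and target2 (on s2)."""
    keep = dict(schedule)                                   # offspring 1: original copy
    move_to_s2 = dict(schedule); move_to_s2[target1] = s2   # offspring 2: k -> S2
    move_to_s1 = dict(schedule); move_to_s1[target2] = s1   # offspring 3: m -> S1
    swap = dict(schedule)
    swap[target1], swap[target2] = s2, s1                   # offspring 4: exchange k, m
    return [keep, move_to_s2, move_to_s1, swap]

def best_offspring(schedule, s1, s2, t1, t2, score):
    """Pick the offspring preferred by the scoring function (lower = better)."""
    return min(offspring_schedules(schedule, s1, s2, t1, t2), key=score)

sched = {"k": "S1", "a": "S1", "m": "S2"}
for child in offspring_schedules(sched, "S1", "S2", "k", "m"):
    print(child)
```

Keeping the unmodified schedule as the first offspring is what guarantees the local schedule never degrades, exactly as the text states.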
Regarding the single-optimizer optimization in (1) of Step 2: since a single schedule optimizer groups all the servers in the initial schedule, it can produce an evolved local schedule for every server pair by performing the evolution operation on each group, and merging all the local schedules yields the data center's new schedule for all user requests. To optimize this schedule further, the single optimizer takes the newly produced schedule as its initial schedule and repeats the grouping and evolution operations a fixed number of times, finally producing a schedule with high energy efficiency. The algorithm in Fig. 4 describes the single-optimizer procedure in detail, where servers is the set of all servers in the data center, tasks is the set of all user requests the data center receives, mapping is the initial schedule produced by the initialization algorithm, i.e., the set of user-request-to-server mapping pairs, and niter is the number of optimization iterations the single schedule optimizer performs.
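The structure of the single-optimizer loop (Fig. 4 is not reproduced here) can be sketched as follows; `select_target` and `evolve_pair` stand for the task selection and schedule evolution operations described above and are assumed interfaces, as is the schedule representation.

```python
# Single-optimizer loop sketch: for niter iterations, pair the servers at
# random, pick each server's target task, and replace each pair's local
# schedule with the best of its four offspring. The helper interfaces are
# assumptions matching the operations described in the text.
import random

def optimize(schedule, servers, select_target, evolve_pair, niter=30):
    for _ in range(niter):                       # text suggests 30-50 iterations
        order = list(servers)
        random.shuffle(order)                    # random pairwise grouping
        for i in range(0, len(order) - 1, 2):
            s1, s2 = order[i], order[i + 1]
            t1 = select_target(schedule, s1)     # task with least overlap on s1
            t2 = select_target(schedule, s2)
            schedule = evolve_pair(schedule, s1, s2, t1, t2)  # best of 4 offspring
    return schedule
```

Because evolution keeps the original local schedule as one offspring, each call to `evolve_pair` can only maintain or improve the schedule, so the loop is monotone.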
Regarding the multi-optimizer cooperative optimization in (2) of Step 2: because a single schedule optimizer iteratively optimizes only its own schedule, it easily becomes trapped in a local optimum. To avoid this, multiple optimizers are started simultaneously and allowed to optimize the initial schedule independently; thanks to the randomness of the grouping, the cooperating schedule optimizers avoid local optima as far as possible.
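A sketch of the cooperative loop under the same assumptions: `optimize` stands for one optimizer's full local optimization, and `score` (lower is better) stands for the simulated energy consumption combined with the task acceptance rate.

```python
# Multi-optimizer cooperation sketch: run several independent optimizers on
# the shared best schedule, pick the best result, feed it back, and repeat
# for 10-15 rounds as the text suggests. `optimizers` and `score` are
# assumed interfaces, not the patent's concrete definitions.

def cooperative_optimize(initial, optimizers, score, rounds=10):
    best = initial
    for _ in range(rounds):
        candidates = [opt(best) for opt in optimizers]  # independent local optima
        candidates.append(best)                         # never regress
        best = min(candidates, key=score)
    return best

# Toy usage: "optimizers" that subtract different amounts from a number,
# with the number itself as the score (lower = better).
opts = [lambda x: x - 1, lambda x: x - 3]
print(cooperative_optimize(100, opts, score=lambda x: x, rounds=10))  # → 70
```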
Under the constraint of a cloud computing data center's fixed server resources, the invention can search out a near-optimal schedule, greatly reducing energy consumption and improving the user request acceptance rate.
Claims (6)
1. An energy-saving heuristic cloud computing resource allocation and scheduling method, characterized in that the method comprises the following specific steps:
Step 1: Initialization phase
The modified best-fit decreasing method MBFD or the first-fit method FF is used to produce an initial schedule for all user tasks received by the cloud computing data center;
Step 2: Optimization phase
Several independent schedule optimizers are instantiated; each optimizer independently performs local optimization, taking the initial schedule produced in Step 1 as input, and all the independent optimizers cooperate to produce a near-optimal schedule over the global search space; specifically comprising:
(1) single-optimizer optimization: starting from the input initial schedule, each schedule optimizer iteratively applies the three operations of task selection, server grouping, and schedule evolution to optimize it locally;
(2) multi-optimizer cooperative optimization: after every schedule optimizer has produced its locally optimal schedule, the schedule with the lowest energy consumption and highest task acceptance rate is selected by comparison from among the schedules produced by all the optimizers, taken as the new initial schedule, and the operation of step (1) is carried out again; this cycle repeats, and when the operation of step (1) has been performed 10-15 times, the near-optimal schedule over the global search space is obtained.
2. The method of claim 1, characterized in that generating the initial schedule means producing an initial mapping from user tasks to physical servers.
3. The method of claim 1, characterized in that a single schedule optimizer, after receiving the initial schedule, uses it to determine the server resources the data center has allocated to each user task; meanwhile, each server together with the tasks assigned to it constitutes a local schedule on that server.
4. The method of claim 1, characterized in that the task selection operation uses the initial schedule to select a target task for each server: on each server, for every task the sum of its overlapping execution times with all the other tasks on that server is computed, and the task with the smallest sum of overlapping execution times is selected as that server's target task.
5. The method of claim 1, characterized in that the server grouping operation randomly partitions the data center's servers into pairs, obtaining server groups of two distinct servers; using the initial schedule from Step 1, the local scheduling information of the two servers in each group constitutes the group's local schedule.
6. The method of claim 1, characterized in that the schedule evolution operation, within each server group, performs an evolution operation that produces four different offspring schedules, each produced as follows:
(1) the initial schedule of Step 1 serves as the first offspring schedule;
(2) the target task of the first server is migrated to the second server, and the local scheduling information corresponding to this operation in the initial schedule is modified, yielding the second offspring schedule;
(3) the target task of the second server is migrated to the first server, and the local scheduling information corresponding to this operation in the initial schedule is modified, yielding the third offspring schedule;
(4) the two servers exchange their target tasks, and the local scheduling information corresponding to this operation in the initial schedule is modified, yielding the fourth offspring schedule;
after the four offspring schedules are produced, the single schedule optimizer simulates each one, computes its energy consumption and task acceptance rate, and selects the offspring schedule with the lowest energy consumption and highest task acceptance rate to replace the original initial schedule.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610966411.5A CN106550036B (en) | 2016-10-28 | 2016-10-28 | Energy-saving heuristic cloud computing resource allocation and scheduling method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106550036A true CN106550036A (en) | 2017-03-29 |
CN106550036B CN106550036B (en) | 2019-05-17 |
Family
ID=58394511
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610966411.5A Active CN106550036B (en) | Energy-saving heuristic cloud computing resource allocation and scheduling method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106550036B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103677957A (en) * | 2013-12-13 | 2014-03-26 | 重庆邮电大学 | Cloud-data-center high-energy-efficiency virtual machine placement method based on multiple resources |
WO2015099701A1 (en) * | 2013-12-24 | 2015-07-02 | Intel Corporation | Cloud compute scheduling using a heuristic contention model |
CN103957261A (en) * | 2014-05-06 | 2014-07-30 | 湖南体运通信息技术有限公司 | Cloud computing resource distributing method based on energy consumption optimization |
CN105159762A (en) * | 2015-08-03 | 2015-12-16 | 冷明 | Greedy strategy based heuristic cloud computing task scheduling method |
Non-Patent Citations (2)
Title |
---|
SUN Yang, et al.: "Data migration strategy based on a heuristic algorithm", Journal of Jilin Jianzhu University * |
DOU Yusheng, et al.: "Energy-efficient virtual machine placement algorithm for cloud data centers", Journal of Chinese Computer Systems (小型微型计算机系统) * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107436811A (en) * | 2017-07-07 | 2017-12-05 | 广东工业大学 | Task migration method involving task scheduling in mobile cloud problems |
CN110008015A (en) * | 2019-04-09 | 2019-07-12 | 中国科学技术大学 | Online task dispatching and scheduling method with bandwidth limits in edge computing systems |
CN110008015B (en) * | 2019-04-09 | 2022-09-30 | 中国科学技术大学 | Online task dispatching and scheduling method with bandwidth limitation in edge computing system |
Also Published As
Publication number | Publication date |
---|---|
CN106550036B (en) | 2019-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chen et al. | Energy-efficient offloading for DNN-based smart IoT systems in cloud-edge environments | |
Liu et al. | Job scheduling model for cloud computing based on multi-objective genetic algorithm | |
Pourghebleh et al. | The importance of nature-inspired meta-heuristic algorithms for solving virtual machine consolidation problem in cloud environments | |
Wang et al. | A new multi-objective bi-level programming model for energy and locality aware multi-job scheduling in cloud computing | |
Jiang et al. | DataABC: A fast ABC based energy-efficient live VM consolidation policy with data-intensive energy evaluation model | |
CN112286677B (en) | Resource-constrained edge cloud-oriented Internet of things application optimization deployment method | |
Gharehpasha et al. | Virtual machine placement in cloud data centers using a hybrid multi-verse optimization algorithm | |
Ghetas | A multi-objective Monarch Butterfly Algorithm for virtual machine placement in cloud computing | |
Gao et al. | An energy-aware ant colony algorithm for network-aware virtual machine placement in cloud computing | |
Mao et al. | A multi-resource task scheduling algorithm for energy-performance trade-offs in green clouds | |
Xu et al. | Migration cost and energy-aware virtual machine consolidation under cloud environments considering remaining runtime | |
Ma et al. | Real-time virtual machine scheduling in industry IoT network: A reinforcement learning method | |
Lin et al. | A two-stage framework for the multi-user multi-data center job scheduling and resource allocation | |
Liang et al. | A low-power task scheduling algorithm for heterogeneous cloud computing | |
Supreeth et al. | Hybrid genetic algorithm and modified-particle swarm optimization algorithm (GA-MPSO) for predicting scheduling virtual machines in educational cloud platforms | |
Asghari et al. | Combined use of coral reefs optimization and multi-agent deep Q-network for energy-aware resource provisioning in cloud data centers using DVFS technique | |
Xu et al. | Optimized renewable energy use in green cloud data centers | |
Zhou et al. | Deep reinforcement learning-based algorithms selectors for the resource scheduling in hierarchical cloud computing | |
Goyal et al. | A proposed approach for efficient energy utilization in cloud data center | |
CN106550036B (en) | Energy-saving heuristic cloud computing resource allocation and scheduling method | |
Guo et al. | Hydrozoa: Dynamic hybrid-parallel dnn training on serverless containers | |
Zhang et al. | A two-stage container management in the cloud for optimizing the load balancing and migration cost | |
Ge et al. | Cloud computing task scheduling strategy based on improved differential evolution algorithm | |
Al Sallami et al. | Load balancing with neural network | |
Hu et al. | Research of scheduling strategy on OpenStack |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||