AU2021103249A4 - A novel multi-level optimization for task scheduling and load balancing in cloud - Google Patents

A novel multi-level optimization for task scheduling and load balancing in cloud

Info

Publication number
AU2021103249A4
Authority
AU
Australia
Prior art keywords
cloud
task scheduling
load
optimization
load balancing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2021103249A
Inventor
Shahnawaz Ahmad
Dumala Anveshini
S. Devaraju
Pavithra G.
Sushma Jaiswal
Tarun JAISWAL
Shabana Mehfuz
Y. V. Raghavarao
Rabinarayan Satpathy
Manas Ranjan Senapati
Mandadi Srinivas
T. Vetriselvi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anveshini Dumala Dr
Devaraju S Dr
Mehfuz Shabana Dr
Raghavarao Y V Dr
Senapati Manas Ranjan Dr
Srinivas Mandadi Dr
Vetriselvi T Dr
Original Assignee
Anveshini Dumala Dr
Devaraju S Dr
Mehfuz Shabana Dr
Raghavarao Y V Dr
Satpathy Rabinarayan Dr
Senapati Manas Ranjan Dr
Srinivas Mandadi Dr
Vetriselvi T Dr
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anveshini Dumala Dr, Devaraju S Dr, Mehfuz Shabana Dr, Raghavarao Y V Dr, Satpathy Rabinarayan Dr, Senapati Manas Ranjan Dr, Srinivas Mandadi Dr, Vetriselvi T Dr
Priority to AU2021103249A
Application granted
Publication of AU2021103249A4
Legal status: Ceased
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A NOVEL MULTI-LEVEL OPTIMIZATION FOR TASK SCHEDULING AND LOAD BALANCING IN CLOUD
ABSTRACT
Cloud Computing provides different computing resources to users via intelligently connected machines such as servers, virtual machines, and load balancers. The cloud responds to a user's or client's request by providing the requested resources or services. When multiple requests are received from users or clients, heavy load across the cloud's nodes can prevent the cloud from responding. The fundamental challenge in cloud computing is that user tasks must be scheduled as soon as they are requested, while still maintaining a high Quality of Service (QoS). Two significant challenges must be solved to improve cloud computing QoS: load balancing and task scheduling. The present invention disclosed herein is a Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud comprising User Requests (201), Load Balancer (202), Load Predictor (203), Normal Loads (204), Abnormal Loads (205), Firefly Optimization (206), Merge Sort-PSO (207), Task Assign (208), and Performance (209); it provides multi-level optimization for Task Scheduling and Load Balancing in a Cloud environment. The present invention uses Firefly Optimization and Merge Sort-Particle Swarm Optimization (PSO) to effectively schedule tasks and balance normal and abnormal loads. The overall workload is shared among all the Virtual Machines (VMs) in the cloud to maintain Load Balancing. The workloads are classified by the Density Long Short Term Memory (DLSTM) clustering method used as the Load Predictor. The proposed invention searches for the best-fit resource for each user request and allocates the required resource to that request. Under abnormal load conditions, the proposed invention shows better resource allocation to the Virtual Machines (VMs) in the cloud.
The present invention disclosed herein shows better performance in the form of a Task Scheduling Efficiency of 98%, a Task Scheduling Time of 46 ms, and an Energy Consumption of 56 Joules.
[Drawing sheet 2/3: Figure 3, Flow Chart for Firefly Optimization]

Description

[Drawing sheet 2/3: Figure 3, Flow Chart for Firefly Optimization]
A NOVEL MULTI-LEVEL OPTIMIZATION FOR TASK SCHEDULING AND LOAD BALANCING IN CLOUD
FIELD OF INVENTION
[0001] The present invention relates to the technical field of Computer Science.
[0002] Particularly, the present invention relates to a Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud, within the broader field of Cloud Computing in Computer Science.
[0003] More particularly, the present invention relates to a Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud that provides multi-level optimization for scheduling user tasks efficiently and distributing the load uniformly. Under both normal and abnormal workload conditions, this invention shares the overall workload among all the Virtual Machines (VMs) in the cloud to maintain Load Balancing. The multi-level optimization is provided by Firefly Optimization and Merge Sort-Particle Swarm Optimization (PSO).
BACKGROUND OF INVENTION
[0004] Cloud computing simplified the difficulty of creating a business environment through its efficient, scalable, and cost-effective technology offerings. The other side of this progress is that the cloud requires management focus proportional to its growth: the technical advantages come with obligations such as maintaining supply capability for elastic demand and delivering reliable services. Any company with dynamic IT demands can set itself up efficiently and economically using cloud technologies, allowing it to focus on its core objectives. The influence of cloud computing in nearly all fields shows that the technology's continuous development is beyond forecast. Cloud services today are integrated into people's lifestyles to improve their livelihoods.
[0005] Virtualization is the technology underlying the three different services provided by the cloud. It allows all cloud resources, such as servers, virtual platforms, virtual storage, and networks, to be made available as virtual resources, and it utilizes the scalable potential of computer resources to the maximum. By implementing proper load-balancing strategies, the major problem of virtualization can be addressed while maximizing the use of computing resources. All cloud requests are mapped to the corresponding virtual resource depending on current availability and design. The requested virtual resources are managed as Virtual Machines (VMs). The Virtual Machine Monitor (VMM), also referred to as a hypervisor, manages the rights of a VM from the time it is created to the time it is destroyed. To maximize their availability and potential through virtualization technology, the VMM manages a wide range of resources such as storage, platforms, and servers. Popular examples of VMMs are the Kernel-based Virtual Machine (KVM), Xen, VMware, and Hyper-V. Cloud computing is an important technology for data storage and internet access; the number of cloud tasks is large, and many tasks must be handled by the system.
[0006] Cloud resources are limited and are not easy to use in heterogeneous cloud settings with numerous features across different applications. There are two processes in cloud computing: resource allocation and task scheduling. Because of the dynamic nature of cloud environments, the load on virtual machines becomes imbalanced. In cloud settings, a task scheduler must balance the load across every accessible node. Load balancing between nodes involves using a wide range of resources to improve service quality. A great deal of research has been done on scheduling cloud-based tasks; however, under abnormal working conditions, task scheduling has not been sufficient. Optimization techniques have been implemented to reduce the time spent on cloud load balancing. In the cloud, because of the considerable number of user requests on the server, a resource-efficient load balance is demanding. Task scheduling efficiency is greatly increased with dynamic workload balancing on the Cloud Server (CS). Many innovative works have been carried out to balance loads in the cloud; however, resource usage was not reduced sufficiently.
[0007] Cloud Computing provides different computing resources to users via intelligently connected machines such as servers, virtual machines, and load balancers.
The cloud responds to a user's or client's request by providing the requested resources or services. When multiple requests are received from users or clients, heavy load across the cloud's nodes can prevent the cloud from responding. The main challenge with cloud computing is that user tasks must be scheduled as soon as they are requested by users, while also maintaining Quality of Service (QoS). To improve the QoS of cloud computing, two major issues must be addressed: task scheduling and load balancing. There is therefore a need for multiple optimization methods to obtain better Task Scheduling Efficiency and load balancing.
SUMMARY OF INVENTION
[0008] The present invention of the disclosure, a Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud comprising User Requests (201), Load Balancer (202), Load Predictor (203), Normal Loads (204), Abnormal Loads (205), Firefly Optimization (206), Merge Sort-PSO (207), Task Assign (208), and Performance (209), in accordance with the main exemplary embodiment of the present disclosure and the accompanying drawing, provides multi-level optimization for scheduling user tasks efficiently and distributing the load uniformly. Under both normal and abnormal workload conditions, this invention shares the overall workload among all the Virtual Machines (VMs) in the cloud to maintain Load Balancing. The multi-level optimization is provided by Firefly Optimization and Merge Sort-Particle Swarm Optimization (PSO). Users submit their tasks as User Requests (201) through applications from anywhere, at any time. Clients submit their User Requests (201) to the Cloud Server (CS) in the Cloud platform only through the Load Balancer (202). The request rate is determined upon receiving the requests: the Load Predictor (203) determines the rate of the User Requests (201). The Density Long Short Term Memory (DLSTM) clustering method is used as the Load Predictor (203) in the present disclosure. If the task request rate is low, the tasks are clustered as normal; otherwise they are clustered as abnormal. The Normal Loads (204) are the normal workload requests received from the user (201), and multiple simultaneous requests from the user (201) are treated as Abnormal Loads (205). Under Normal Load (204) conditions, free virtual machines are available to provide the service, whereas under Abnormal Load (205) conditions they may not be.
[0009] Firefly Optimization (206) and Merge Sort-PSO (207) are the multiple optimization methods used in the main embodiment to schedule the user's requested tasks and to distribute the workload equally among the Virtual Machines of the Cloud. When Normal Load (204) requests are received, Firefly Optimization (206) models flashing fireflies to find the optimal number of Virtual Machines available in the Cloud Server of the Cloud platform; each Virtual Machine (VM)'s brightness is computed and ranked to assign the tasks. When Abnormal Load (205) requests are received, Merge Sort-PSO (207) uses dynamic resource allocation with a merge-sort method to maintain workload balance. Both optimization methods provide multistage scheduling of resources and workload balancing at different load levels. In the present invention, both optimizers can handle any type of workload-balancing condition, and the invention can be made such that if one optimizer fails, the other can handle all workload conditions. The major advantage of this invention is the use of two optimizers in multiple stages; the optimizers can also be cascaded to facilitate this multistaging capability. The Performance (209) metrics of the invention are its endorsement: the present invention shows better performance in the form of a Task Scheduling Efficiency of 98%, a Task Scheduling Time of 46 ms, and an Energy Consumption of 56 Joules for 300 user requests.
[0010] The Summary of the Invention, together with the attached drawings and the Detailed Description of the Invention, describes the present invention at various levels of detail; the inclusion or omission of components or sections in this Summary is not meant to limit the scope of the present disclosure. For a clearer understanding of the current disclosure, read this summary alongside the detailed description.
BRIEF DESCRIPTION OF DRAWINGS
[0011] The accompanying illustrations are included in this specification and form part of it. The drawings display examples of the current disclosure and, read together with the description, help to explain the principles of the disclosure. The drawings are for illustration only and do not limit the scope of the present disclosure. Elements sharing a reference numeral are similar but not necessarily identical; conversely, different reference numerals may be used to classify related components. Some embodiments may lack certain elements and/or components, whereas others may use elements or components not depicted in the drawings.
[0012] Figure 1 illustrates General Resource Scheduling in Cloud Computing comprising main elements such as Users (101), Load Balancer (102), Resource State (103), Rescheduling (104), Action (105), and Resource Pool (106), in accordance with another exemplary embodiment of the present disclosure, to explain scheduling user tasks and distributing loads uniformly in cloud computing under abnormal workload conditions. This drawing shows the minimum steps required in a cloud environment for task scheduling and load balancing; the invention is not limited to this drawing, and the illustration is provided to assist comprehension of the disclosure and should not be construed as restricting its depth, nature, or applicability.
[0013] Figure 2 illustrates the present invention of the disclosure, a Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud comprising User Requests (201), Load Balancer (202), Load Predictor (203), Normal Loads (204), Abnormal Loads (205), Firefly Optimization (206), Merge Sort-PSO (207), Task Assign (208), and Performance (209), in accordance with the main exemplary embodiment of the present disclosure. This drawing is provided to assist comprehension of the present disclosure: some elements and/or components may not be present in all embodiments, and others may be used in ways different from those depicted. The use of singular terminology to describe a component or element may encompass a plural number of such components or elements, depending on the context, and vice versa.
[0014] Figure 3 illustrates the Flow Chart for Firefly Optimization, in accordance with another exemplary embodiment of the present disclosure, to explain Firefly Optimization in scheduling and balancing the workloads under normal workload requests of the disclosed system. This drawing shows one of the optimizations of the system disclosed herein; the invention is not limited to this flow chart, and the illustration is provided to assist comprehension of the disclosure and should not be construed as restricting its depth, nature, or applicability.
[0015] Figure 4 illustrates the Flow Chart for Merge Sort PSO, in accordance with another exemplary embodiment of the present disclosure, to explain the second optimization used in the invention to handle abnormal workload conditions. This drawing shows the flow of Particle Swarm Optimization for effective task scheduling and load balancing in the disclosed system; the invention is not limited to this drawing, and the illustration is provided to assist comprehension of the disclosure and should not be construed as restricting its depth, nature, or applicability.
DETAILED DESCRIPTION OF INVENTION
[0016] The invention will become better known, and objects other than those listed above will become clear, when the following thorough description of the invention is considered together with the accompanying drawings. Embodiments of the current disclosure will now be described using the accompanying drawings as a guide; they are provided so that a person versed in the art can fully appreciate the current disclosure. Several specifics relating to various components and processes are set out to offer a thorough understanding of embodiments of the current disclosure. As those versed in the art will recognize, the details provided in the embodiments should not be considered to limit the scope of the current disclosure. The order of stages in the disclosed procedure and process should not be interpreted as mandating that they be carried out in the order described or depicted; additional or alternative steps may also be performed.
[0017] Figure 1 illustrates General Resource Scheduling in Cloud Computing comprising main elements such as Users (101), Load Balancer (102), Resource State (103), Rescheduling (104), Action (105), and Resource Pool (106), in accordance with another exemplary embodiment of the present disclosure, to explain task scheduling and load balancing in cloud computing. This drawing shows the minimum steps required in a cloud environment for task scheduling and load balancing. The Users (101), or clients, send requests to the cloud server to access data from cloud storage. The Users (101) may send multiple requests to the cloud server, and the cloud server needs to respond immediately; all received requests must be scheduled. The Users (101) submit their tasks through applications from anywhere, at any time. There are numerous Data Centers (DCs) available in the cloud to process the workloads; to achieve low latency and fast response time, the nearest data center is chosen, and the task is directed there and completed. If a task is not routed to the nearest DC, latency and other QoS (Quality of Service) metrics such as response time may be affected, resulting in a Service Level Agreement (SLA) violation. The Load Balancer (102) is composed of the following components: Resource Mapping, Resource Allocation, Task Scheduling, and Task Execution. In Resource Mapping, tasks are mapped to appropriate resources based on the QoS requirements specified in the user's SLA. The Cloud Task Management system uses a portal to store information about submitted user tasks and maintains a queue to manage and execute them; resources are allocated to tasks for further execution based on these details. A Task Handler in the data center receives the user task, and the task execution status is checked in the Resource State (103).
If the number of Provisioned Resources (PrR) is less than the number of Required Resources (ReR), more resources are required; the operation is then Rescheduled (104) and executed using additional resources from the Resource Pool (106). Only after successful execution of tasks in Action (105) are the resources released back to the Resource Pool (106), and the Scheduler is prepared to execute upcoming user tasks.
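The resource-state check and rescheduling step above can be sketched in Python as follows. This is only an illustrative stand-in, not part of the patent: the function name `ensure_resources` and the parameters `provisioned`, `required`, and `resource_pool` are assumptions made for the sketch.

```python
# Hypothetical sketch of the Resource State (103) check and Rescheduling (104)
# step: if Provisioned Resources (PrR) < Required Resources (ReR), draw the
# shortfall from the Resource Pool (106). Names are illustrative assumptions.

def ensure_resources(provisioned: int, required: int, resource_pool: list) -> list:
    """Return the resources used to run the task, pulling extra
    resources from the pool when provisioned < required."""
    allocated = [f"prov-{i}" for i in range(provisioned)]  # already-provisioned stand-ins
    shortfall = required - provisioned
    if shortfall > 0:
        # Reschedule: take additional resources from the Resource Pool (106).
        extra, resource_pool[:] = resource_pool[:shortfall], resource_pool[shortfall:]
        allocated.extend(extra)
    return allocated

pool = ["vm-a", "vm-b", "vm-c"]
print(ensure_resources(provisioned=2, required=4, resource_pool=pool))
print(pool)  # the pool shrinks by the rescheduled shortfall
```

After successful execution (Action, 105), the extra resources would be appended back to the pool, mirroring the release step described above.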
[0018] Figure 2 illustrates the present invention of the disclosure, a Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud comprising User Requests (201), Load Balancer (202), Load Predictor (203), Normal Loads (204), Abnormal Loads (205), Firefly Optimization (206), Merge Sort-PSO (207), Task Assign (208), and Performance (209), in accordance with the main exemplary embodiment of the present disclosure. Users submit their tasks as User Requests (201) through applications from anywhere, at any time. There are numerous Data Centers (DCs) available in the cloud to process the workloads; to achieve low latency and fast response time, the nearest data center is chosen, and the task is directed there and completed. If a task is not routed to the nearest DC, latency and other QoS (Quality of Service) metrics such as response time may be affected, resulting in a Service Level Agreement (SLA) violation. The Load Balancer (202) is composed of the following components: Resource Mapping, Resource Allocation, Task Scheduling, and Task Execution. In Resource Mapping, tasks are mapped to appropriate resources based on the QoS requirements specified in the user's SLA. The Cloud Task Management system uses a portal to store information about submitted user tasks and maintains a queue to manage and execute them; resources are allocated to tasks for further execution based on these details. Clients submit their User Requests (201) to the Cloud Server (CS) in the Cloud platform only through the Load Balancer (202). The request rate is determined upon receiving the requests: the Load Predictor (203) determines the rate of the User Requests (201). The Density Long Short Term Memory (DLSTM) clustering method is used as the Load Predictor (203) in the present disclosure. This clustering method predicts the pattern of each incoming task requested by the Users (201); each task has a different pattern, and tasks are clustered based on similar patterns. If the task request rate is low, the tasks are clustered as normal; otherwise they are clustered as abnormal.
The Normal Loads (204) are the normal workload requests received from the user (201), and multiple simultaneous requests from the user (201) are treated as Abnormal Loads (205). Under Normal Load (204) conditions, free virtual machines are available to provide the service, whereas under Abnormal Load (205) conditions they may not be.
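The patent does not disclose the internals of the DLSTM clustering method used as the Load Predictor (203). As a deliberately simplified, hypothetical stand-in, the normal/abnormal split it produces can be approximated by comparing the incoming request count against the number of free VMs; the function name, threshold rule, and labels below are assumptions, not the patent's method.

```python
# Hypothetical stand-in for the Load Predictor (203): the real invention uses
# a Density LSTM (DLSTM) clustering method, whose details are not given here.
# This sketch only reproduces the observable behavior: a low request rate is
# clustered as "normal", otherwise as "abnormal".

def classify_load(task_requests: list, free_vms: int) -> str:
    """Cluster an incoming batch as 'normal' when the request count is
    below the number of free VMs, otherwise as 'abnormal'."""
    return "normal" if len(task_requests) < free_vms else "abnormal"

print(classify_load(["t1", "t2"], free_vms=5))                    # routed to Firefly Optimization (206)
print(classify_load([f"t{i}" for i in range(10)], free_vms=5))    # routed to Merge Sort-PSO (207)
```

In the architecture of Figure 2, this decision is what routes a batch either to Firefly Optimization (206) or to Merge Sort-PSO (207).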
[0019] Firefly Optimization (206) and Merge Sort-PSO (207) are the multiple optimization methods used in the main embodiment to schedule the user's requested tasks and to distribute the workload equally among the Virtual Machines of the Cloud. The scheduling of tasks and the balancing of loads are effectively handled by these two optimization methods in the present disclosure. When Normal Load (204) requests are received, Firefly Optimization (206) models flashing fireflies to find the optimal number of Virtual Machines available in the Cloud Server of the Cloud platform; each Virtual Machine (VM)'s brightness is computed and ranked to assign the tasks. When Abnormal Load (205) requests are received, Merge Sort-PSO (207) uses dynamic resource allocation with a merge-sort method to maintain workload balance. Initially, there are some VMs in each cluster, and generally all clusters hold an equal number of VMs. Merge Sort-PSO (207) then computes a Least Best (LB) value associated with each cluster, where LB refers to the least-loaded VM in the cluster. The Global Best (GB) value is calculated as the least value among all the LBs. Merge Sort-PSO (207) searches for the best-fit resource for each user request and allocates the required resource to that request. The Task Assign (208) step to the optimal available virtual machine is carried out by Firefly Optimization (206) under normal workload conditions; under abnormal workload conditions, Merge Sort-PSO (207) assigns the tasks to the VMs. Both optimization methods provide multistage scheduling of resources and workload balancing at different load levels. In the present invention, both optimizers can handle any type of workload-balancing condition,
and the invention can be made such that if one optimizer fails, the other can handle all workload conditions. The major advantage of this invention is the use of two optimizers in multiple stages; the optimizers can also be cascaded to facilitate this multistaging capability. The Performance (209) metrics of the invention are its endorsement: the present invention shows better performance in the form of a Task Scheduling Efficiency of 98%, a Task Scheduling Time of 46 ms, and an Energy Consumption of 56 Joules for 300 user requests.
[0020] Figure 3 illustrates the Flow Chart for Firefly Optimization, in accordance with another exemplary embodiment of the present disclosure, to explain Firefly Optimization in scheduling and balancing the workloads under normal workload requests of the disclosed system. When normal loads are detected by the Load Predictor (203), the load predictor determines whether the number of requested tasks (Treq) is less than the number of Virtual Machines (Nvms); if so, Firefly Optimization (206) is initialized to schedule the tasks onto the VMs. The Firefly Optimizer (206) evaluates a brightness function for each VM; the available VMs shine brighter, and a rank is assigned based on the brightness values of the VMs. The top-ranked VMs are selected and the tasks are scheduled onto them. Once the scheduled tasks are executed, or after all tasks have been assigned, the next request is taken up for assignment to the VMs. The user tasks are thus scheduled and balanced with minimized SLA violations.
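The Figure 3 flow can be sketched in Python as below. The patent does not give the exact brightness formula, so the one used here (brightness inversely proportional to current load, so lightly loaded VMs shine brighter) is an assumption, as are the function and variable names.

```python
# Hedged sketch of the Figure 3 flow: compute a "brightness" for each VM,
# rank VMs by brightness, and assign the incoming task to the top-ranked VM.
# The brightness function below is an assumed stand-in for the patent's.

def brightness(vm_load: float) -> float:
    # Assumption: lightly loaded VMs are brighter (higher score).
    return 1.0 / (1.0 + vm_load)

def assign_task(vm_loads: dict) -> str:
    """Return the id of the top-ranked (brightest) VM."""
    ranked = sorted(vm_loads, key=lambda vm: brightness(vm_loads[vm]), reverse=True)
    return ranked[0]

loads = {"vm1": 0.7, "vm2": 0.2, "vm3": 0.9}
print(assign_task(loads))  # vm2 is least loaded, hence brightest
```

In the full flow, the selected VM's load would be updated after assignment and the next Treq would repeat the brightness-rank-assign cycle.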
[0021] Figure 4 illustrates the Flow Chart for Merge Sort PSO, in accordance with another exemplary embodiment of the present disclosure, to explain the second optimization used in the invention to handle abnormal workload conditions. When users send their tasks to the cloud, the cloud broker has all the information about the resource needs of the tasks. These tasks are distributed into fixed-size bags, and each bag of tasks is allocated to a selected VM rather than allocating a single task to a single VM, since single-task allocation can tremendously increase the complexity of the algorithm. The controller assigns the bag of tasks to the selected GB in phase two and then executes it in the next phase. For the next allocation, the values of LB and GB need to be updated; the proposed optimizer finds the LB value only in the cluster that held the GB in the last iteration. Updating only a single cluster when finding the LB value speeds up the entire process. This value then participates in the calculation of the GB in the next iteration, and the same process is repeated until all tasks are allocated. After some time, all the VMs in the clusters may change their status in terms of memory and CPU, so the values of all LBs and the GB must be refreshed. For this, a counter variable is used: whenever the counter reaches an arbitrary maximum value, all LBs are updated and the GB is recalculated accordingly. This step distinguishes the algorithm and is responsible for finding a globally optimized solution, so the algorithm never ends in a local best solution. The updated LBs and GB are taken as input for subsequent iterations of task execution.
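The LB/GB bookkeeping described above can be sketched as follows. This is a simplified illustration under stated assumptions: each cluster holds (vm_id, load) pairs, each assigned bag raises the chosen VM's load by the bag size (unit task cost), only the GB's cluster is re-scanned between iterations, and a counter triggers a full refresh of all LBs. The names and the `refresh_every` value are illustrative, not taken from the patent.

```python
# Sketch of Merge Sort PSO's LB/GB loop: LB = least-loaded VM per cluster,
# GB = least-loaded among the LBs. After a bag is assigned to the GB, only
# that cluster's LB is recomputed; a counter periodically refreshes all LBs
# so the algorithm does not settle on stale (locally best) values.

def least_loaded(cluster):
    return min(cluster, key=lambda vm: vm[1])

def schedule_bags(clusters, bags, refresh_every=3):
    lbs = [least_loaded(c) for c in clusters]
    assignments, counter = [], 0
    for bag in bags:
        gb_idx = min(range(len(lbs)), key=lambda i: lbs[i][1])  # GB = least of LBs
        vm_id, _ = lbs[gb_idx]
        assignments.append((bag, vm_id))
        # Assumed cost model: the bag adds len(bag) units of load to the VM.
        cluster = clusters[gb_idx]
        for j, (vid, load) in enumerate(cluster):
            if vid == vm_id:
                cluster[j] = (vid, load + len(bag))
        counter += 1
        if counter >= refresh_every:
            lbs = [least_loaded(c) for c in clusters]  # full refresh of all LBs
            counter = 0
        else:
            lbs[gb_idx] = least_loaded(cluster)  # update only the GB's cluster
    return assignments

clusters = [[("vm1", 2), ("vm2", 5)], [("vm3", 1), ("vm4", 4)]]
bags = [["t1", "t2"], ["t3"], ["t4", "t5"]]
print(schedule_bags(clusters, bags))
```

The periodic full refresh corresponds to the counter step described above, which keeps every LB current so the GB choice reflects the clusters' true state.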
[0022] In order to provide a more detailed understanding of embodiments of the invention, some specific details are set out in the present exemplary description. An ordinarily skilled artisan will, however, appreciate that the present innovation can be implemented without any of the precise details presented here. The main embodiments of the present disclosure, A Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud, provide multi-level optimization that schedules users' tasks efficiently and distributes the load uniformly. Under both normal and abnormal workload conditions, this invention shares the overall workload among all the Virtual Machines (VMs) in the cloud to maintain load balancing. The method and arrangement of the present embodiment are provided in the above layout and shall not limit the scope of the present disclosure. The present invention is described with a limited number of embodiments; accordingly, the scope of the present disclosure should be determined by the appended claims.
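The overall two-level dispatch can be summarized in a short sketch. The `classify` callable is a stand-in for the DLSTM Load Predictor, whose interface the disclosure does not specify; the Tq < Nvms condition follows paragraph [0020], and routing every other case to Merge Sort PSO is an assumption for illustration.

```python
def dispatch(tasks, n_vms, classify):
    """Route a batch of requests to the optimizer for its predicted load.

    `classify` is a hypothetical stand-in for the DLSTM load predictor,
    assumed to return "normal" or "abnormal" for a batch of tasks.
    """
    if classify(tasks) == "normal" and len(tasks) < n_vms:
        return "firefly"        # normal loads: Firefly Optimization (206)
    return "merge_sort_pso"     # abnormal loads: Merge Sort PSO (207)

print(dispatch(["t1", "t2"], 5, lambda ts: "normal"))
print(dispatch(["t1", "t2", "t3"], 2, lambda ts: "abnormal"))
```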

Claims (5)

A NOVEL MULTI-LEVEL OPTIMIZATION FOR TASK SCHEDULING AND LOAD BALANCING IN CLOUD

CLAIMS

We claim:
1. A Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud comprising User Requests (201), Load Balancer (202), Load Predictor (203), Normal Loads (204), Abnormal Loads (205), Firefly Optimization (206), Merge Sort PSO (207), Task Assign (208), and Performance (209); providing multi-level optimization for Task Scheduling and Load Balancing in a Cloud environment.
2. A Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud as claimed in claim 1, wherein the Density Long Short Term Memory (DLSTM) clustering method is used as the Load Predictor to predict the pattern of each incoming task requested by the Users, and DLSTM classifies the incoming tasks as normal or abnormal loads.
3. A Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud as claimed in claim 1, wherein the flashing fireflies in Firefly Optimization indicate how to identify the optimal number of Virtual Machines (VMs) in the Cloud Server (CS) of the Cloud platform; Firefly Optimization calculates and ranks the brightness of each Virtual Machine (VM) in order to assign tasks.
4. A Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud as claimed in claim 1, wherein Merge-Sort PSO uses dynamic resource allocation with the merge-sort method to maintain workload balance; the VMs are grouped into clusters of equal size, and Merge-Sort PSO computes a Least Best (LB) value associated with each cluster, where LB refers to the least loaded VM in the cluster; the Global Best (GB) is calculated as the least value among all the LBs; Merge-Sort PSO searches for the best-fit resource for a user request and allocates the required resource to that request.
5. A Novel Multi-Level Optimization for Task Scheduling and Load Balancing in Cloud as claimed in claim 1, wherein the optimizers can also be cascaded to facilitate multi-staging capability; the present invention disclosed herein shows improved performance in the form of a Task Scheduling Efficiency of 98%, a Task Scheduling Time of 46 ms, and an Energy Consumption of 56 Joules for 300 user requests.
AU2021103249A 2021-06-10 2021-06-10 A novel multi-level optimization for task scheduling and load balancing in cloud Ceased AU2021103249A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2021103249A AU2021103249A4 (en) 2021-06-10 2021-06-10 A novel multi-level optimization for task scheduling and load balancing in cloud


Publications (1)

Publication Number Publication Date
AU2021103249A4 true AU2021103249A4 (en) 2021-09-23

Family

ID=77745972

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021103249A Ceased AU2021103249A4 (en) 2021-06-10 2021-06-10 A novel multi-level optimization for task scheduling and load balancing in cloud

Country Status (1)

Country Link
AU (1) AU2021103249A4 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114580864A (en) * 2022-02-21 2022-06-03 石河子大学 Multi-element energy storage distribution method, system and equipment for comprehensive energy system
CN114580864B (en) * 2022-02-21 2023-12-05 石河子大学 Multi-element energy storage distribution method, system and equipment for comprehensive energy system
WO2024033912A1 (en) * 2022-08-08 2024-02-15 Esh Os Ltd Anonymous centralized transfer or allocation of resources


Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry