CN116302519A - Micro-service workflow elastic scheduling method, system and equipment based on container cloud platform - Google Patents


Publication number
CN116302519A
CN116302519A
Authority
CN
China
Prior art keywords
task
workflow
container
tasks
instance
Prior art date
Legal status
Pending
Application number
CN202310199403.2A
Other languages
Chinese (zh)
Inventor
王建东
李泽玉
李光夏
李云豆
张志为
严煜昆
刘磊
李烨城
彭勇毅
宋子健
Current Assignee
Qingdao Institute Of Computing Technology Xi'an University Of Electronic Science And Technology
Original Assignee
Qingdao Institute Of Computing Technology Xi'an University Of Electronic Science And Technology
Priority date
Filing date
Publication date
Application filed by Qingdao Institute Of Computing Technology Xi'an University Of Electronic Science And Technology
Priority to CN202310199403.2A
Publication of CN116302519A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention belongs to the technical field of cloud computing data processing, and discloses a micro-service workflow elastic scheduling method, system and equipment based on a container cloud platform. The method comprises the following steps: dividing workflows into DSSW and RIBW; dividing each workflow's deadline into sub-deadlines for its tasks; sequencing the tasks in the DSSW and RIBW workflow ready queues; adopting different allocation strategies to allocate the tasks of the DSSW and RIBW workflows to existing container instances or newly created containers, obtaining a scheduling scheme; and determining the mapping relation between containers and VMs with a multi-objective-optimized new-container-instance deployment algorithm, obtaining a deployment scheme. The method and the device can rapidly process user requests of an Internet of things system deployed on a container cloud platform and return results, reducing resource cost while guaranteeing the user SLA and improving resource utilization.

Description

Micro-service workflow elastic scheduling method, system and equipment based on container cloud platform
Technical Field
The invention belongs to the technical field of cloud computing data processing, and particularly relates to a micro-service workflow elastic scheduling method, system and equipment based on a container cloud platform.
Background
At present, micro-services serve as the core technology of Internet of things platform services and support diversified Internet of things applications. Application service providers (ASPs) divide large-scale workflow applications into many fine-grained, low-coupling micro-service tasks through the micro-service architecture, so that tasks can be quickly and independently scheduled and deployed according to different user requirements, improving the flexibility and agility of workflow applications. Task scheduling and elastic scaling are the key technologies of workflow scheduling in the container cloud: workflows with data-dependency constraints are scheduled to relevant containers for processing, and scaling is performed according to real-time demand to meet service quality. Considering minimizing cloud resource lease cost under the micro-service workflow's deadline constraint, how to schedule tasks and scale elastically on a container cloud platform is a key problem that Internet of things applications based on the micro-service architecture must solve in the cloud.
Systems for common Internet of things applications such as smart parks and smart hospitals mainly comprise two types of micro-service workflows: delay-sensitive streaming workflows (DSSW) and data-intensive batch workflows (RIBW). Most existing workflow scheduling algorithms pay attention only to DSSW scheduling or to RIBW dynamic scheduling, with no corresponding joint scheduling strategy, so the idle time of container instances cannot be fully utilized, causing resource waste. As for the dynamic nature of micro-service workflows, namely the uncertainty of task arrival times and execution times, existing methods often ignore these uncertainties, so workflow deadlines are exceeded and execution costs increase. Regarding elastic scaling of micro-services in the cloud, most workflow scheduling algorithms suit only a single-layer architecture (VM or container) and cannot perform task scheduling and elastic scaling at the container layer and the virtual machine layer simultaneously; a scaling algorithm not combined with task scheduling predicts a resource amount that differs from the actually scheduled amount and cannot obtain accurate scaling requirements, affecting service performance.
Through the above analysis, the problems and defects of the prior art are as follows: the prior art cannot resolve the difficulty of making optimal elastic scheduling decisions caused by the dynamic characteristics of micro-service workflows, and lacks a joint scheduling strategy that optimizes system service requests for Internet of things scenarios; existing scheduling methods suffer from high scheduling cost, low performance, and inconsistency with actual resources, which affects subsequent tasks.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides a method, a system and equipment for elastically scheduling micro-service workflow based on a container cloud platform.
The invention is realized as follows: a micro-service workflow elastic scheduling method based on a container cloud platform comprises the following steps:
firstly, dividing workflows into delay-sensitive streaming workflows DSSW and data-intensive batch workflows RIBW according to micro-service workflow meta-information; calculating the sub-deadline of each task in the workflow and adding it to the corresponding task pool;
secondly, adding ready tasks from the DSSW and RIBW task pools to ready queues and sorting them; adopting different allocation strategies to allocate existing containers or create new containers for the DSSW and RIBW workflows, obtaining a task scheduling scheme;
and finally, determining the mapping relation between containers and VMs with a multi-objective-optimized new-container-instance deployment algorithm, obtaining a deployment scheme.
Further, the micro-service workflow elastic scheduling method based on the container cloud platform comprises the following steps:
step one, classifying the workflow into a delay-sensitive streaming workflow DSSW and a data-intensive batch workflow RIBW according to whether the parameter in the workflow, namely the deadline required by a user, is a hard period; respectively adding tasks in the two types of workflows into corresponding task pools, respectively traversing the tasks in the task pools, respectively adding ready tasks into corresponding ready queues, and deleting the tasks from the task pools; assigning task in the workflow to a sub-deadline;
and step two, sequencing tasks in the DSSW workflow and RIBW workflow ready queues, wherein the ready tasks are tasks of which direct predecessor tasks are all completed, and the entrance tasks are ready tasks directly. Calculating the scheduling urgency of each ready task in a task pool according to the number of unassigned tasks on a critical path from a current task to an exit task in the task pool and the expected completion time of the tasks, and arranging the tasks in an order of non-descending urgency; non-descending order sorting is carried out on tasks in the RIBW according to the earliest starting time of the tasks;
and thirdly, allocating container examples for tasks in the DSSW and RIBW ready queues.
Further, the task allocation process comprises:
for DSSW: the service instance is selected according to the following three policies. 1) If there are instances in the free container set that can complete the task on time, the least costly instance is selected. 2) If a new instance is needed, the minimum speed minSpeed of a new container instance meeting the demand is calculated and compared with the speed of the highest-configuration VM: if minSpeed is greater, or is negative, the task is assigned to the existing or newly created instance with the highest configuration, choosing the instance with the smaller earliest finish time EFT; otherwise an instance is created with a speed slightly greater than minSpeed (configurations are typically integer multiples of discrete units, e.g., 500m for CPU and 512Mi for memory) and the task is assigned to it. 3) If a new instance must be created on a virtual machine that lacks the required image, the image is first pulled and the container started to execute the task;
for RIBW: calculating the cost of executing the task on each container running on the VMs, and selecting a suitable container instance for the task by minimum cost and idle time;
and step four, feeding back the state after task execution completes. Once a task t_i^s finishes executing, the uncertainty in its execution time and data transmission time no longer exists, and the estimates are replaced by the actual values; the ready time CRT(msi_{j,k}) of instance msi_{j,k} is updated. The sub-deadlines sd are divided at workflow initialization using the predicted execution time on the highest-configuration container, i.e., the execution time before a specific instance is assigned, so the sub-deadlines of the tasks remaining in the task pool must be updated after each task finishes. If all direct predecessors of a successor task are completed and the data transmitted after their execution has been received, the successor is removed from the task pool and added to the ready queue and its ready time is updated; steps two to four are repeated until the tasks in both task pools are completed, obtaining the scheduling scheme Scheduling;
and fifthly, determining a mapping relation between the new container and the VM, and deploying the created new container on the virtual machine.
Further, determining the mapping relationship between containers and VMs comprises:
sorting the new containers to be deployed in decreasing order of required resource amount, traversing all VMs that store the required image, and selecting, via a best-match algorithm, the VM with the minimum gap between its remaining resources and the resources required by the container;
if the existing VM resources cannot host all new containers, new VMs are rented to accommodate them, and deployment of the remaining containers is modeled as a variable-sized vector bin-packing problem; the mapping scheme takes multi-resource guarantee as the primary objective while trading off load balancing and dependency awareness, obtaining the deployment scheme. Multi-resource guarantee means the resource demands of the containers in the same VM must not exceed its capacity; container-dependency awareness means containers with dependencies are placed on one VM as far as possible; the multi-objective tradeoff maximizes the number of dependent containers co-located on one virtual machine while keeping the maximum resource utilization to a minimum.
If all container instances cannot be deployed on the existing VM instances, X new virtual machines are rented so that the total resource amount exceeds the resources required by the remaining containers; the new virtual machines in this set may be of several different types or of the same type. The deployment method is then called again until all container instances to be deployed are placed, obtaining the scaling scheme Autoscaling.
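A sketch of the best-match placement described above, assuming a two-dimensional (CPU, memory) resource vector and a simple sum-of-leftovers gap metric; all names and the gap metric are illustrative, not the patent's implementation:

```python
# Best-match (best-fit decreasing) deployment sketch: containers are placed
# largest-first on the VM whose remaining resources leave the smallest gap.

def best_fit_deploy(containers, vms):
    """containers: {name: (cpu, mem)}; vms: {name: [free_cpu, free_mem]}."""
    placement = {}
    # Largest resource demand first, as in the patent's ordering step.
    for name, (cpu, mem) in sorted(containers.items(),
                                   key=lambda kv: kv[1][0] + kv[1][1],
                                   reverse=True):
        best, best_gap = None, None
        for vm, free in vms.items():
            if free[0] >= cpu and free[1] >= mem:
                gap = (free[0] - cpu) + (free[1] - mem)  # assumed gap metric
                if best_gap is None or gap < best_gap:
                    best, best_gap = vm, gap
        if best is None:
            placement[name] = None   # would trigger renting a new VM
            continue
        vms[best][0] -= cpu
        vms[best][1] -= mem
        placement[name] = best
    return placement


vms = {"vm1": [2.0, 4.0], "vm2": [1.0, 1.0]}
placement = best_fit_deploy({"c1": (1.5, 2.0), "c2": (0.5, 0.5)}, vms)
# c1 only fits vm1; c2 then fits vm2 with the smaller leftover gap
```

The `None` branch is where the patent's variable-sized vector bin-packing step with newly rented VMs would take over.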
Another object of the present invention is to provide a container cloud platform-based micro service workflow elastic scheduling system for implementing the container cloud platform-based micro service workflow elastic scheduling method, the container cloud platform-based micro service workflow elastic scheduling system comprising:
the workflow initialization module, which parses a system user's request to the micro-service-architecture Internet of things system into task requests for cooperating micro-services; each request's workflow is classified as a delay-sensitive streaming workflow DSSW or a data-intensive batch workflow RIBW according to the deadline submitted by the user; tasks of the different types are added to task pools implemented with Kafka message queues, and when a task is ready it is consumed from the task pool's Topic message queue and added to the ready queue; each task in the workflow is assigned a sub-deadline according to its expected execution time PET, i.e., its execution time on the highest-configuration container instance, and its rank;
the task ordering module is used for ordering tasks in the delay-sensitive streaming workflow DSSW and data-intensive batch processing workflow RIBW ready queues;
the task allocation module, used for consuming messages from the Topic set by the ready queue and adopting different strategies for different tasks: allocating or creating an instance that minimizes task processing cost for the DSSW, and calculating the execution cost on all containers for the RIBW, selecting a suitable container instance for the task by minimum cost and idle time;
the online adjustment module, used for updating in time, after a task finishes executing, the meta information of tasks in the ready queue and the ready times of successor tasks and container instances;
and the container deployment module, used for determining the mapping relation between containers and VMs and deploying newly created containers to virtual machines.
it is a further object of the present invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the container cloud platform based micro-service workflow resilient scheduling method.
It is another object of the present invention to provide a computer readable storage medium storing a computer program, which when executed by a processor, causes the processor to perform the steps of the container cloud platform based micro-service workflow resilient scheduling method.
The invention further aims to provide an information data processing terminal which is used for realizing the micro-service workflow elastic scheduling system based on the container cloud platform.
In combination with the technical scheme and the technical problems to be solved, the technical scheme to be protected has the following advantages and positive effects:
first, aiming at the technical problems in the prior art and the difficulty of solving them, the technical problems solved by the claimed technical scheme are analyzed in detail below, closely combining the claimed scheme with the results and data of the research and development process; the technical effects brought about after solving these problems are creative. The specific description is as follows:
the method and the device can rapidly process the user request of the Internet of things system deployed by adopting the container cloud platform and return the result, reduce the resource use cost under the condition of ensuring the user SLA and improve the resource utilization rate.
The invention discloses an elastic scheduling method for micro-service workflows on a container cloud platform, which adopts the container as the carrier of micro-service operation, isolates runtime-environment interference through the container, and, by exploiting the container's rapid deployment and startup, greatly reduces micro-service initialization time and improves execution efficiency.
Aiming at the dynamic characteristics of micro-service workflows, the invention designs a task feedback mechanism: a task is mapped and allocated to a service instance only when it is ready, and all unready tasks are placed in a task pool. After the current task finishes executing, the states of tasks in the task pool are adjusted in real time according to actual execution conditions, reducing the influence of uncertain task execution or data-transmission times on subsequently mapped tasks.
Aiming at the scheduling problem of the two common workflow types in Internet of things application scenarios, namely delay-sensitive workflows and data-intensive workflows, the invention designs a joint scheduling strategy that schedules delay-sensitive workflows preferentially and processes data-intensive workflows in the containers' idle time.
The invention focuses on micro-service task scheduling and elastic container scaling in the container cloud platform: on the basis of guaranteeing user QoS, the scheduling algorithm schedules tasks to cost-minimizing instances as far as possible, predicts the CPU load at future moments from the current service-instance state, and makes optimal scaling decisions to increase the scheduling success rate and reduce resource rental cost.
Secondly, the technical scheme is regarded as a whole or from the perspective of products, and the technical scheme to be protected has the following technical effects and advantages:
the invention can rapidly process the user request in the common Internet of things system and return the result, thereby reducing the cost and improving the resource utilization rate under the condition of ensuring the service quality of the user; for the continuously-increased service demands and the continuously-fluctuating system load, the invention can make an optimal elastic expansion decision for the container instance in advance according to the real-time situation, thereby meeting the service performance demands of users; the method has important practical significance on application systems of the Internet of things such as intelligent parks, intelligent hospitals and the like.
Thirdly, as inventive supplementary evidence of the claims of the present invention, the following important aspects are also presented:
the technical scheme of the invention overcomes the technical bias: the prior majority of workflow scheduling algorithms are only suitable for single-layer architecture (VM or container) and cannot simultaneously perform task scheduling and elastic expansion at a container layer and a virtual machine layer, the invention combines task scheduling and elastic expansion of micro-services, uses heuristic task scheduling algorithms based on deadline distribution and urgency, considers the influence of container mirror image on container initialization time, provides optimal scheduling and expansion decision by utilizing a future load sequence generated by the micro-service load prediction method provided by the invention, and acquires a scheduling scheme and a new instance to be expanded; using a container scheduling algorithm based on multi-objective optimization, deploying on the existing VM by adopting a best matching strategy, and if the residual resources of the existing VM are insufficient, renting a new VM to deploy the residual container instance. The trade-off between container dependence sensing and load balancing is realized under the condition that the VM resource limitation is met, and a scaling scheme is obtained;
the existing majority of workflow scheduling algorithms only pay attention to LSOW scheduling or DIOW dynamic scheduling, and cloud resources cannot be fully utilized, different scheduling strategies are designed for two types of workflows of the Internet of things system, idle time of a container is fully utilized, and the utilization rate of resources is improved;
the existing micro-service workflow scheduling method often ignores uncertainty in the cloud, so that the uncertainty is propagated under the condition of uncertainty of task execution time or data transmission time, and the follow-up task to be executed is affected. The invention designs a task feedback mechanism, which is mapped and allocated to a service instance only when the task is ready, and all the tasks which are not ready are placed in a task pool. And after the current task is executed, the task state in the task pool is adjusted in real time according to the actual execution condition. And reducing the influence on the subsequent mapping task under the condition that the task execution time or the data transmission time is uncertain.
Drawings
FIG. 1 is a flow chart of a method for elastically scheduling micro-service workflow based on a container cloud platform according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a cloud container service platform architecture implemented by a micro service workflow elastic scheduling method based on a container cloud platform according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of cluster management provided by an embodiment of the present invention;
FIG. 4 is a cluster overview monitor view provided by an embodiment of the invention;
FIG. 5 is a diagram of Pod information for all running applications in a platform provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of node information provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of a node operating state according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a scheduler schedule provided by an embodiment of the present invention;
FIG. 9 is a diagram of user request status monitoring for system APIs provided by an embodiment of the present invention;
fig. 10 is a diagram of a system performance verification result provided by an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, the method for elastically scheduling the micro-service workflow based on the container cloud platform provided by the embodiment of the invention comprises the following steps:
s101, analyzing a user request into a micro-service workflow, and classifying the workflow into a delay-sensitive streaming workflow DSSW and a data-intensive batch workflow RIBW according to an expiration date submitted by a user; respectively adding tasks in the two types of workflow sets to corresponding task pools, wherein ready tasks are added to ready queues; assigning task in the workflow to a sub-deadline;
s102, non-descending order is carried out on tasks in the DSSW according to scheduling urgency; non-descending order sorting is carried out on tasks in the RIBW according to the earliest starting time of the tasks;
s103, distributing or creating an instance minimizing task processing cost for the DSSW, calculating the cost of executing all containers for the RIBW, and selecting a proper container instance for the task with the minimum cost and idle time;
s104, updating the state of the task in the ready queue in time after the task is executed, adding the task in the task pool into the ready queue after the task is ready, and executing step S103 again on the task in the ready queue until the task pool and the ready queue are empty;
s105, determining the mapping relation between the container and the VM, and deploying the new container to be created on the virtual machine.
The specific process of S101 provided by the embodiment of the invention is as follows:
modeling the complete micro-service chain required by a user's request to the system as a micro-service workflow: first, according to the workflow W_S's parameter, the user-requested deadline D_S, distinguish whether the workflow is a DSSW or a RIBW workflow by whether D_S is a hard deadline, and add it to the corresponding set W_on or W_off; then allocate sub-deadlines to the tasks in W_on and add them to the task pool TP_on, while tasks in W_off are added directly to the task pool TP_off. Finally, traverse the task pools TP_on and TP_off respectively, add ready tasks to the corresponding ready queues Q_on and Q_off, and delete them from the task pools.
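A minimal sketch of this classification step, assuming a boolean hard-deadline flag in the workflow meta-information; the field names are illustrative:

```python
# Sketch of the S101 classification: a workflow whose user-requested deadline
# D_S is a hard deadline goes to the online set W_on (DSSW); otherwise it
# goes to the offline set W_off (RIBW).

def classify(workflows):
    w_on, w_off = [], []
    for wf in workflows:
        (w_on if wf["hard_deadline"] else w_off).append(wf["name"])
    return w_on, w_off


w_on, w_off = classify([
    {"name": "video-alert", "hard_deadline": True},   # delay-sensitive
    {"name": "nightly-etl", "hard_deadline": False},  # data-intensive batch
])
```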
The dependency relationships between tasks are captured by a directed acyclic graph (DAG). The sub-deadline of a task is related to the task's expected execution time and its rank; similar to the critical path in the DAG, the rank of task t_i^s represents the length of the path from t_i^s to the exit task t_exit^s. Task t_i^s is allocated a sub-deadline sd(t_i^s) in proportion to its rank within the workflow deadline D_S (the allocation formula appears only as an image in the original). Here PET(t_i^s) is the expected execution time of task t_i^s when no specific instance has been allocated, set to the task's execution time on the highest-configuration container; this execution time is a random variable following a normal distribution, so PET(t_i^s) is defined in terms of μ_i and σ_i^2, the expectation and variance of that random variable (formula image in the original). The rank of task t_i^s is computed recursively over the DAG (formula image in the original), where pred(t_i^s) is the set of direct predecessor tasks of t_i^s, TT(t_p^s, t_i^s) is the data-transmission time between tasks t_p^s and t_i^s, and TT is 0 for the entry task.
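One plausible reading of the rank recursion, under the assumption that rank accumulates expected execution time PET plus transmission time TT along the longest path from the entry task; the patent's exact formula exists only as an image and may differ:

```python
# Assumed rank recursion over the workflow DAG:
#   rank(entry)  = PET(entry)
#   rank(t)      = PET(t) + max over direct predecessors p of
#                  (rank(p) + TT(p, t))

def rank(task, pred, pet, tt, memo=None):
    memo = {} if memo is None else memo
    if task in memo:
        return memo[task]
    if not pred.get(task):                 # entry task: no predecessors
        memo[task] = pet[task]
    else:
        memo[task] = pet[task] + max(
            rank(p, pred, pet, tt, memo) + tt[(p, task)] for p in pred[task])
    return memo[task]


# Toy DAG: t1 -> {t2, t3} -> t4
pred = {"t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
pet = {"t1": 2.0, "t2": 3.0, "t3": 5.0, "t4": 1.0}
tt = {("t1", "t2"): 1.0, ("t1", "t3"): 0.5,
      ("t2", "t4"): 0.2, ("t3", "t4"): 0.3}
r4 = rank("t4", pred, pet, tt)  # longest path runs through t1 -> t3 -> t4
```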
The specific process of S102 provided by the embodiment of the invention, namely sequencing the tasks in the DSSW and RIBW workflow ready queues, comprises the following steps:
1. For W_on:
The scheduling urgency of each ready task in the ready queue is calculated as the basis for task ordering, where a ready task is one whose direct predecessor tasks have all completed, and entry tasks are ready immediately. The urgency formula (shown as an image in the original) relates three quantities: the sub-deadline sd(t_i^s); EFT(t_i^s), the expected earliest completion time of task t_i^s before an execution instance is assigned; and N(t_i^s), the number of unassigned tasks on the critical path from t_i^s to the exit task t_exit^s. The smaller the gap between sd(t_i^s) and EFT(t_i^s), the greater the risk that task t_i^s exceeds its sub-deadline; and the more successor tasks remain to be arranged, the greater the impact of delaying t_i^s, since delay propagation to successors increases the maximum completion time of the workflow. Thus the smaller the urgency value, the higher the urgency of task t_i^s. EST(t_i^s), the earliest start time of t_i^s when no specific instance is assigned, is defined from the completion times of its direct predecessors plus the corresponding data-transmission times (formula image in the original).
2. For W_off:
Non-descending ordering is performed according to the earliest start time of the tasks to save idle time of the service instance.
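The two ordering rules can be sketched as follows. The urgency metric is written under the stated assumption that it equals the slack (sub-deadline minus expected earliest completion time) divided by the number of unassigned critical-path tasks; field names are illustrative:

```python
# Sketch of the S102 ordering rules. Assumed urgency metric:
#   urgency(t) = (sd(t) - EFT(t)) / N(t)
# Smaller slack or more pending critical-path tasks -> smaller value
# -> higher urgency, so sorting non-descending puts urgent tasks first.

def urgency(task):
    return (task["sd"] - task["eft"]) / task["n_unassigned"]


dssw_ready = [
    {"id": "a", "sd": 10.0, "eft": 9.0, "n_unassigned": 2},  # value 0.5
    {"id": "b", "sd": 12.0, "eft": 6.0, "n_unassigned": 3},  # value 2.0
]
dssw_ready.sort(key=urgency)             # non-descending urgency value

ribw_ready = [{"id": "x", "est": 3.0}, {"id": "y", "est": 1.0}]
ribw_ready.sort(key=lambda t: t["est"])  # non-descending earliest start time
```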
The specific process of S103 provided by the embodiment of the invention is as follows:
task allocation requires priority to be handled on existing service instances and creating new service instances when the existing instances are insufficient for service performance.
1. For W_on:
The service instance is selected according to the following three policies. 1) If the idle container set contains instances that can complete the task on time, the least costly such instance is selected. 2) If a new instance is needed, the minimum speed minSpeed a new container instance must have to meet the demand is calculated. If minSpeed is greater than the speed of the highest-configured VM, or is negative, the task is assigned to the existing or newly created instance with the highest configuration, preferring the instance with the smaller earliest finish time EFT; otherwise, an instance is created with a speed slightly greater than minSpeed (configurations are typically integer multiples of discrete units, e.g. 500m of CPU and 512Mi of memory) and the task is assigned to it. 3) If a new instance is created on a virtual machine that lacks the required image, the image is first pulled and the container is then started to execute the task.
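The rounding of a new instance's configuration to discrete units (500m of CPU, 512Mi of memory) mentioned in policy 2) can be sketched as follows (illustrative Python; the function and constant names are assumptions, not from the patent):

```python
import math

CPU_UNIT_MILLICORES = 500  # discrete CPU unit, as in the example above
MEM_UNIT_MI = 512          # discrete memory unit

def round_up(value: float, unit: int) -> int:
    # Container configurations come in integer multiples of discrete
    # units, so a requested minimum is rounded up to the next boundary,
    # yielding a speed "slightly greater than minSpeed".
    return unit * math.ceil(value / unit)

# e.g. a minimum requirement of 1300 millicores of CPU and 900 Mi of memory
print(round_up(1300, CPU_UNIT_MILLICORES))  # -> 1500
print(round_up(900, MEM_UNIT_MI))           # -> 1024
```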
To determine whether a service instance can meet the task's performance requirements, the scheduling buffer time is calculated for every instance msi_{j,k} in the set of all instances capable of executing the task:

buffer(t_i^s, msi_{j,k}) = sd(t_i^s) - ECT(t_i^s, msi_{j,k})

where ECT(t_i^s, msi_{j,k}) is the expected completion time of task t_i^s on instance msi_{j,k}. If buffer(t_i^s, msi_{j,k}) is non-negative, the task can be completed before its sub-deadline; the cost increase costInc(t_i^s, msi_{j,k}) caused by assigning t_i^s to msi_{j,k} is then calculated, and the task is assigned to the instance with the minimal cost increase:

costInc(t_i^s, msi_{j,k}) = cost'(msi_{j,k}) - cost(msi_{j,k})

where cost(msi_{j,k}) and cost'(msi_{j,k}) are the execution costs of instance msi_{j,k} before and after task t_i^s is assigned. Since the system user does not pay directly for container usage time but rents VMs, and since multiple containers can share one VM, it is assumed here that fees are charged in proportion to the resource limits. For example, if two containers share one VM and each container uses half of the VM's resources, each container only pays half of the total price of the billing cycle. The limit of each resource is set according to the maximum proportion: for example, if the task corresponding to a container needs half of the virtual machine's CPU cores but only one quarter of its total memory, the resource with the higher proportion is used for charging. The cost of instance msi_{j,k} is calculated as follows:

cost(msi_{j,k}) = price_{j,k} * ( DFT(c_{j,k}, vm_x) - DST(c_{j,k}, vm_x) )

where price_{j,k} is the cost per unit time of container c_{j,k}, and DST(c_{j,k}, vm_x) and DFT(c_{j,k}, vm_x) are the start and end deployment times of container c_{j,k} on vm_x, respectively. If all buffer values are negative, no instance satisfies the sub-deadline constraint; the minimum speed minSpeed of a new instance that can satisfy the sub-deadline is then calculated, and an instance is selected according to the policies described above based on its value. minSpeed is calculated as follows:

minSpeed = w_i / ( sd(t_i^s) - currentTime - Init(msi_{j,k}) )

where w_i is the computation amount of task t_i^s and Init(msi_{j,k}) is the initialization time of instance msi_{j,k}.
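The buffer-time check and minimal-cost-increase selection described above can be sketched as follows (illustrative Python; the function name, tuple layout, and sample values are assumptions, not from the patent):

```python
def select_instance(task_sd, candidates):
    """candidates: list of (name, expected_completion_time, cost_increase)."""
    # Keep only instances with a non-negative scheduling buffer time,
    # i.e. instances that can finish before the task's sub-deadline.
    feasible = [(name, cost_inc) for name, ect, cost_inc in candidates
                if task_sd - ect >= 0]
    if not feasible:
        return None  # no instance meets the sub-deadline: create a new one
    # Among feasible instances, pick the one with the minimal cost increase.
    return min(feasible, key=lambda x: x[1])[0]

cands = [("msi_a", 55, 0.8), ("msi_b", 48, 0.5), ("msi_c", 49, 0.3)]
print(select_instance(50, cands))  # -> msi_c (feasible, smallest increase)
print(select_instance(40, cands))  # -> None (all buffers negative)
```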
2. For W_off:
For each container running on a compute VM, the cost and the idle time of executing task t_i^b on it are calculated, and the container instance with minimal cost and idle time is selected for the task. The idle time denotes the gap between the ready time of task t_i^b and the ready time of container instance msi_{j,k}, defined as follows:

idle(t_i^b, msi_{j,k}) = | TRT(t_i^b) - CRT(msi_{j,k}) |

where TRT(t_i^b) is the ready time of task t_i^b and CRT(msi_{j,k}) is the ready time of instance msi_{j,k}.
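The cost-and-idle-time selection for W_off can be sketched as follows (illustrative Python; the function name, tuple layout, and sample values are assumptions, not from the patent):

```python
def select_batch_instance(task_ready_time, instances):
    """instances: list of (name, cost, instance_ready_time)."""
    def key(inst):
        name, cost, crt = inst
        # Primary criterion: execution cost; secondary: idle time, i.e.
        # the gap between task ready time and container ready time.
        return (cost, abs(task_ready_time - crt))
    return min(instances, key=key)[0]

# c2 and c3 tie on cost, but c3's ready time is closer to the task's.
print(select_batch_instance(10, [("c1", 2.0, 7),
                                 ("c2", 1.5, 3),
                                 ("c3", 1.5, 9)]))  # -> c3
```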
The specific process of S104 provided by the embodiment of the invention is as follows:
When task t_i^s completes execution, the uncertainty in its execution time and data transmission time no longer exists: the actual finish time AFT(t_i^s) of t_i^s is recorded, and the ready time CRT(msi_{j,k}) of instance msi_{j,k} is updated. Since workflow initialization divides the sub-deadlines sd using the predicted execution time on the highest-configured container, i.e. the execution time before a specific instance is assigned, the sd of the tasks in the task pool must be updated after each task completes. If all direct predecessors of a successor task have completed and the data transmitted after their execution has been received, the successor task is removed from the task pool and added to the ready queue, its ready time is updated, and S103 is executed again for the tasks in the ready queue until both the task pool and the ready queue are empty.
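The online-adjustment step of moving newly ready successors from the task pool into the ready queue can be sketched as follows (illustrative Python; the data structures and names are assumptions, not from the patent):

```python
from collections import deque

def on_task_complete(finished, task_pool, ready_queue, preds):
    """preds maps each pooled task to the set of its unfinished direct
    predecessors; a task becomes ready when that set empties."""
    for succ in list(task_pool):
        preds[succ].discard(finished)
        if not preds[succ]:          # all direct predecessors completed
            task_pool.remove(succ)
            ready_queue.append(succ)  # schedule again via S103

pool = {"t2", "t3"}
queue = deque()
preds = {"t2": {"t1"}, "t3": {"t1", "t2"}}
on_task_complete("t1", pool, queue, preds)
print(sorted(queue), sorted(pool))  # -> ['t2'] ['t3']
```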
The specific process of S105 provided by the embodiment of the invention is as follows:
The newly created containers need to be further deployed onto virtual machines; a container does not occupy all the resources of one VM but shares the VM with other containers. The scaling algorithm must determine the mapping of containers to VMs. First, the new containers to be deployed are sorted in descending order of required resource amount; then all VMs storing the required images are traversed, and a VM is selected by a best-fit (BF) algorithm, which selects the VM with the smallest gap between its remaining resources and the resources required by the container. If the existing VM resources are insufficient to deploy all new containers, new VMs are leased to accommodate them, and the deployment of the remaining containers is modeled as a variable-size vector bin-packing problem.
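The best-fit selection described above can be sketched for a single resource dimension as follows (illustrative Python; the function name and sample values are assumptions, not from the patent):

```python
def best_fit(container_demand, vms):
    """vms: dict of VM name -> remaining resource amount.
    Best fit: among VMs that can hold the container, pick the one with
    the smallest gap between remaining capacity and the demand."""
    fits = {name: cap - container_demand for name, cap in vms.items()
            if cap >= container_demand}
    return min(fits, key=fits.get) if fits else None

print(best_fit(2.0, {"vm1": 6.0, "vm2": 2.5, "vm3": 1.0}))  # -> vm2
# Returning None signals that a new VM must be leased for this container.
```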
Given a set of VM types VMT = {vmt_i}, the size and price of each VM type are R(vmt_i) and price_i. Define R = {r_1, r_2, ..., r_|R|} as the set of resource types (e.g. CPU, memory, or bandwidth). For vm_i, define V_i = (V_i^1, V_i^2, ..., V_i^|R|) as its resource capacity vector, where V_i^j is the available amount of resource r_j on virtual machine vm_i; since a resource capacity vector is defined separately for each VM, the heterogeneous virtual machines in the cluster can be represented explicitly. The preceding task scheduling algorithm produces the set of containers to be deployed, newMSI = {c_1, c_2, ..., c_|N|}, which are to be deployed onto M virtual machines. For container c_i, define D_i = (D_i^1, ..., D_i^|R|) as its resource demand vector, where D_i^j denotes the amount of resource r_j demanded by container c_i. For a micro-service application, let the matrix Dep represent the dependency relationships between containers, where Dep_{i,j} = 1 indicates that container c_i depends on c_j. A deployment scheme implies a mapping of containers to VMs on the cluster; let the matrix X represent the deployment scheme, where X_{i,j} is 1 if container c_i is deployed on virtual machine vm_j, and 0 otherwise.
The deployment scheme must meet the following three goals:
1) Multi-resource guarantee. The demands for the different resource types on a virtual machine must not violate the SLA, i.e. the resource demands of the containers on the same VM must not exceed its capacity:

sum over i of X_{i,j} * D_i^k <= V_j^k, for every virtual machine vm_j and every resource r_k.
2) Load balancing. Since container-based virtualization relies on process isolation, there is potential resource contention, and high resource utilization may reduce the performance of the services running in containers; the cluster load therefore needs to be balanced to alleviate contention. The load-balancing state of the whole container cluster is represented by the maximum resource utilization ratio. Define U_j^k as the utilization ratio of resource r_k on virtual machine vm_j:

U_j^k = ( sum over i of X_{i,j} * D_i^k ) / V_j^k

The maximum resource utilization U_max is defined as:

U_max = max over j, k of U_j^k

To balance the load, a deployment scheme that minimizes U_max should be found, i.e. min(U_max): the smaller the maximum resource utilization ratio, the more balanced the workload on the cluster.
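The maximum resource utilization U_max can be computed as sketched below (illustrative Python; the dictionary layout and sample values are assumptions, not from the patent):

```python
def max_utilization(demands, capacities, placement):
    """demands[c][r]: demand of container c for resource r;
    capacities[v][r]: capacity of VM v for resource r;
    placement: maps each container to the VM hosting it."""
    u_max = 0.0
    for v, caps in capacities.items():
        for r, cap in caps.items():
            # Sum the demands of all containers placed on this VM.
            used = sum(d[r] for c, d in demands.items() if placement[c] == v)
            u_max = max(u_max, used / cap)
    return u_max

demands = {"c1": {"cpu": 2}, "c2": {"cpu": 1}}
caps = {"vm1": {"cpu": 4}, "vm2": {"cpu": 4}}
# Both containers on vm1: 3 of 4 CPU used there, so U_max = 0.75.
print(max_utilization(demands, caps, {"c1": "vm1", "c2": "vm1"}))
```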
3) Dependency awareness. Containers with dependencies are placed on the same virtual machine, because containers on the same virtual machine can use the loopback interface to achieve high network performance without consuming actual network resources. The number of dependent containers placed in the same VM is defined as L_dep:

L_dep = sum over i, j, m of Dep_{i,j} * X_{i,m} * X_{j,m}

and the goal of dependency awareness is to maximize this value: max(L_dep).
Because the resource capacity on each virtual machine is limited, it is usually impossible to place all containers with dependencies on the same virtual machine. The multi-resource guarantee is therefore set as the primary goal of the mapping scheme, and a trade-off is made between the load-balancing and dependency-awareness goals.
In summary, the overall goal of the method is to meet the micro-service workflow deadlines while minimizing cloud resource cost and improving the scheduling success rate:

min cost
s.t. makespan_s + a_s <= D_s
ratio = succ_s / |W_on|

where ratio is the scheduling success rate of the micro-service workflows, a_s is the actual start time of workflow s, succ_s is the number of successfully scheduled LSOWs, and |W_on| is the total number of scheduled LSOWs.
To demonstrate the inventiveness and technical value of the claimed solution, this section presents an application example of the claimed solution in specific products and related technologies.
The container-cloud-platform-based micro-service workflow elastic scheduling provided by the embodiment of the invention is applied to computer equipment; a system architecture diagram of the cloud container service platform that integrates the algorithm provided by the invention into a Kubernetes cluster is shown in figure 2. The micro-service task scheduling and container elastic scaling scheme provided by the invention corresponds to the scaling execution module in the figure. As part of the management layer, it collects the running-state data of service instances in the infrastructure layer and optimizes the task scheduling and container elastic scaling of micro-service applications deployed on the platform, reducing cost while meeting the service requirements of user requests. The running projects are shown in fig. 3, and the resource usage of each node in the cluster, such as CPU usage, average CPU load, memory usage, and container group (Pod) usage, is shown in fig. 7.
To optimally manage applications based on the micro-service architecture, the disclosed system is deployed into a cluster to serve as the operation platform for micro-service projects. The platform meets users' performance requirements while minimizing service cost, supports real-time visual monitoring of cluster and node state, and simplifies the DevOps workflow. At present, the cloud container service platform has been integrated into an intelligent building comprehensive management and control system and an intelligent hospital operation and maintenance platform, providing services to Internet of Things system users based on the micro-service architecture through effective scheduling and elastic scaling strategies.
Because there is a large amount of data fusion between the existing subsystems of the two Internet of Things systems, and each subsystem must acquire its data from Internet of Things devices, these operations require coordination between tasks. The service types provided include real-time monitoring, device control, fault diagnosis, and so on; the load is heavy and the utilization of computing and storage resources is high. To respond quickly to user requests and keep the systems running stably, the two developed systems were deployed onto the service platform developed here, which effectively improves service quality, avoids system crashes caused by sharply increasing access requests, and reduces cost.
The test platform uses 12 virtual machines to build three clusters, managed through a federation mechanism; each cluster comprises one master node and three worker nodes. To process user requests, the system services that execute user tasks are first published to the platform. Most services are developed in Java and are compiled and packaged as jar files; when adding a service, the user provides information such as the service name, function description, and jar path, and the platform creates the service and downloads it to build the service image. In a Kubernetes cluster, the two components kube-scheduler and kube-controller-manager manage functions such as task scheduling and automatic Pod scaling; both expose custom interfaces, so developers can modify the rules of the scheduler and controller in different development languages according to project requirements. Task scheduling and load balancing in the cluster are optimized, and the cluster is monitored with KubeSphere.
Information summaries of the Host cluster and the Member cluster, such as information of names, the number of nodes in the cluster, used k8s version and the like, are displayed in cluster management, and are shown in fig. 3.
The usage of resources in the cluster, such as CPU usage, memory usage, container group usage, and local storage usage, can be checked through the cluster overview, and the specific monitoring view is shown in fig. 4.
Information on all Pods running in the platform is shown in fig. 5. In the micro-service task scheduling process, the scheduling algorithm preferentially selects an existing container, or activates a sleeping container from the container cache queue, to execute the corresponding task; when a new Pod must be created, the scheduler creates it and selects the most suitable node for deployment through the algorithm. During deployment, nodes are selected according to a comprehensive evaluation of each node's information, including CPU usage, CPU load, memory usage, disk usage, and network state; the specific information is shown in fig. 6. Tasks are performed by different Pods within each node; the operation of each node is shown in fig. 7.
During scheduling, the platform monitors the request state of the scheduler and of users on the system API, and checks the scheduling situation and request pressure at different times to dynamically adjust the scheduling strategy. The scheduler's scheduling situation is shown in fig. 8, and the monitoring of user request status for the system API is shown in fig. 9.
The research group's self-developed intelligent building system is deployed in the two clusters m1 and m2, where the m1 cluster uses a scheduler implementing the method provided by the invention, and the m2 cluster uses the default scheduler, i.e. a threshold-based elastic scaling strategy. Concurrent thread groups in the stress-testing tool JMeter are used to simulate user request load and verify the effectiveness of the method. Because the load prediction method predicts future CPU usage, the HPA strategy in cluster m2, which uses the default scheduler, adopts the average CPU load as its scaling metric; the minimum Pod number minReplicas of the HPA is set to 2 and the maximum Pod number maxReplicas to 7, so the cluster can scale elastically within the range of 2 to 7 Pods.
In the HPA, the default cooldown period is 3 minutes for scale-up and 5 minutes for scale-down; the start-up parameters of the kube-controller-manager component are tuned to a uniform cooldown time of 5 minutes. The test time was 75 minutes. First, the load was continuously increased using the JMeter thread group, reaching peaks at time points 5 and 7, with the maximum load at time point 7. The load was then reduced, reaching a trough at time point 10, then continuously increased again to a peak at time point 13, and finally continuously decreased. The results of the two elastic scaling verifications are shown in FIG. 9.
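For reference, the replica count that a default Kubernetes HPA targets follows the documented rule desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped to [minReplicas, maxReplicas]; the sketch below applies that rule with the experiment's bounds of 2 and 7 (illustrative Python; the function name is an assumption):

```python
import math

def hpa_desired_replicas(current, metric, target, lo=2, hi=7):
    # Kubernetes HPA scaling rule: scale proportionally to how far the
    # observed metric is from its target, then clamp to the configured
    # minReplicas/maxReplicas bounds (2 and 7 in the experiment above).
    return max(lo, min(hi, math.ceil(current * metric / target)))

print(hpa_desired_replicas(3, metric=90, target=50))  # ceil(5.4) -> 6
print(hpa_desired_replicas(3, metric=20, target=50))  # ceil(1.2) clamped -> 2
```

This reactive rule is what introduces the hysteresis that the proposed predictive scaler avoids by adjusting replicas in advance.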
As can be seen from the comparison of the results, as the load continuously increases, the m1 cluster adds or deletes Pod replicas in advance, improving the average response time of user micro-service requests by 12.69% compared with the m2 cluster, and avoiding the hysteresis of the HPA policy, which would otherwise increase the average response time and waste resources. The method therefore better meets users' service performance requirements.
It should be noted that the embodiments of the present invention can be realized in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or special purpose design hardware. Those of ordinary skill in the art will appreciate that the apparatus and methods described above may be implemented using computer executable instructions and/or embodied in processor control code, such as provided on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The device of the present invention and its modules may be implemented by hardware circuitry, such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., as well as software executed by various types of processors, or by a combination of the above hardware circuitry and software, such as firmware.
The foregoing is merely illustrative of specific embodiments of the present invention, and the scope of the invention is not limited thereto, but any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention will be apparent to those skilled in the art within the scope of the present invention.

Claims (9)

1. The micro-service workflow elastic scheduling method based on the container cloud platform is characterized by comprising the following steps:
firstly, dividing a workflow into a delay-sensitive type flow workflow DSSW and a resource-intensive batch processing workflow RIBW according to micro-service workflow meta-information; calculating the sub-expiration date of each task in the workflow, and adding the sub-expiration date into a corresponding task pool;
secondly, adding ready tasks in the DSSW and RIBW workflow task pools into a ready queue and sequencing; different allocation strategies are adopted to allocate or create containers for the DSSW and the RIBW workflows, and a task scheduling scheme is obtained;
and finally, determining the mapping relation between the container and the VM by adopting a new container instance deployment algorithm of multi-objective optimization to obtain a deployment scheme.
2. The micro-service workflow elastic scheduling method based on the container cloud platform as claimed in claim 1, wherein the micro-service workflow elastic scheduling method based on the container cloud platform comprises the following steps:
step one, analyzing a user request into a micro-service workflow, and classifying the workflow into a delay-sensitive streaming workflow DSSW and a data-intensive batch workflow RIBW according to the expiration date submitted by the user; respectively adding tasks in the two types of workflow sets to corresponding task pools, wherein ready tasks are added to ready queues; assigning task in the workflow to a sub-deadline;
step two, non-descending order sorting is carried out on tasks in the DSSW according to scheduling urgency; non-descending order sorting is carried out on tasks in the RIBW according to the earliest starting time of the tasks;
step three, an instance minimizing the task processing cost is allocated or created for the DSSW, the cost of executing all containers is calculated for the RIBW, and a proper container instance is selected for the task with the minimum cost and idle time;
step four, updating the state of the tasks in the ready queue in time after a task is executed, adding tasks from the task pool into the ready queue once they are ready, and executing step three again on the tasks in the ready queue until the task pool and the ready queue are empty;
and fifthly, determining the mapping relation between the container and the VM, and deploying the new container to be created on the virtual machine.
3. The method for flexible scheduling of micro-service workflow based on container cloud platform as recited in claim 2, wherein said ordering tasks in DSSW workflow and RIBW workflow task pools comprises:
for DSSW workflow: calculating the scheduling urgency of each ready task in a task pool according to the number of unassigned tasks on a critical path from a current task to an exiting task in the task pool and the expected completion time of the tasks, and sequencing the tasks according to the scheduling urgency;
for the RIBW workflow: ordering is performed according to the earliest starting time of the current task in the task pool.
4. The method for flexible scheduling of micro-service workflow based on container cloud platform as recited in claim 2, wherein said assigning or creating an instance for DSSW that minimizes task processing costs, calculating the cost of all container execution for RIBW, and selecting an appropriate container instance for task at minimum cost and idle time comprises:
for DSSW: the service instance is selected according to the following three policies. 1) If there are instances in the free container set that can complete the task on time, the least costly instance is selected. 2) If a new instance is needed, the minimum speed minSpeed of the new container instance meeting the demand is calculated, and whether the minSpeed is greater than the speed of the highest ranked VM or negative is compared, if so, the task is assigned to the existing or created instance with the highest configuration, and the task is assigned to the instance with the smaller earliest end time EFT. If not, an instance is created with a speed slightly greater than minSpeed, typically configured as an integer multiple of discrete units, e.g., 500m for CPU and 512Mi for memory, and tasks are assigned to the instance. 3) If a new instance is created in a certain virtual machine and needs to be mirrored, firstly pulling the mirror and starting a container to execute tasks;
for RIBW: the cost of executing a task on each running container on the VM is calculated, and an appropriate container instance is selected for the task at minimal cost and idle time.
5. The method for flexible scheduling of micro-service workflow based on container cloud platform as claimed in claim 2, wherein determining the mapping relationship of the container to the VM comprises:
sorting the new containers to be deployed in descending order of required resource amount, traversing all VMs storing the required image, and selecting the VM with the smallest gap between its remaining resources and the resources required by the container through a best-fit algorithm;
if all new containers cannot be deployed with the existing VM resources, renting new VMs to accommodate them, modeling the deployment of the remaining containers as a bin-packing problem, and obtaining a deployment scheme by setting the multi-resource guarantee as the primary goal of the mapping scheme and trading off load balancing against dependency awareness;
the multi-resource guarantee means that the resource demand of the containers in the same VM must not exceed its capacity; container dependency awareness means placing containers with dependencies on one VM as far as possible; the multi-objective trade-off means maximizing the number of dependent containers on one virtual machine while keeping the value of the maximum resource utilization to a minimum.
If all container instances cannot be deployed on the existing VM instances, X new virtual machines are rented so that the total resource amount is larger than the resources required by the remaining containers; the new virtual machines in the set may be of multiple different types or of the same type. The deployment method is then invoked again until all container instances to be deployed have been deployed, yielding the scaling scheme.
6. A container cloud platform-based micro-service workflow elastic scheduling system for implementing the container cloud platform-based micro-service workflow elastic scheduling method of any one of claims 1 to 5, wherein the container cloud platform-based micro-service workflow elastic scheduling system comprises:
the workflow initialization module classifies the workflow into delay-sensitive streaming workflow DSSW and data-intensive batch processing workflow RIBW according to the deadline submitted by the user; adds the different types of tasks into task pools, each implemented with a Kafka message queue, and, when a task is ready, consumes the message queue of the Topic configured for the task pool and adds the task to the ready queue; and assigns sub-deadlines according to the expected execution time PET of each task in the workflow, i.e. the execution time of the task on the highest-configured container instance, and the rank of the task;
the task ordering module is used for ordering tasks in the delay-sensitive streaming workflow DSSW and data-intensive batch processing workflow RIBW ready queues;
the task allocation module is used for consuming the information from the Topic set by the ready queue, adopting different strategies for different tasks, allocating or creating an instance for minimizing the task processing cost for the DSSW and calculating the executing cost of all containers for the RIBW, and selecting a proper container instance for the task according to the minimum cost and idle time;
and the online adjustment module is used for updating the meta information of the task in the ready queue and the ready time of the subsequent task and the container instance in time after the task execution is completed.
And the container deployment module is used for determining the mapping relation between the container and the VM and deploying the created new container to the virtual machine.
7. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the container cloud platform based micro-service workflow resilient scheduling method of any one of claims 1-5.
8. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the container cloud platform based micro-service workflow resilient scheduling method of any one of claims 1-5.
9. An information data processing terminal, wherein the information data processing terminal is used for realizing the micro-service workflow elastic scheduling system based on a container cloud platform as claimed in claim 6.
CN202310199403.2A 2023-03-04 2023-03-04 Micro-service workflow elastic scheduling method, system and equipment based on container cloud platform Pending CN116302519A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310199403.2A CN116302519A (en) 2023-03-04 2023-03-04 Micro-service workflow elastic scheduling method, system and equipment based on container cloud platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310199403.2A CN116302519A (en) 2023-03-04 2023-03-04 Micro-service workflow elastic scheduling method, system and equipment based on container cloud platform

Publications (1)

Publication Number Publication Date
CN116302519A true CN116302519A (en) 2023-06-23

Family

ID=86784607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310199403.2A Pending CN116302519A (en) 2023-03-04 2023-03-04 Micro-service workflow elastic scheduling method, system and equipment based on container cloud platform

Country Status (1)

Country Link
CN (1) CN116302519A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117056061A (en) * 2023-10-13 2023-11-14 浙江远算科技有限公司 Cross-supercomputer task scheduling method and system based on container distribution mechanism
CN117056061B (en) * 2023-10-13 2024-01-09 浙江远算科技有限公司 Cross-supercomputer task scheduling method and system based on container distribution mechanism


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination