CN110737485A - workflow configuration system and method based on cloud architecture - Google Patents
workflow configuration system and method based on cloud architecture
- Publication number
- CN110737485A CN110737485A CN201910930459.4A CN201910930459A CN110737485A CN 110737485 A CN110737485 A CN 110737485A CN 201910930459 A CN201910930459 A CN 201910930459A CN 110737485 A CN110737485 A CN 110737485A
- Authority
- CN
- China
- Prior art keywords
- workflow
- queue
- instance
- engine
- manager
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44521—Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
- G06F9/44526—Plug-ins; Add-ons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/182—Distributed file systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention provides a workflow configuration system and method based on a cloud architecture. A workflow parser, a workflow manager, and a job scheduler are installed as independent pluggable plug-ins in the Hadoop cloud computing cluster of an existing, large-scale cloud workflow system, giving the cluster the ability to identify and intelligently schedule workflow instance jobs and thereby breaking the barrier between the workflow engine and the underlying cloud computing cluster. In addition, a workflow definition file generating component is added to the external workflow engine, so that the workflow system already running in production gains partial customization functions and performance optimizations.
Description
Technical Field
The invention relates to the field of workflows, and in particular to a workflow configuration system and method based on a cloud architecture.
Background
With the rapid development of cloud computing technology and the growing computational demands of mass data, workflow systems are increasingly regarded as the link and bridge between user services and cloud computing resources. Most existing workflow management systems run as third-party independent systems paired with Hadoop or other distributed computing platforms, while the actual execution of workflows is managed entirely inside those platforms. Although this arrangement simplifies workflow management, it leaves workflow execution unoptimized: workflows can be triggered in only a single mode, workflow life-cycle control is impossible, large amounts of computing and storage resources are wasted, and workflow tasks cannot be allocated optimally in real time according to heterogeneous factors such as the resource amounts of different cluster nodes and the network.
Disclosure of Invention
In view of this, the invention provides a workflow configuration system and method based on a cloud architecture that can break the barrier between the workflow engine and the underlying cloud computing cluster and allocate resources reasonably.
The invention provides a workflow configuration system based on a cloud architecture, comprising a user platform, a workflow construction tool, a workflow engine, and a Hadoop computing cluster, wherein the Hadoop computing cluster comprises a workflow parser, a first workflow manager, and a job scheduler;
the user platform is used for calling the workflow construction tool;
the workflow construction tool is used for building a workflow instance; after the workflow instance is built, the corresponding job execution file, data file, auxiliary cache file, and generated iPDL file are submitted to the workflow engine;
the workflow engine submits workflow jobs to the workflow parser through the configuration file attached to the workflow instance;
the workflow parser identifies and parses the workflow jobs and submits the resulting workflow instance information to the first workflow manager;
the first workflow manager registers the workflow instance in a workflow instance queue;
the job scheduler manages resource allocation, and the first workflow manager feeds back the execution status of the workflow instance to the workflow engine through a callback URL.
On the basis of the above technical solution, preferably, the workflow parser, the first workflow manager, and the job scheduler are all independent pluggable modules.
On the basis of the above technical solution, preferably, the workflow engine includes a workflow monitor, a second workflow manager and a workflow database;
the workflow monitor preliminarily checks the trigger conditions of a workflow instance, and an instance that meets its execution constraints enters the execution flow; the second workflow manager generates the workflow job and the iPDL description file and submits the workflow job to the workflow parser; when the workflow engine submits the workflow job, it also submits the iPDL description file of the workflow instance to which the job belongs to the workflow database.
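The preliminary trigger check performed by the workflow monitor can be sketched as follows. This is a minimal sketch; the class and field names are assumptions, since the patent names only time-condition and cluster-data-availability triggers without fixing a representation:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, Set

@dataclass
class WorkflowInstance:
    name: str
    not_before: Optional[datetime] = None                   # time-condition trigger
    required_paths: Set[str] = field(default_factory=set)   # data-availability trigger

def meets_execution_constraints(inst: WorkflowInstance,
                                now: datetime,
                                available_paths: Set[str]) -> bool:
    """An instance enters the execution flow only when its time condition
    has passed and all cluster data it depends on is available."""
    if inst.not_before is not None and now < inst.not_before:
        return False
    return inst.required_paths <= available_paths
```

Instances failing either check would stay with the monitor rather than entering the execution flow.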
A workflow configuration method based on cloud architecture, comprising the following steps:
S1, build the system architecture of claim 1;
S2, after the user builds the basic DAG structure of the workflow instance, the user platform submits the job execution file, the data file, the attached cache file, and the generated iPDL file of the workflow instance to the workflow engine;
S3, the workflow engine preliminarily checks the trigger conditions of the workflow instance; an instance that meets its execution constraints enters the execution flow, and the workflow engine submits the workflow jobs whose execution dependencies are satisfied one by one according to the flow structure;
S4, when the workflow engine submits a workflow job to the Hadoop computing cluster, it also submits the iPDL description file of the workflow instance to which the job belongs to the distributed cache directory of the temporary job directory in the workflow database;
S5, the workflow parser judges the type of each job and looks for the iPDL description file; when the file is found, the job is treated as a workflow job, the iPDL description file is parsed, and the resulting workflow instance information is updated to the first workflow manager;
S6, the first workflow manager checks the trigger condition configuration of the workflow instance; if a trigger configuration exists, the instance enters a sleep queue and a corresponding trigger event is registered with the cluster monitor module; when the trigger condition is satisfied, the instance enters the execution queue and, after initialization, is handed to the workflow scheduler for resource allocation.
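Step S6 can be sketched as a small state machine: instances with a trigger configuration sleep until the cluster monitor fires their event, while the rest go straight to the execution queue. The method names and the string event representation are illustrative assumptions, not part of the patent:

```python
class FirstWorkflowManager:
    """Minimal sketch of the sleep-queue / execution-queue handoff in S6."""

    def __init__(self):
        self.sleep_queue = {}       # instance name -> registered trigger event
        self.execution_queue = []   # instances ready for the workflow scheduler

    def admit(self, name, trigger_event=None):
        if trigger_event is None:
            self.execution_queue.append(name)       # no trigger config: run now
        else:
            self.sleep_queue[name] = trigger_event  # sleep until the event fires

    def on_trigger(self, event):
        """Called when the cluster monitor reports that an event has fired."""
        for name, ev in list(self.sleep_queue.items()):
            if ev == event:
                del self.sleep_queue[name]
                self.execution_queue.append(name)
```

In the patent's flow, everything placed on `execution_queue` would then be initialized and handed to the workflow scheduler.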
On the basis of the above technical solution, preferably, the trigger conditions controlled by the workflow engine in S3 include time condition triggers and cluster data availability triggers.
On the basis of the above technical solution, preferably, the workflow scheduler's management of resource allocation in S6 specifically includes the following steps:
S201, set up a scheduling model consisting of priority queues, a task pool, task queues, and a scheduler;
S202, divide workflows into a high-preference preemptive queue, a high-priority non-preemptive queue, and a basic priority queue; the high-preference preemptive queue has the highest priority, and when it requests resources the system stops the current task, puts it back into the task pool, and executes the high-preference preemptive queue first; the high-priority non-preemptive queue has the right of priority response but cannot preempt the task resources of the basic priority queue;
S203, only one task executes at a time on each computing resource; scheduled tasks are placed into the task queue of each computing resource in the order in which they will use the resource and wait there until the resource is available;
S204, the scheduler updates the SD factor of each DAG; tasks are sorted from high to low by priority, and tasks with the same priority are sorted from low to high by SD factor.
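Steps S201–S204 can be sketched as follows. The concrete data structures (a single heap ordered by the pair of priority level and SD factor, plus one running slot per resource) are assumptions; the patent fixes only the three queue levels, the one-task-per-resource rule, and the priority/SD ordering:

```python
import heapq

# S202 levels; a smaller value is scheduled first.
PREEMPTIVE, NON_PREEMPTIVE, BASIC = 0, 1, 2

class ThreeLevelScheduler:
    def __init__(self, resources):
        self.pool = []                                # task pool: (level, sd, name)
        self.running = {r: None for r in resources}   # S203: one task per resource

    def submit(self, name, level, sd_factor):
        # S204 ordering: priority high-to-low, then SD factor low-to-high.
        heapq.heappush(self.pool, (level, sd_factor, name))
        if level == PREEMPTIVE:
            # S202: a high-preference preemptive task stops lower-level
            # running tasks and returns them to the task pool.
            for res, task in self.running.items():
                if task is not None and task[0] > PREEMPTIVE:
                    heapq.heappush(self.pool, task)
                    self.running[res] = None

    def dispatch(self, resource):
        if self.running[resource] is None and self.pool:
            self.running[resource] = heapq.heappop(self.pool)
        return self.running[resource]

    def finish(self, resource):
        self.running[resource] = None
```

A preempted task is not lost: it waits in the pool and is re-dispatched once the preemptive task releases the resource.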
Compared with the prior art, the workflow configuration system and method based on the cloud architecture have the following beneficial effects:
(1) the workflow parser, the workflow manager, and the job scheduler are installed as independent pluggable plug-ins in the Hadoop cloud computing cluster of an existing, large-scale cloud workflow system, so that the cluster gains the ability to identify and intelligently schedule workflow instance jobs and the barrier between the workflow engine and the underlying cloud computing cluster is broken; meanwhile, a workflow definition file generating component is additionally installed on the external workflow engine, so that the workflow system already running in production gains partial customization functions and performance optimizations;
(2) by dividing task flows into three levels and assigning a priority to each level, the order in which the scheduler dispatches the task flows makes reasonable use of resources.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below are only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a block diagram of a workflow configuration system based on a cloud architecture according to the present invention;
FIG. 2 is a flowchart of a workflow configuration method based on a cloud architecture according to the present invention;
fig. 3 is a flowchart of the workflow scheduler's resource allocation management in the workflow configuration method based on a cloud architecture according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below; it is obvious that the described embodiments are only some of the embodiments of the present invention, rather than all of them.
As shown in FIG. 1, a workflow configuration system based on a cloud architecture according to the invention comprises a user platform, a workflow construction tool, a workflow engine, and a Hadoop computing cluster.
The user platform serves as an interactive bridge between the user and the workflow engine: the user calls the workflow construction tool through a web page or a desktop client of the user platform and uses it to create a workflow instance.
The workflow construction tool assembles a visual DAG structure diagram and the corresponding workflow definition language file; after the workflow instance is built, the corresponding job execution file, data file, auxiliary cache file, and generated iPDL file are submitted to the workflow engine.
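What the construction tool hands to the engine might look like the following sketch. The real iPDL serialization is not disclosed in the patent, so the JSON shape, file content, and function name here are purely assumptions for illustration:

```python
import json

def build_instance(name, edges):
    """Assemble a basic DAG from (upstream, downstream) job pairs and
    serialize it as an iPDL-like definition document."""
    jobs = sorted({j for edge in edges for j in edge})
    return {
        "instance": name,
        "jobs": jobs,
        "dependencies": [{"from": u, "to": v} for u, v in edges],
    }

# A simple ETL workflow: extract -> transform -> load.
ipdl_text = json.dumps(
    build_instance("etl_daily", [("extract", "transform"),
                                 ("transform", "load")]),
    indent=2)
```

The engine would later attach this file to each submitted job so the cluster-side parser can recover the instance structure.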
In this embodiment, the workflow engine comprises a workflow monitor, a second workflow manager, and a workflow database. The workflow monitor preliminarily checks the trigger conditions of a workflow instance, and an instance that meets its execution constraints enters the execution flow; the second workflow manager generates the workflow job and the iPDL description file and submits them to the workflow parser; when the workflow engine submits the workflow job, it also submits the iPDL description file of the workflow instance to which the job belongs to the workflow database.
In this embodiment, the Hadoop computing cluster comprises a workflow parser, a first workflow manager, and a job scheduler, all built into the Hadoop cluster as pluggable Hadoop modules, so that Hadoop gains the ability to identify and understand workflow instances and the barrier between the workflow engine and the underlying cloud computing cluster is broken. Specifically, the workflow parser identifies and parses workflow jobs and submits the resulting workflow instance information to the first workflow manager; the first workflow manager registers the workflow instance in a workflow instance queue, hands resource allocation to the workflow scheduler, and at the same time feeds back the execution status of the workflow instance to the workflow engine through a callback URL (uniform resource locator).
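The parser's first step, deciding whether a submitted job is a workflow job at all, reduces to checking for the iPDL description file alongside the job. The file name, directory layout, and JSON payload below are assumptions; the patent says only that the parser looks for the iPDL description file and parses it when found:

```python
import json
from pathlib import Path

def classify_job(job_dir):
    """Return parsed workflow-instance metadata if the job's temporary
    directory contains an iPDL description file, or None for an ordinary
    Hadoop job that should be left to the normal scheduler."""
    ipdl = Path(job_dir) / "workflow.ipdl"
    if not ipdl.exists():
        return None
    return json.loads(ipdl.read_text())
```

A `None` result means the cluster treats the submission as a plain job; any metadata returned would be forwarded to the first workflow manager.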
Embodiment 2
On the basis of Embodiment 1, this embodiment provides a workflow configuration method based on a cloud architecture, including the following steps:
S1, build the system architecture of claim 1;
S2, after the user builds the basic DAG structure of the workflow instance, the user platform submits the job execution file, the data file, the attached cache file, and the generated iPDL file of the workflow instance to the workflow engine;
S3, the workflow engine preliminarily checks the trigger conditions of the workflow instance; an instance that meets its execution constraints enters the execution flow, and the workflow engine submits the workflow jobs whose execution dependencies are satisfied one by one according to the flow structure;
Preferably, the trigger conditions controlled by the workflow engine include time condition triggers and cluster data availability triggers.
S4, when the workflow engine submits a workflow job to the Hadoop computing cluster, it also submits the iPDL description file of the workflow instance to which the job belongs to the distributed cache directory of the temporary job directory in the workflow database;
S5, the workflow parser judges the type of each job and looks for the iPDL description file; when the file is found, the job is treated as a workflow job, the iPDL description file is parsed, and the resulting workflow instance information is updated to the first workflow manager;
S6, the first workflow manager checks the trigger condition configuration of the workflow instance; if a trigger configuration exists, the instance enters a sleep queue and a corresponding trigger event is registered with the cluster monitor module; when the trigger condition is satisfied, the instance enters the execution queue and, after initialization, is handed to the workflow scheduler for resource allocation.
Further, preferably, the workflow scheduler's management of resource allocation specifically includes the following steps:
S201, set up a scheduling model consisting of priority queues, a task pool, task queues, and a scheduler;
S202, divide workflows into a high-preference preemptive queue, a high-priority non-preemptive queue, and a basic priority queue; the high-preference preemptive queue has the highest priority, and when it requests resources the system stops the current task, puts it back into the task pool, and executes the high-preference preemptive queue first; the high-priority non-preemptive queue has the right of priority response but cannot preempt the task resources of the basic priority queue;
S203, only one task executes at a time on each computing resource; scheduled tasks are placed into the task queue of each computing resource in the order in which they will use the resource and wait there until the resource is available;
S204, the scheduler updates the SD factor of each DAG; tasks are sorted from high to low by priority, and tasks with the same priority are sorted from low to high by SD factor.
When a task finishes execution and releases its resources, the scheduler returns the successor tasks of that task directly to the task pool, updates the SD factor of each DAG according to the current time, and then schedules the tasks in the task pool according to the scheduling policy; when a new job arrives, the scheduler likewise updates the system's operating time according to the current time and performs job scheduling again.
Further, the scheduling policies of the scheduler preferably include a job scheduling policy for DAGs with the same priority and a job scheduling policy for DAGs with different priorities.
The job scheduling policy for DAGs with the same priority is as follows: a heuristic information factor, Delaytime, is introduced to represent the progress of the whole job through the tasks not yet executed in the task queue, ensuring fairness in resource preemption and resource use. The job scheduling policy for DAGs with different priorities is as follows: the scheduler schedules the tasks of high-priority jobs first and uses a backfill algorithm for the tasks of low-priority jobs, so that they do not block the whole resource queue.
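The backfill idea for DAGs of different priorities can be sketched as a single admission test: a low-priority task may use idle slots only if it fits in them and would finish before the start time reserved for the waiting high-priority job. The parameter names and the slot model are assumptions, since the patent names the backfill algorithm without detailing it:

```python
def can_backfill(free_slots, now, needed_slots, est_runtime, reserved_start):
    """Conservative backfill admission test for a low-priority task.

    free_slots     -- idle resource slots right now
    needed_slots   -- slots the candidate task requires
    est_runtime    -- the candidate's estimated run time
    reserved_start -- start time reserved for the next high-priority job
    """
    fits = needed_slots <= free_slots
    finishes_in_time = now + est_runtime <= reserved_start
    return fits and finishes_in_time
```

This keeps resources busy between high-priority jobs without ever delaying a high-priority reservation.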
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (6)
1. A workflow configuration system based on a cloud architecture, comprising a user platform, a workflow construction tool, a workflow engine, and a Hadoop computing cluster, characterized in that the Hadoop computing cluster comprises a workflow parser, a first workflow manager, and a job scheduler;
the user platform is used for calling the workflow construction tool;
the workflow construction tool is used for building a workflow instance; after the workflow instance is built, the corresponding job execution file, data file, auxiliary cache file, and generated iPDL file are submitted to the workflow engine;
the workflow engine submits workflow jobs to the workflow parser through the configuration file attached to the workflow instance;
the workflow parser identifies and parses the workflow jobs and submits the resulting workflow instance information to the first workflow manager;
the first workflow manager registers the workflow instance in a workflow instance queue;
the job scheduler manages resource allocation, and the first workflow manager feeds back the execution status of the workflow instance to the workflow engine through a callback URL.
2. The cloud architecture-based workflow configuration system of claim 1, wherein the workflow parser, the first workflow manager, and the job scheduler are all independent pluggable modules.
3. The cloud architecture-based workflow configuration system of claim 1, wherein said workflow engine comprises a workflow monitor, a second workflow manager and a workflow database;
the workflow monitor preliminarily checks the trigger conditions of a workflow instance, and an instance that meets its execution constraints enters the execution flow;
and the second workflow manager generates a workflow job and an iPDL description file and submits them to the workflow parser; when the workflow engine submits the workflow job, the second workflow manager submits the iPDL description file of the workflow instance to which the job belongs to the workflow database.
4. A workflow configuration method based on a cloud architecture, characterized by comprising the following steps:
S1, build the system architecture of claim 1;
S2, after the user builds the basic DAG structure of the workflow instance, the user platform submits the job execution file, the data file, the attached cache file, and the generated iPDL file of the workflow instance to the workflow engine;
S3, the workflow engine preliminarily checks the trigger conditions of the workflow instance; an instance that meets its execution constraints enters the execution flow, and the workflow engine submits the workflow jobs whose execution dependencies are satisfied one by one according to the flow structure;
S4, when the workflow engine submits a workflow job to the Hadoop computing cluster, it also submits the iPDL description file of the workflow instance to which the job belongs to the distributed cache directory of the temporary job directory in the workflow database;
S5, the workflow parser judges the type of each job and looks for the iPDL description file; when the file is found, the job is treated as a workflow job, the iPDL description file is parsed, and the resulting workflow instance information is updated to the first workflow manager;
S6, the first workflow manager checks the trigger condition configuration of the workflow instance; if a trigger configuration exists, the instance enters a sleep queue and a corresponding trigger event is registered with the cluster monitor module; when the trigger condition is satisfied, the instance enters the execution queue and, after initialization, is handed to the workflow scheduler for resource allocation.
5. The cloud architecture-based workflow configuration method of claim 4, wherein the trigger conditions controlled by the workflow engine in S3 include time condition triggers and cluster data availability triggers.
6. The workflow configuration method based on cloud architecture as claimed in claim 4, wherein the workflow scheduler's management of resource allocation in S6 includes the following steps:
S201, set up a scheduling model consisting of priority queues, a task pool, task queues, and a scheduler;
S202, divide workflows into a high-preference preemptive queue, a high-priority non-preemptive queue, and a basic priority queue; the high-preference preemptive queue has the highest priority, and when it requests resources the system stops the current task, puts it back into the task pool, and executes the high-preference preemptive queue first; the high-priority non-preemptive queue has the right of priority response but cannot preempt the task resources of the basic priority queue;
S203, only one task executes at a time on each computing resource; scheduled tasks are placed into the task queue of each computing resource in the order in which they will use the resource and wait there until the resource is available;
S204, the scheduler updates the SD factor of each DAG; tasks are sorted from high to low by priority, and tasks with the same priority are sorted from low to high by SD factor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910930459.4A CN110737485A (en) | 2019-09-29 | 2019-09-29 | workflow configuration system and method based on cloud architecture |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110737485A true CN110737485A (en) | 2020-01-31 |
Family
ID=69269750
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910930459.4A Pending CN110737485A (en) | 2019-09-29 | 2019-09-29 | workflow configuration system and method based on cloud architecture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110737485A (en) |
- 2019-09-29: CN application CN201910930459.4A filed; published as CN110737485A (status: pending)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109861850A (en) * | 2019-01-11 | 2019-06-07 | 中山大学 | A method of the stateless cloud workflow load balance scheduling based on SLA |
Non-Patent Citations (2)
Title |
---|
LIU Lian: "Design and Implementation of a Hadoop-Based Workflow System", China Master's Theses Full-text Database, Information Science and Technology Series *
SUN Yue et al.: "A Preemptive Scheduling Strategy for Multi-DAG Workflows in Cloud Computing", Computer Science *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111414601A (en) * | 2020-03-27 | 2020-07-14 | 中国人民解放军国防科技大学 | Continuous identity authentication method, system and medium for kylin mobile operating system |
CN111414601B (en) * | 2020-03-27 | 2023-10-03 | 中国人民解放军国防科技大学 | Continuous identity authentication method, system and medium for kylin mobile operation system |
CN112698931A (en) * | 2021-01-12 | 2021-04-23 | 北京理工大学 | Distributed scheduling system for cloud workflow |
CN112698931B (en) * | 2021-01-12 | 2022-11-11 | 北京理工大学 | Distributed scheduling system for cloud workflow |
CN113158113A (en) * | 2021-05-17 | 2021-07-23 | 上海交通大学 | Multi-user cloud access method and management system for biological information analysis workflow |
CN114595580A (en) * | 2022-03-09 | 2022-06-07 | 北京航空航天大学 | Complex workflow engine method meeting optimization design of large flexible blade |
CN114595580B (en) * | 2022-03-09 | 2024-05-28 | 北京航空航天大学 | Complex workflow engine method meeting optimization design of large flexible blade |
CN116737331A (en) * | 2023-03-27 | 2023-09-12 | 联洋国融(北京)科技有限公司 | Intelligent task flow arrangement method and platform |
CN116737331B (en) * | 2023-03-27 | 2024-08-23 | 联洋国融(北京)科技有限公司 | Intelligent task flow arrangement method and platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110737485A (en) | workflow configuration system and method based on cloud architecture | |
US8332862B2 (en) | Scheduling ready tasks by generating network flow graph using information receive from root task having affinities between ready task and computers for execution | |
US8185903B2 (en) | Managing system resources | |
US8869165B2 (en) | Integrating flow orchestration and scheduling of jobs and data activities for a batch of workflows over multiple domains subject to constraints | |
US20140201756A1 (en) | Adaptive resource usage limits for workload management | |
US20110154350A1 (en) | Automated cloud workload management in a map-reduce environment | |
US8250131B1 (en) | Method and apparatus for managing a distributed computing environment | |
US20030149717A1 (en) | Batch processing job streams using and/or precedence logic | |
US11010195B2 (en) | K-tier architecture scheduling | |
US20100121904A1 (en) | Resource reservations in a multiprocessor computing environment | |
Guo et al. | Delay-optimal scheduling of VMs in a queueing cloud computing system with heterogeneous workloads | |
Lee et al. | Resource scheduling in dependable integrated modular avionics | |
Medina et al. | Scheduling multi-periodic mixed-criticality dags on multi-core architectures | |
Bai et al. | ASDYS: Dynamic scheduling using active strategies for multifunctional mixed-criticality cyber–physical systems | |
Klusáček et al. | Efficient grid scheduling through the incremental schedule‐based approach | |
Binns | A robust high-performance time partitioning algorithm: the digital engine operating system (DEOS) approach | |
Ru et al. | An efficient deadline constrained and data locality aware dynamic scheduling framework for multitenancy clouds | |
CN111444001A (en) | Cloud platform task scheduling method and system | |
Gutiérrez García et al. | Prioritizing remote procedure calls in Ada distributed systems | |
Lu et al. | Dynamic virtual resource management in clouds coping with traffic burst | |
Liu et al. | Schedule dynamic multiple parallel jobs with precedence-constrained tasks on heterogeneous distributed computing systems | |
Gu et al. | Synthesis of real-time implementations from component-based software models | |
Suzuki et al. | Execution Right Delegation Scheduling Algorithm for Multiprocessor | |
Dabhi et al. | Soft computing based intelligent grid architecture | |
CN110865886B (en) | Harmonious perception multiprocessor scheduling method for multi-probabilistic parameter real-time task |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication ||
Application publication date: 20200131 |