CN111240819A - Dispatching task issuing system and method

Dispatching task issuing system and method

Info

Publication number: CN111240819A
Authority: CN (China)
Prior art keywords: scheduling, task, execution, issuing, scheduling task
Legal status: Pending
Application number: CN202010024753.1A
Other languages: Chinese (zh)
Inventors: 王豪森, 简闻, 苏鹏, 苏哲浩
Current Assignee: Shandong Inspur Genersoft Information Technology Co Ltd
Original Assignee: Shandong Inspur Genersoft Information Technology Co Ltd
Priority date / Filing date: 2020-01-10
Publication date: 2020-06-05

Classifications

    • G06F9/4881 — Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/5027 — Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/546 — Message passing systems or structures, e.g. queues
    • G06F2209/548 — Indexing scheme relating to G06F9/54: Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a scheduling task issuing system and method. The method comprises the following steps: S1, delivering the scheduling task to be issued to a corresponding task executor, which is responsible for scheduling and executing it; S2, allocating the hardware scheduling resources required for running the scheduling task; S3, associating all scheduling tasks according to their logical relations to form a scheduling job; S4, establishing a scheduling policy for each scheduling task in the scheduling job; and S5, issuing the scheduling job and starting execution. The issuing method adopts a task-executor design that supports multiple task types and calling modes and shields the internal implementation of programs of different languages and types; users only need to pay attention to how a task is executed and called, and even if the program code is changed or rebuilt in another language, the scheduling task can be reissued simply by adjusting its calling configuration information.

Description

Dispatching task issuing system and method
Technical Field
The invention relates to the technical field of data processing, in particular to a system and a method for issuing scheduling tasks.
Background
With the continuous development of social informatization and software technology, the monolithic software architecture of traditional standalone systems can no longer keep up with constantly changing business requirements, while its ever-growing size and high maintenance cost mean that a small change often ripples through the entire system. As a result, more and more software is moving from monolithic applications to micro-service architectures and big data platform technology frameworks. This transition brings advantages and convenience, but also new problems. Because development languages are diverse and developers' backgrounds differ greatly, many different types of programs (tasks) are generated and run on a big data platform. How to schedule these various tasks on demand and enable communication among them, how to reduce the modification of original business code caused by scheduling logic, and how to issue and run scheduling tasks quickly, simply, and out of the box, have therefore become top priorities.
The traditional approach is to modify the original code logic so that programs are called and executed through a unified API interface. Program developers then have to consider the implementation logic of the scheduling tasks, compatibility problems arising from deployment on different operating systems, the specific scheduling policy of each task, and other aspects. This significantly increases development workload, lengthens the transformation period, raises resource input and cost, and results in low efficiency, difficult maintenance, and slow response to requirements.
As the number of scheduling tasks issued in a system keeps growing, unified management and monitoring of tasks based on different languages and calling modes, together with cooperative work, intercommunication, and exception handling among tasks, become particularly critical.
Disclosure of Invention
To address the above shortcomings, the invention aims to provide a scheduling task issuing system and method with which scheduling tasks can be issued quickly based on configuration.
The technical scheme adopted by the invention is as follows:
A method for issuing scheduling tasks comprises the following steps:
S1, delivering the scheduling task to be issued to a corresponding task executor, which is responsible for scheduling and executing it;
S2, allocating the hardware scheduling resources required for running the scheduling task;
S3, associating all scheduling tasks according to their logical relations to form a scheduling job;
S4, establishing a scheduling policy for each scheduling task in the scheduling job;
S5, issuing the scheduling job and starting execution.
Specifically, in step S1, the specific execution information of the scheduling task to be issued is input into the task executor through the following steps:
S11, filling in the name, function, and version description information of the scheduling task;
S12, selecting the type of the scheduling task;
S13, setting the calling mode of the scheduling task;
S14, setting the parameter information of the scheduling task;
S15, selecting the program execution mode of the scheduling task.
Further, in step S12, the types of the scheduling task include program tasks and database tasks; the program task types include .NET, Java, Python, Golang, C++, Shell, and EXE, and the database tasks include SQL statements, stored procedures, and database jobs.
Further, in step S13, the calling modes of the scheduling task include program reflection calls and Web service access, where Web service access includes SOAP-type and REST-type services.
Specifically, in step S2, the process of allocating the hardware scheduling resources required for running the scheduling task includes the following steps:
S21, registering and uniformly managing the hardware resources used to execute scheduling tasks, where a hardware resource is a single host or a resource pool composed of multiple hosts;
S22, setting an upper load limit for each hardware resource; when the utilization of a hardware resource reaches the set upper limit, the resource no longer executes newly allocated scheduling tasks, and the allocated scheduling tasks enter a waiting queue.
Specifically, in step S3 of the present invention, based on a process engine, a visual interface is used to associate all scheduling tasks according to their execution order, and to set the dependency relationships, execution sequence, execution conditions, and communication data of each scheduling task in the scheduling job.
Specifically, in step S4 of the present invention, the scheduling policy includes a time policy, a routing policy, a blocking policy, and an exception handling policy, wherein:
the time policy sets the execution time of each scheduling task;
the routing policy defines the rule by which scheduling hardware resources are allocated to each scheduling task;
the blocking policy defines the handling rule applied when the hardware resources required by a scheduling task cannot be satisfied;
the exception handling policy defines the handling rule applied when a task executor encounters an execution exception.
The invention also provides a scheduling task issuing system, which comprises task executors, scheduling resources, a process engine, a scheduling policy set, and an issuing unit, wherein:
the task executor is used for storing the execution information of the corresponding scheduling task and driving the scheduling task to be executed;
the scheduling resources are used for providing hardware resource support for task execution;
the process engine is used for establishing a logical sequence relation among the scheduling tasks and combining them into a scheduling job;
the scheduling policy set is used for defining the logic rules for the operation of the scheduling tasks;
the issuing unit is used for issuing the scheduling job and starting its execution.
As a further optimization, the execution information of the scheduling task in the present invention includes the basic information of the scheduling task, the type of the scheduling task, the calling mode of the scheduling task, the parameter information of the scheduling task, and the program execution mode of the scheduling task, where the basic information includes the name, function, and version description information.
As a further optimization, the scheduling policy set of the present invention includes a time policy, a routing policy, a blocking policy, and an exception handling policy, wherein:
the time policy sets the execution time of each scheduling task;
the routing policy defines the rule by which scheduling hardware resources are allocated to each scheduling task;
the blocking policy defines the handling rule applied when the hardware resources required by a scheduling task cannot be satisfied;
the exception handling policy defines the handling rule applied when a task executor encounters an execution exception.
The invention has the following advantages:
1. The issuing method adopts a task-executor design that supports multiple task types and calling modes and shields the internal implementation of programs of different languages and types; users only need to pay attention to how a task is executed and called, and even if the program code is changed or rebuilt in another language, the scheduling task can be reissued simply by adjusting its calling configuration information.
2. The issuing method separates the specific execution content of tasks, the dependencies among tasks, the execution carrier resources of tasks, and the setting of the tasks' scheduling logic, embodying a modular design idea and decoupling complex functions from business logic.
3. By modularizing each configuration link when issuing a scheduling task, the issuing method splits the originally complex issuing process and hands its different stages to program developers, hardware administrators, and maintenance personnel respectively; each person only needs to concentrate on the field relevant to their own work, which reduces unnecessary resource investment.
4. Based on the process engine, the issuing system can easily configure the dependency relationships, execution sequence, execution conditions, communication data, and other contents among tasks through a visual interface. This facilitates the combination and cooperation of multiple scheduling tasks, so that complex business scenarios and work contents can be supported.
5. The issuing system provides rich scheduling policy configuration, meets the requirements of different scheduling scenarios, improves the flexibility of the scheduling logic, reduces configuration difficulty, and can be adjusted at any time as the business changes.
6. Scheduling tasks issued by the issuing system are uniformly managed and monitored, with operation monitoring, statistical reports, log recording, and exception early warning provided through a visual interface.
7. Based on the above characteristics, the invention is well suited to programs under big data and micro-service architectures; it makes it easy to quickly issue original code as scheduling tasks, saves a large amount of development, testing, and implementation cost, greatly improves working efficiency, and significantly reduces working difficulty.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings based on them without creative effort.
The invention is further described below with reference to the accompanying drawings:
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of the system of the present invention.
Detailed Description
The present invention is further described below with reference to the drawings and specific embodiments so that those skilled in the art can better understand and implement it; however, the embodiments are not to be construed as limiting the present invention, and the embodiments and their technical features can be combined with each other provided they do not conflict.
It is to be understood that the terms first, second, and the like in the description of the embodiments of the invention are used for distinguishing between the descriptions and not necessarily for describing a sequential or chronological order. The "plurality" in the embodiment of the present invention means two or more.
The term "and/or" in the embodiment of the present invention is only an association relationship describing an associated object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, B exists alone, and A and B exist at the same time. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
As shown in fig. 2, the present invention provides an embodiment of a scheduling task issuing system, where the issuing system includes task executors, scheduling resources, a process engine, a scheduling policy set, and an issuing unit, wherein:
The task executor is used for storing the execution information of the corresponding scheduling task and driving the scheduling task to be executed; the execution information of the scheduling task includes the basic information of the scheduling task, the type of the scheduling task, the calling mode of the scheduling task, the parameter information of the scheduling task, and the program execution mode of the scheduling task, where the basic information includes the name, function, and version description information.
The scheduling resources are used for providing hardware resource support for task execution; a scheduling resource can be a single host or a resource pool composed of multiple host resources, and each resource pool can act as an independent hardware resource unit participating in the execution of tasks.
The process engine is used for establishing a logical sequence relation among the scheduling tasks and combining them into a scheduling job.
The scheduling policy set is used for defining the logic rules for the operation of the scheduling tasks; the scheduling policy set includes a time policy, a routing policy, a blocking policy, and an exception handling policy, wherein: the time policy sets the execution time of each scheduling task; the routing policy defines the rule by which scheduling hardware resources are allocated to each scheduling task; the blocking policy defines the handling rule applied when the hardware resources required by a scheduling task cannot be satisfied; and the exception handling policy defines the handling rule applied when a task executor encounters an execution exception.
The issuing unit is used for issuing the scheduling job and starting its execution.
As shown in fig. 1, this embodiment further provides a method for issuing a scheduling task, where the method includes the following steps:
S1, the scheduling task to be issued is handed to the corresponding task executor for scheduling and execution, and the specific execution information of the scheduling task to be issued is input into the task executor through the following steps:
S11, filling in the name, function, and version description information of the scheduling task;
S12, selecting the type of the scheduling task; the types of scheduling tasks include program tasks and database tasks, the program task types include .NET, Java, Python, Golang, C++, Shell, and EXE, and the database tasks include SQL statements, stored procedures, and database jobs.
S13, setting the calling mode of the scheduling task; the calling modes include program reflection calls and Web service access, where Web service access includes SOAP-type and REST-type services.
S14, setting the parameter information of the scheduling task; according to the task type of S12 and the calling mode of step S13, fill in the call parameter information, such as the program's package name, class name, method name, input parameters, service address, or stored procedure name.
S15, selecting the program execution mode of the scheduling task; the program execution modes include a multi-instance mode and a singleton mode.
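To make the executor configuration in steps S11-S15 more concrete, the following is a minimal Python sketch of how the collected execution information could be represented; all class and field names are illustrative assumptions rather than the patent's actual data model.

```python
# Minimal sketch (not the patent's data model): one way to represent the
# execution information collected in steps S11-S15 as a task-executor record.
from dataclasses import dataclass, field
from enum import Enum


class TaskType(Enum):
    DOTNET = "dotnet"
    JAVA = "java"
    PYTHON = "python"
    GOLANG = "golang"
    CPP = "cpp"
    SHELL = "shell"
    EXE = "exe"
    SQL = "sql"
    STORED_PROCEDURE = "stored_procedure"
    DATABASE_JOB = "database_job"


class CallMode(Enum):
    REFLECTION = "reflection"   # program reflection call
    SOAP = "soap"               # Web service access, SOAP type
    REST = "rest"               # Web service access, REST type


class ExecutionMode(Enum):
    SINGLETON = "singleton"            # only one instance runs at a time
    MULTI_INSTANCE = "multi_instance"  # several instances may run concurrently


@dataclass
class TaskExecutor:
    name: str                     # S11: basic information
    function: str
    version: str
    task_type: TaskType           # S12: type of the scheduling task
    call_mode: CallMode           # S13: calling mode
    call_params: dict = field(default_factory=dict)  # S14: package/class/method, service address, ...
    execution_mode: ExecutionMode = ExecutionMode.SINGLETON  # S15: program execution mode
```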
S2, allocating the hardware scheduling resources required for running the scheduling task. Specifically, the hardware scheduling resources are allocated through the following steps:
S21, registering and uniformly managing the hardware resources used to execute scheduling tasks, where a hardware resource is a single host or a resource pool composed of multiple hosts, and each resource pool can act as an independent hardware resource unit participating in the execution of scheduling tasks;
S22, setting an upper load limit for each hardware resource; when the utilization of a hardware resource reaches the set upper limit, the resource no longer executes newly allocated scheduling tasks, and the allocated scheduling tasks enter a waiting queue.
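As an illustration of steps S21-S22, the following Python sketch models a resource pool with an upper load limit and a waiting queue; the class names, fields, and dispatch logic are assumptions, not the patent's implementation.

```python
# Minimal sketch (assumptions only): a resource pool that refuses new scheduling
# tasks once a host's utilization reaches its upper load limit, and parks them
# in a waiting queue instead (steps S21-S22).
from collections import deque
from dataclasses import dataclass, field


@dataclass
class HostResource:
    name: str
    load_limit: float            # upper bound on utilization, e.g. 0.8 for 80 %
    utilization: float = 0.0     # assumed to be refreshed by a monitoring agent

    def can_accept(self) -> bool:
        return self.utilization < self.load_limit


@dataclass
class ResourcePool:
    hosts: list = field(default_factory=list)
    waiting_queue: deque = field(default_factory=deque)

    def register(self, host: HostResource) -> None:
        """S21: register a host so the pool manages it uniformly."""
        self.hosts.append(host)

    def dispatch(self, task_id: str):
        """S22: run the task on a host below its limit, otherwise queue it."""
        for host in self.hosts:
            if host.can_accept():
                return host          # caller would start the task on this host
        self.waiting_queue.append(task_id)
        return None                  # no capacity; the task waits in the queue
```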
S3, associating all scheduling tasks according to their logical relations to form a scheduling job. This step is based on a process engine: through a visual interface such as a display screen, all scheduling tasks are associated according to their execution order to form a scheduling job, and the dependency relationships, execution sequence, execution conditions, and communication data of each scheduling task in the scheduling job are set.
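The following Python sketch illustrates, under assumed names such as SchedulingJob and JobStep, how scheduling tasks could be associated into a job with dependencies, execution conditions, and communication data in the spirit of step S3; it is not the patent's actual process-engine API.

```python
# Illustrative sketch of composing task executors into a scheduling job as a
# simple dependency graph (all names are assumptions, not the patent's API).
from dataclasses import dataclass, field


@dataclass
class JobStep:
    executor_name: str
    depends_on: list = field(default_factory=list)    # upstream steps
    condition: str = ""                                # execution condition, e.g. "A_R1 == 'success'"
    shared_data: dict = field(default_factory=dict)    # communication data passed downstream


@dataclass
class SchedulingJob:
    name: str
    steps: list = field(default_factory=list)

    def add_step(self, step: JobStep) -> None:
        self.steps.append(step)

    def execution_order(self) -> list:
        """Topologically sort steps so upstream tasks run before downstream ones."""
        ordered, done = [], set()
        remaining = list(self.steps)
        while remaining:
            for step in remaining:
                if all(dep in done for dep in step.depends_on):
                    ordered.append(step)
                    done.add(step.executor_name)
                    remaining.remove(step)
                    break
            else:
                raise ValueError("cyclic dependency between scheduling tasks")
        return ordered
```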
S4, establishing a scheduling policy for each scheduling task in the scheduling job. The scheduling policy includes a time policy, a routing policy, a blocking policy, and an exception handling policy, wherein:
Time policy: the execution time of the scheduling task is set through a visual interface or a Cron expression.
Routing policy: the rule by which the scheduling task is allocated to the scheduling resources set in S2, such as a designated host, random selection, round-robin allocation, or load-rate priority; round-robin allocation includes time-slice round robin and weighted round robin.
Blocking policy: the handling strategy applied when a scheduling task's request for an exclusive resource cannot be satisfied, or when execution devices are too busy for the task to be scheduled, for example timeout preemption or waiting in a queue.
Exception handling policy: the handling strategy applied when a task executor encounters an execution exception, including the number of retry attempts on failure, the retry interval, whether subsequent programs continue or are blocked, and the selection of an exception handling task.
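As a hedged illustration of the four policy kinds above, the following Python sketch bundles a time policy (Cron expression), routing policy, blocking policy, and exception handling policy into one structure; all names and defaults are assumptions rather than the patent's configuration schema.

```python
# Minimal sketch (field names are assumptions): one possible scheduling policy
# bundle covering time, routing, blocking and exception handling.
from dataclasses import dataclass
from enum import Enum


class RoutingRule(Enum):
    FIXED_HOST = "fixed_host"                    # always use a designated host
    RANDOM = "random"                            # random selection
    ROUND_ROBIN = "round_robin"                  # time-slice round robin
    WEIGHTED_ROUND_ROBIN = "weighted_round_robin"
    LOWEST_LOAD = "lowest_load"                  # load-rate priority


class BlockingRule(Enum):
    TIMEOUT_PREEMPT = "timeout_preempt"   # preempt the resource after a timeout
    WAIT_IN_QUEUE = "wait_in_queue"       # park the task in the waiting queue


@dataclass
class SchedulingPolicy:
    cron: str                       # time policy, e.g. "0 0 0 ? * 2-6"
    routing: RoutingRule
    blocking: BlockingRule
    retry_count: int = 0            # exception handling: retries on failure
    retry_interval_s: int = 0       # exception handling: seconds between retries
    block_downstream: bool = True   # whether an exception blocks subsequent tasks
    on_exception_task: str = ""     # optional exception-handling task to run
```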
S5, issuing the scheduling job and starting execution, with operation monitoring, statistical reports, log records, and exception early warning displayed through a visual interface.
Taking a scheduling project scenario of the enterprise ERP software financial module developed by the applicant as an example, the scenario has the following task requirements:
1) At 00:00 on every working day, start five financial sharing platform voucher-creation robot tasks, which run on different servers; the executive program of the robots is written in Python;
2) After the five robot programs have finished running, start an Oracle stored procedure that automatically calculates the statistical report;
3) After the statistical report calculation is completed, start the funds analysis service by accessing its REST API interface to complete the funds analysis work.
The specific configuration of the scheduling task issuing method for the task requirements of this scenario is as follows:
S1, setting the task executors;
a) Create a task executor A for executing the financial sharing platform voucher-creation robot task. Fill in the relevant description information of the executor, set the task type to Python, call it by reflection, and configure the relevant call parameters. Since the task can run concurrently, set the task mode to the multi-instance mode.
b) Create a task executor B for executing the statistical report auto-calculation task. Select stored procedure as the task type, fill in the name of the stored procedure, and set the database type to Oracle. Set the task mode to the singleton mode.
c) Create a task executor C to perform the funds analysis task. Select Web service as the task type, call it in REST mode, and fill in the service address, parameters, and other information. Set the task mode to the singleton mode.
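For illustration only, the configuration of the three executors A, B, and C in this scenario might be captured as follows; every value shown (module name, procedure name, service URL) is a hypothetical placeholder, not the applicant's actual setting.

```python
# Illustrative executor configuration for the scenario (all values are assumptions).
executors = {
    "A": {  # voucher-creation robots, Python program called by reflection
        "task_type": "Python", "call_mode": "reflection",
        "call_params": {"module": "voucher_robot", "method": "run"},       # hypothetical names
        "execution_mode": "multi-instance",   # five instances run concurrently
    },
    "B": {  # statistical report calculation, Oracle stored procedure
        "task_type": "stored procedure", "call_mode": "database",
        "call_params": {"database": "Oracle", "procedure": "CALC_REPORT"},  # hypothetical name
        "execution_mode": "singleton",
    },
    "C": {  # funds analysis, Web service accessed in REST mode
        "task_type": "Web service", "call_mode": "REST",
        "call_params": {"url": "https://example.internal/funds/analyze"},   # hypothetical URL
        "execution_mode": "singleton",
    },
}
```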
S2, setting scheduling resources;
Register the five servers Se1-Se5 used to run the scheduling tasks, set related information such as the server IP addresses and operating system types, and set the upper limit of resource utilization for each server; after the setting is completed, servers Se1-Se5 form a resource pool P1.
S3, establishing a scheduling job;
A new scheduling job J1 is created and configured as follows:
a) Task executor A is selected as the first step of the scheduling job. A parameter variable A_R1 is set to save the execution result of task executor A, where "success" indicates successful execution and "failure" indicates failed execution.
b) Task executor B is selected as the second step of the scheduling job. Since the execution mode of task executor A is the multi-instance mode and multiple instances can execute concurrently, the execution condition of task executor B is set so that B can execute only after the return result parameter A_R1 of every instance of task executor A equals "success".
c) Task executor C is selected as the third step of the scheduling job. Since task executor B has no return value parameter, the execution condition of task executor C is set to: task executor B has completed execution without reporting an exception.
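The dependency chain of job J1 described in a)-c) can be sketched as follows; the step names and condition strings simply mirror the text above and are assumptions, not the system's actual expression syntax.

```python
# Illustrative composition of scheduling job J1 (a sketch, not the product's format).
job_j1 = [
    {"step": 1, "executor": "A", "depends_on": [],
     "condition": None},                                          # voucher-creation robots
    {"step": 2, "executor": "B", "depends_on": ["A"],
     "condition": "A_R1 == 'success' for every instance of A"},   # all robots succeeded
    {"step": 3, "executor": "C", "depends_on": ["B"],
     "condition": "B finished without reporting an exception"},
]

# The steps run strictly in dependency order: A -> B -> C.
for step in sorted(job_j1, key=lambda s: s["step"]):
    print(step["executor"], "waits for", step["depends_on"] or "nothing")
```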
S4, establishing the scheduling policies;
The scheduling job J1 is selected and configured as follows:
a) Select task executor A in the step of scheduling job J1, and set its time policy to execute at 00:00 every Monday to Friday, currently in the form of a Cron expression whose value is set to "0 0 0 ? * 2-6"; the number of task instances is set to "5", meaning that five programs execute simultaneously; the routing policy is random allocation, meaning that the five instances of task executor A are randomly distributed to five different hosts for execution.
b) Select task executor B in the step of scheduling job J1, and set its time policy to execute after task executor A finishes; the exception handling policy is to suspend subsequent tasks, and the early-warning policy is set to send an e-mail to the administrator.
c) Select task executor C in the step of scheduling job J1; its relevant scheduling policy settings are the same as those of B.
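A sketch of the policy configuration for job J1 follows; the Cron string uses the Quartz convention in which day-of-week 2-6 means Monday to Friday, and the remaining keys and values are assumptions mirroring the settings described above.

```python
# Illustrative policy configuration for job J1 (values are assumptions following the text).
policies = {
    "A": {
        "time_policy": {"cron": "0 0 0 ? * 2-6"},    # 00:00 every Monday to Friday
        "instance_count": 5,                          # five programs run simultaneously
        "routing_policy": "random",                   # spread instances over five different hosts
    },
    "B": {
        "time_policy": "after A finishes",            # triggered by completion of A
        "exception_policy": {"on_error": "suspend subsequent tasks",
                             "alert": "mail administrator"},
    },
    "C": "same as B",                                 # C reuses B's policy settings
}
```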
S5, issuing the scheduling job; the scheduling job J1 configured in the above steps is issued and execution starts.
The above-mentioned embodiments are merely preferred embodiments given to fully illustrate the present invention, and the scope of the present invention is not limited thereto. Equivalent substitutions or changes made by those skilled in the art on the basis of the invention all fall within its protection scope, which is defined by the claims.

Claims (10)

1. A method for issuing a scheduling task, characterized in that the method comprises the following steps:
S1, delivering the scheduling task to be issued to a corresponding task executor, which is responsible for scheduling and executing it;
S2, allocating the hardware scheduling resources required for running the scheduling task;
S3, associating all scheduling tasks according to their logical relations to form a scheduling job;
S4, establishing a scheduling policy for each scheduling task in the scheduling job;
S5, issuing the scheduling job and starting execution.
2. The issuing method according to claim 1, characterized in that: in step S1, the specific execution information of the scheduling task to be issued is input into the task executor through the following steps:
S11, filling in the name, function, and version description information of the scheduling task;
S12, selecting the type of the scheduling task;
S13, setting the calling mode of the scheduling task;
S14, setting the parameter information of the scheduling task;
S15, selecting the program execution mode of the scheduling task.
3. The issuing method according to claim 2, characterized in that: in step S12, the types of the scheduling task include program tasks and database tasks; the program task types include .NET, Java, Python, Golang, C++, Shell, and EXE, and the database tasks include SQL statements, stored procedures, and database jobs.
4. The issuing method according to claim 2, characterized in that: in step S13, the calling modes of the scheduling task include program reflection calls and Web service access, where Web service access includes SOAP-type and REST-type services.
5. The issuing method according to claim 1, characterized in that: in step S2, the process of allocating the hardware scheduling resources required for running the scheduling task includes the following steps:
S21, registering and uniformly managing the hardware resources used to execute scheduling tasks, where a hardware resource is a single host or a resource pool composed of multiple hosts;
S22, setting an upper load limit for each hardware resource; when the utilization of a hardware resource reaches the set upper limit, the resource no longer executes newly allocated scheduling tasks, and the allocated scheduling tasks enter a waiting queue.
6. The issuing method according to claim 1, characterized in that: in step S3, based on a process engine, a visual interface is used to associate all scheduling tasks according to their execution order, and to set the dependency relationships, execution sequence, execution conditions, and communication data of each scheduling task in the scheduling job.
7. The issuing method according to claim 1, characterized in that: in step S4, the scheduling policy includes a time policy, a routing policy, a blocking policy, and an exception handling policy, wherein:
the time policy sets the execution time of each scheduling task;
the routing policy defines the rule by which scheduling hardware resources are allocated to each scheduling task;
the blocking policy defines the handling rule applied when the hardware resources required by a scheduling task cannot be satisfied;
the exception handling policy defines the handling rule applied when a task executor encounters an execution exception.
8. A system for issuing a scheduling task, characterized by comprising task executors, scheduling resources, a process engine, a scheduling policy set, and an issuing unit, wherein:
the task executor is used for storing the execution information of the corresponding scheduling task and driving the scheduling task to be executed;
the scheduling resources are used for providing hardware resource support for task execution;
the process engine is used for establishing a logical sequence relation among the scheduling tasks and combining them into a scheduling job;
the scheduling policy set is used for defining the logic rules for the operation of the scheduling tasks;
the issuing unit is used for issuing the scheduling job and starting its execution.
9. The scheduling task issuing system according to claim 8, characterized in that: the execution information of the scheduling task includes the basic information of the scheduling task, the type of the scheduling task, the calling mode of the scheduling task, the parameter information of the scheduling task, and the program execution mode of the scheduling task, where the basic information includes the name, function, and version description information.
10. The scheduling task issuing system according to claim 9, characterized in that: the scheduling policy set includes a time policy, a routing policy, a blocking policy, and an exception handling policy, wherein:
the time policy sets the execution time of each scheduling task;
the routing policy defines the rule by which scheduling hardware resources are allocated to each scheduling task;
the blocking policy defines the handling rule applied when the hardware resources required by a scheduling task cannot be satisfied;
the exception handling policy defines the handling rule applied when a task executor encounters an execution exception.
CN202010024753.1A 2020-01-10 2020-01-10 Dispatching task issuing system and method Pending CN111240819A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010024753.1A CN111240819A (en) 2020-01-10 2020-01-10 Dispatching task issuing system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010024753.1A CN111240819A (en) 2020-01-10 2020-01-10 Dispatching task issuing system and method

Publications (1)

Publication Number Publication Date
CN111240819A true CN111240819A (en) 2020-06-05

Family

ID=70870944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010024753.1A Pending CN111240819A (en) 2020-01-10 2020-01-10 Dispatching task issuing system and method

Country Status (1)

Country Link
CN (1) CN111240819A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060095914A1 (en) * 2004-10-01 2006-05-04 Serguei Mankovski System and method for job scheduling
US20080120619A1 (en) * 2006-11-16 2008-05-22 Sun Microsystems, Inc. Real time monitoring and tracing of scheduler decisions
CN106027617A (en) * 2016-05-11 2016-10-12 广东浪潮大数据研究有限公司 Method for implementing dynamic scheduling of tasks and resources in private cloud environment
CN107818112A (en) * 2016-09-13 2018-03-20 腾讯科技(深圳)有限公司 A kind of big data analysis operating system and task submit method
CN108845878A (en) * 2018-05-08 2018-11-20 南京理工大学 The big data processing method and processing device calculated based on serverless backup
CN109086986A (en) * 2018-07-20 2018-12-25 中国邮政储蓄银行股份有限公司 job scheduling method and device
CN109814995A (en) * 2019-01-04 2019-05-28 深圳壹账通智能科技有限公司 Method for scheduling task, device, computer equipment and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694671A (en) * 2020-06-12 2020-09-22 北京金山云网络技术有限公司 Big data component management method, device, server, electronic equipment and system
CN111694671B (en) * 2020-06-12 2023-09-01 北京金山云网络技术有限公司 Big data component management method, device, server, electronic equipment and system
CN112000722A (en) * 2020-08-17 2020-11-27 杭州数云信息技术有限公司 Real-time heterogeneous source data synchronization system and synchronization method
CN117472530A (en) * 2023-10-25 2024-01-30 上海宽睿信息科技有限责任公司 Centralized management-based data intelligent scheduling method and system
CN117472530B (en) * 2023-10-25 2024-04-05 上海宽睿信息科技有限责任公司 Centralized management-based data intelligent scheduling method and system

Similar Documents

Publication Publication Date Title
US10540351B2 (en) Query dispatch and execution architecture
CN112379995B (en) DAG-based unitized distributed scheduling system and method
US7689989B2 (en) Thread monitoring using shared memory
WO2020228534A1 (en) Micro-service component-based database system and related method
US8789058B2 (en) System and method for supporting batch job management in a distributed transaction system
CN101694709B (en) Service-oriented distributed work flow management system
EP1679602B1 (en) Shared memory based monitoring for application servers
US20070283351A1 (en) Unified job processing of interdependent heterogeneous tasks
US8112526B2 (en) Process migration based on service availability in a multi-node environment
CN111240819A (en) Dispatching task issuing system and method
CN108762900A (en) High frequency method for scheduling task, system, computer equipment and storage medium
JP2016106329A (en) Transactional graph-based computation with error handling
CN111682973B (en) Method and system for arranging edge cloud
US8250131B1 (en) Method and apparatus for managing a distributed computing environment
CN101464810A (en) Service program processing method and server
US7836448B1 (en) System and methods for task management
CN101751288A (en) Method, device and system applying process scheduler
US20090319662A1 (en) Process Migration Based on Exception Handling in a Multi-Node Environment
CN113485812B (en) Partition parallel processing method and system based on large-data-volume task
US8458716B2 (en) Enterprise resource planning with asynchronous notifications of background processing events
US20200301737A1 (en) Configurable data parallelization method and system
WO2024037132A1 (en) Workflow processing method and apparatus, and device, storage medium and program product
CN114787836A (en) System and method for remotely executing one or more arbitrarily defined workflows
CN110162381A (en) Proxy executing method in a kind of container
CN115858499A (en) Database partition processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200605