CN112612584A - Task scheduling method and device, storage medium and electronic equipment - Google Patents

Task scheduling method and device, storage medium and electronic equipment

Info

Publication number
CN112612584A
CN112612584A
Authority
CN
China
Prior art keywords
task
job
processed
queue
judging whether
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011488716.2A
Other languages
Chinese (zh)
Inventor
朱凯
潘登
张锐
李昂
胡艺
陈泽华
王涛
陈凯
库生玉
周华巍
黄剑
杜永亮
郭晴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuanguang Software Co Ltd
Original Assignee
Yuanguang Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yuanguang Software Co Ltd filed Critical Yuanguang Software Co Ltd
Priority to CN202011488716.2A priority Critical patent/CN112612584A/en
Publication of CN112612584A publication Critical patent/CN112612584A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485Task life-cycle, e.g. stopping, restarting, resuming execution

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to the field of task scheduling and discloses a task scheduling method, apparatus, storage medium, and electronic device. The method of the present application comprises: receiving a task set, wherein the task set comprises a plurality of tasks; judging whether the task set is located in a job waiting queue; if not, packaging the task set to obtain a job to be processed; judging whether the job to be processed is located in a job running queue; if not, further judging whether the job running queue has enough remaining space; if so, putting the job to be processed into the job running queue; and taking the job to be processed out of the job running queue in first-in first-out order and executing the job to be processed.

Description

Task scheduling method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of task scheduling, and in particular, to a task scheduling method, apparatus, storage medium, and electronic device.
Background
In a big data multidimensional analysis system, scheduled execution of tasks is a frequently used function, but the old task scheduling program supported only timed scheduling of a single task on a single node. Many current big data multidimensional analysis projects urgently need a scheduler that completes task scheduling efficiently, quickly, and accurately in a distributed, highly available, multi-threaded environment, so as to improve task scheduling efficiency.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present application is to provide a task scheduling method, a task scheduling device, a storage medium, and an electronic device, so as to improve task scheduling efficiency and reduce task congestion.
In a first aspect, the present application provides a task scheduling method, including:
receiving a task set; wherein the task set comprises a plurality of tasks;
judging whether the task set is located in a job waiting queue;
if not, packaging the task set to obtain a job to be processed;
judging whether the job to be processed is located in a job running queue;
if not, further judging whether the job running queue has enough remaining space;
if so, putting the job to be processed into the job running queue;
and taking the job to be processed out of the job running queue in first-in first-out order, and executing the job to be processed.
In a second aspect, the present application provides a task scheduling apparatus, including:
a transceiving unit for receiving a task set; wherein the task set comprises a plurality of tasks;
the processing unit is used for judging whether the task set is located in a job waiting queue;
if not, packaging the task set to obtain a job to be processed;
judging whether the job to be processed is located in a job running queue;
if not, further judging whether the job running queue has enough remaining space;
if so, putting the job to be processed into the job running queue;
and taking the job to be processed out of the job running queue in first-in first-out order, and executing the job to be processed.
In a third aspect, the present application proposes a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 7.
In a fourth aspect, the present application proposes a task scheduling apparatus comprising a processor and a memory, the memory being configured to store a computer program or instructions, the processor being configured to execute the computer program or instructions in the memory to implement the method according to any one of claims 1 to 7.
When a task set is received and the task set is not located in the job waiting queue, the task set is packaged into a job to be processed; when the job to be processed is not located in the job running queue and the job running queue has enough space, the job to be processed is put into the job running queue, and the job to be processed is then taken out in first-in first-out order for execution.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
FIG. 1 is a schematic diagram of an architecture of a task scheduling system according to an embodiment of the present application;
fig. 2 is a flowchart illustrating a task scheduling method according to an embodiment of the present application;
fig. 3 is another schematic flowchart of a task scheduling method according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of executing a job according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a task scheduling apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a task scheduling apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the embodiments of the present application more obvious and understandable, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
Referring to fig. 1, a schematic structural diagram of a task scheduling system provided in an embodiment of the present application is shown, where the task scheduling system includes the following modules:
a service interface, i.e., an interface for communicating with an external device or apparatus, where the type of the interface includes, but is not limited to, a RESTful interface, a WebService interface, a JDBC (Java Database Connectivity) interface, or an ODBC (Open Database Connectivity) interface. The service interface module is further used for task management and for task log and state queries. Task management mainly provides the following operations on Job, stage, and task: execute, re-execute, force stop, and cancel. The task log is mainly used for viewing the xxl-job scheduling log of a Job and the task execution log in the executor. State queries are mainly used for viewing history execution records, the attributes and membership of Job, stage, and task, and information such as the time and state of task execution.
For a Job, the viewable execution information includes, for example: task state, task name, task Chinese name, instance ID, start time, end time, and owner. Available operations include: execute, re-execute, force stop, and cancel; drilling down shows the execution status of its stages.
For a stage, the viewable execution information and operations are the same; drilling up shows the execution status of the Job, and drilling down shows the execution status of its tasks.
For a task, the viewable execution information and operations are likewise the same; drilling up shows the execution status of the stage.
And the platform service is used for task management and task monitoring.
The core functions of the task scheduling system mainly include:
firstly, a plurality of tasks with dependency relationship can be processed:
(1) multiple tasks can be submitted at a time;
(2) the execution sequence can be customized;
(3) the execution sequence is as follows: and executing each stage, wherein tasks in the stages are parallel, and the stages are serial.
Secondly, maximum parallelism:
(1) multiple Jobs execute in parallel, and multiple tasks in a stage execute in parallel;
(2) the lineage calculator, the executor, and the controller run in parallel;
(3) executors are deployed in a distributed manner, and tasks across executors run in parallel.
and performing at least repeatedly:
(1) a new Job enters a waiting queue, and the same Job is deduplicated in the waiting queue;
(2) when a certain task in operation is submitted again to the operation request, the repeated execution is not performed any more;
(3) if a bloodborder calculator is used to encapsulate a Job, duplicate tasks in the same Job can be avoided.
Third, minimal negative impact:
(1) the task in a certain Job fails to execute or skips to execute, and the same task in other Jobs cannot be influenced;
(2) an "execution failure" or "execution skip" for a task will only result in the "execution skip" being dependent on its own task for which
It does not correlate the execution of the task and does not affect it.
Fourthly, minimum coupling:
(1) each Job is executed in a respective thread, and multiple Jobs containing the same task run independently without sensing each other and considering no task
Repeated execution or failure;
(2) each key program runs independently without direct communication, the shared data in the program is less, and the possibility of deadlock is avoided
(3) The result data are respectively stored in a redis database, and the required program goes to the redis database to acquire the data.
Fifthly, the procedure is robust:
(1) exception capture, fault tolerance handling, failure retry, timeout handling
(2) The key data are all persistent, and the server can automatically continue to execute after being restarted;
(3) the execution condition can be checked on the interface, and operations such as're-execution', 'task cancellation', 'forced stop' and the like are carried out;
(4) distributed high availability deployment can be adopted to solve the single point problem.
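The "execute at most once" behavior of the waiting queue described above can be sketched as a FIFO queue backed by a membership set. The class and method names below are illustrative assumptions, not code from the patent:

```python
from collections import deque

class DedupWaitingQueue:
    """FIFO Job waiting queue that drops duplicate jobs (illustrative sketch)."""

    def __init__(self):
        self._queue = deque()
        self._members = set()

    def offer(self, job_id):
        """Enqueue job_id unless an identical job is already waiting."""
        if job_id in self._members:
            return False  # duplicate Job: deduplicated, not enqueued again
        self._queue.append(job_id)
        self._members.add(job_id)
        return True

    def poll(self):
        """Dequeue in first-in first-out order; None when the queue is empty."""
        if not self._queue:
            return None
        job_id = self._queue.popleft()
        self._members.discard(job_id)
        return job_id
```

Once a Job has been dequeued, an identical Job may be submitted again; only concurrent duplicates in the waiting queue are suppressed.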
The core functions mainly rely on a master control engine, a scheduling engine, and a lineage calculation engine.
First, the master control engine mainly has the following functions:
1. Control the Job queues:
(1) control the Job waiting queue and deduplicate Jobs that have not yet executed;
(2) control the Job running queue and manage the number of Jobs running in parallel.
2. Control execution:
(1) if a parent task fails to execute, execution of its child tasks is skipped;
(2) tasks within a stage are parallel, and stages are serial.
3. Control state:
check task execution results in redis and update the states of task, stage, and Job.
4. Control external operations:
operations such as "re-execute", "cancel task", and "force stop" are controlled.
Secondly, the main functions of the scheduling engine are:
Various scheduling modes: multiple scheduling modes are supported, such as hash, first, last, etc.; when the same task is encountered, it is returned to the queue to be executed.
Distributed execution: the same task can be deployed on multiple executors and executed according to the scheduling mode.
Automatic executor registration: executors are automatically registered into the executor list.
Whole-process monitoring: the complete scheduling and execution logs are recorded.
Thirdly, the lineage calculation engine mainly has the following functions:
Acquiring association relations: obtain the associations among all tasks through the callback functions of child nodes combined with a depth-first traversal algorithm.
Acquiring all routes: obtain the routes formed by all tasks through a routing algorithm.
Obtaining the relation between maximum step length and task: find the maximum step length of each task across all routes, and convert it into a Map from maximum step length to tasks.
Packaging the Job: put the tasks with the same maximum step length into the same stage, in order of maximum step length from small to large.
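A minimal sketch of the "maximum step length" packaging rule follows, assuming the dependency graph is given as a map from each task to its set of parent tasks. The function name and data representation are hypothetical; the patent itself describes the rule only in prose:

```python
from collections import defaultdict

def package_job(dependencies):
    """Group tasks into stages by 'maximum step length': the length of the
    longest dependency chain ending at each task. `dependencies` maps a
    task to the set of its parent tasks; tasks with no entry are roots."""
    tasks = set(dependencies)
    for parents in dependencies.values():
        tasks.update(parents)

    memo = {}
    def max_step(task):
        # longest route from any root to this task (0 for root tasks)
        if task not in memo:
            parents = dependencies.get(task, set())
            memo[task] = 0 if not parents else 1 + max(max_step(p) for p in parents)
        return memo[task]

    # Map of maximum step length -> tasks, then stages ordered small to large
    stages = defaultdict(list)
    for task in tasks:
        stages[max_step(task)].append(task)
    return [sorted(stages[step]) for step in sorted(stages)]
```

Because every task in a stage has the same maximum step length, no task in a stage can depend on another task in the same stage, which is what allows the intra-stage parallelism described above.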
Among other things, zookeeper can provide reliable publish/subscribe, coordination/notification, distributed locks, and other functions for distributed applications. In the present invention, zookeeper is used to centrally store the orchestration information and state information of tasks, which solves the problem that, in a highly available deployment, different programs see inconsistent task information and states. It thus avoids the problems of tasks being executed repeatedly and states being modified out of order when the same task is processed by schedulers on multiple nodes or by different threads within the same scheduler.
The task scheduling system in the present application may include one or more servers, where a server may be a rack server, a blade server, or a tower server, and the server may be an independent server or a server cluster formed by a plurality of servers.
Referring to fig. 2, fig. 2 is a schematic flowchart of a task scheduling method according to an embodiment of the present application, where the method includes, but is not limited to, the following steps:
s201, receiving a task set.
The task set comprises a plurality of tasks which are arranged in advance, and the tasks can form a directed acyclic graph.
S202, judging whether the task set is located in the job waiting queue.
The job waiting queue comprises a plurality of jobs to be executed, each job comprising a plurality of tasks; it is judged whether the plurality of tasks included in the task set are located in the job waiting queue. Jobs in the job waiting queue are enqueued and dequeued in a first-in first-out manner.
S203, if not, packaging the task set to obtain a job to be processed.
If the task set is not located in the job waiting queue, the task set is packaged to obtain a job to be processed; that is, the lineage calculation engine in fig. 1 is called to package the plurality of tasks in the task set into an integral job.
And S204, judging whether the job to be processed is positioned in the job running queue.
The job running queue comprises a plurality of running jobs, and the jobs in the job running queue are dequeued and enqueued in a first-in first-out mode.
And S205, if not, continuously judging whether the job running queue has enough residual space.
If the job to be processed is not located in the job running queue, it is judged whether the job running queue has enough remaining space. All jobs in the job running queue have the same size, i.e., it is judged whether the job running queue has at least one slot that can store a job.
S206, if so, putting the job to be processed into the job running queue.
And putting the job to be processed into the job running queue in a first-in first-out mode.
S207, taking the job to be processed out of the job running queue in first-in first-out order, and executing the job to be processed.
And when the job to be processed is positioned at the queue head of the job running queue, taking the job to be processed out of the job running queue, and executing the job to be processed through the executor.
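The flow of steps S201 through S207 can be sketched roughly as follows. The capacity constant, the function names, and the use of a frozenset of task names as the packaged job are all assumptions made for illustration:

```python
from collections import deque

JOB_RUN_CAPACITY = 4      # illustrative limit on the job running queue

waiting_queue = deque()   # jobs waiting for a free slot, FIFO
running_queue = deque()   # jobs admitted to run, FIFO

def submit(task_set):
    """Sketch of S201-S207: dedupe against the waiting queue, package the
    task set as a job, then admit it to the running queue when there is room."""
    job = frozenset(task_set)                    # S203: package the task set
    if job in waiting_queue:                     # S202: already waiting
        return "deduplicated"
    if job in running_queue:                     # S204: already running
        return "already-running"
    if len(running_queue) >= JOB_RUN_CAPACITY:   # S205: not enough space
        waiting_queue.append(job)
        return "queued-waiting"
    running_queue.append(job)                    # S206: enqueue at the tail
    return "admitted"

def take_next():
    """S207: take the job at the head of the running queue (FIFO)."""
    return running_queue.popleft() if running_queue else None
```

A real implementation would also track job state and hand the dequeued job to an executor; this sketch only shows the queue discipline.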
By implementing this embodiment, when a task set is received and the task set is not located in the job waiting queue, the task set is packaged into a job to be processed; when the job to be processed is not located in the job running queue and the job running queue has enough space, the job to be processed is put into the job running queue, and then taken out in first-in first-out order for execution.
Referring to fig. 3, another schematic flow chart of a task scheduling method provided in an embodiment of the present application is shown, where in the embodiment of the present application, the method includes, but is not limited to, the following steps:
s301, receiving a task set.
The task set comprises a plurality of tasks which are arranged in advance, and the tasks can form a directed acyclic graph.
S302, judging whether the task set is located in the job waiting queue.
If the judgment result of the step S302 is yes, executing a step S303; if the determination result in S302 is no, S304 is executed.
And S303, updating the time information of the job waiting queue.
The time information represents the waiting time in the job waiting queue; when the task set is already located in the job waiting queue, the corresponding job to be processed can later be taken out of the job waiting queue directly.
And S304, packaging the task set to obtain a job to be processed.
The packaging comprises putting tasks with the same maximum step length into the same stage, in order of maximum step length from small to large; that is, each stage of the job to be processed comprises the tasks with the same maximum step length.
S305, judging whether the job to be processed is positioned in the job running queue.
If the judgment result of S305 is yes, S306 is executed; if the judgment result of S305 is no, S307 is executed.
S306, putting the job to be processed into the job waiting queue.
S307, judging whether the job running queue has enough residual space.
If the judgment result in S307 is yes, S308 is executed; if the determination result in S307 is no, S306 is executed. Enough remaining space represents storage space that can store the job to be processed.
And S308, putting the job to be processed into a job running queue.
And according to the first-in first-out sequence, placing the job to be processed at the tail of the job running queue.
S309, taking out the job to be processed from the job running queue according to the first-in first-out sequence, and executing the job to be processed.
When the job to be processed is located at the queue head of the job running queue, the job to be processed is taken out from the job running queue, and the job to be processed is executed, and the process of executing the job to be processed can be seen in fig. 4.
S310, determining that execution of the job to be processed is finished.
And determining whether the job to be processed is executed or not according to the status flag bit of the job to be processed.
S311, judging whether the job waiting queue is an empty queue or not.
Wherein an empty queue means that the number of jobs in the job waiting queue is zero.
And S312, ending the flow.
S313, taking one job out of the job waiting queue in first-in first-out order and putting it into the job running queue.
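Steps S310 through S313 can be sketched as a completion handler that frees a slot in the running queue and promotes the head of the waiting queue. Queue contents and the function name are illustrative assumptions:

```python
from collections import deque

waiting_queue = deque(["job-2", "job-3"])  # jobs waiting to run, FIFO
running_queue = deque(["job-1"])           # jobs currently admitted

def on_job_finished(job):
    """Sketch of S310-S313: when a job finishes, remove it from the running
    queue; if the waiting queue is not empty, promote its head job into the
    running queue in first-in first-out order."""
    running_queue.remove(job)            # S310: execution finished
    if not waiting_queue:                # S311: is the waiting queue empty?
        return None                      # S312: end of flow
    promoted = waiting_queue.popleft()   # S313: take the head job...
    running_queue.append(promoted)       # ...and put it into the running queue
    return promoted
```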
Further, referring to fig. 4, a schematic flow diagram for executing a job according to an embodiment of the present application is provided, where in the embodiment, the method for executing a job includes:
s401, determining a plurality of stages included by the job to be processed.
The relationship among Job, stage, and task is as follows: one Job comprises a plurality of stages, and the plurality of stages are executed in parallel; one stage comprises a plurality of tasks, and among the tasks in the same stage, those with a dependency relationship are executed serially while those without a dependency relationship are executed in parallel.
S402, creating a thread for the job to be processed, and creating a thread for each stage.
S403, determining the current task of the current stage in the plurality of stages.
The job to be processed comprises a plurality of stages, which are executed in sequence; each of the plurality of stages is taken in turn as the current stage until all stages in the job have finished executing. One stage comprises a plurality of tasks, and each of the plurality of tasks is taken in turn as the current task until all tasks in the stage have been executed.
S404, judging whether the current task is located in the running task list.
S405, judging whether the current task has a parent task which fails to be executed.
The multiple tasks included by the current stage form a directed acyclic graph, the multiple tasks have a hierarchical structure, the highest task is a root task, and the father task is a previous task of the current task.
S406, judging whether the current task and the parent task that failed to execute are located in the same Job.
And S407, exiting the execution of the current stage.
S408, judging whether the current task has a parent task which is not executed or partially executed.
And S409, judging whether the current task and the parent task that is unexecuted or partially executed are located in the same Job.
And S410, adding the current task into the running task list.
And S411, executing the current task.
And S412, updating the current task state.
And updating the state of the current task to be a normal execution state.
And S413, removing the current task from the running task list.
And S414, judging whether the current task execution is successful.
And judging whether the current task is successfully executed according to the state flag bit of the current task.
S415, the next task of the current task is taken.
Wherein, the next task taken out is used as the current task of the current stage, and then the flowchart of fig. 4 is continued until all the tasks in the current stage are successfully executed.
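A simplified, single-threaded sketch of the per-stage loop in fig. 4 follows (the real system creates a thread per stage). The helper names, status strings, and the shape of the `parents` map are assumptions for illustration:

```python
def run_stage(stage_tasks, parents, status, running_list, execute):
    """Simplified sketch of S403-S415: execute the tasks of one stage,
    skipping tasks already in the running-task list and exiting the stage
    when a task's parent task has failed. `status` maps task -> state."""
    for task in stage_tasks:
        if task in running_list:                 # S404: already running elsewhere
            continue
        if any(status.get(p) == "failed" for p in parents.get(task, [])):
            status[task] = "skipped"             # S405/S406: parent failed
            return False                         # S407: exit the current stage
        running_list.add(task)                   # S410: join running-task list
        try:
            execute(task)                        # S411: execute the current task
            status[task] = "success"             # S412: update the task state
        except Exception:
            status[task] = "failed"              # S414: execution failed
        finally:
            running_list.discard(task)           # S413: leave running-task list
    return True                                  # S415 loop exhausted: stage done
```

The checks for unexecuted or partially executed parents (S408/S409) are omitted here to keep the sketch short; they would follow the same pattern as the failed-parent check.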
By implementing this embodiment, when a task set is received and the task set is not located in the job waiting queue, the task set is packaged into a job to be processed; when the job to be processed is not located in the job running queue and the job running queue has enough space, the job to be processed is put into the job running queue, and then taken out in first-in first-out order for execution.
The foregoing has described a task scheduling method in the embodiment of the present application in detail, and a task scheduling apparatus (hereinafter referred to as apparatus 5) in the embodiment of the present application is provided below.
The device 5 shown in fig. 5 may implement the task scheduling method of the embodiments shown in fig. 2 to fig. 4, where the device 5 includes: a transceiving unit 501 and a processing unit 502.
A transceiving unit 501, configured to receive a task set; wherein the task set comprises a plurality of tasks;
a processing unit 502, configured to determine whether the task set is located in a job waiting queue;
if not, package the task set to obtain a job to be processed;
judge whether the job to be processed is located in a job running queue;
if not, further judge whether the job running queue has enough remaining space;
if so, put the job to be processed into the job running queue;
and take the job to be processed out of the job running queue in first-in first-out order, and execute the job to be processed.
In one or more embodiments, the processing unit 502 executing the job to be processed includes:
determining a plurality of stages included by the job to be processed;
creating a thread for the job to be processed and creating a thread for each stage;
determining a current task of a current stage in the plurality of stages;
judging whether the current task is located in an operation task list or not;
if not, judging whether the current task has a parent task which fails to be executed;
if not, judging whether the current task has a parent task which is not executed or partially executed;
if not, adding the current task into the running task list;
executing the current task;
updating the state of the current task;
judging whether the current task is executed successfully;
if yes, taking out the next task from the current stage.
In one or more embodiments, the processing unit 502 is further configured to:
put the job to be processed into the job waiting queue when the job to be processed is located in the job running queue; or
put the job to be processed into the job waiting queue when the job running queue does not have enough remaining space.
In one or more embodiments, the processing unit 502 is further configured to:
after the execution of the job to be processed is finished, judging whether the job waiting queue is an empty queue or not;
if so, ending the process;
and if not, taking out one job from the job waiting queue according to a first-in first-out mode, and putting the taken-out job into the job running queue.
In one or more embodiments, the processing unit 502 is further configured to:
when the current task has a parent task that failed to execute, judge whether the current task and the failed parent task belong to the same Job; if so, end execution of the current stage; if not, execute the step of judging whether the current task has a parent task that is unexecuted or partially executed.
In one or more embodiments, the processing unit 502 is further configured to:
when the current task has a parent task that is unexecuted or partially executed, judge whether the current task and that parent task belong to the same Job; if so, end execution of the current stage; if not, add the current task to the running task list.
In one or more embodiments, multiple stages belonging to the same Job execute in parallel; among multiple tasks belonging to the same stage, tasks with a dependency relationship are executed serially, and tasks without a dependency relationship are executed in parallel.
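The stage-internal serial/parallel rule above can be sketched with a thread pool that runs, in each wave, only the tasks whose dependencies have all completed: independent tasks run in parallel, while a task with a dependency waits serially for its parent. The function name and graph representation are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def run_stage_parallel(tasks, deps, execute):
    """Illustrative sketch: within one stage, tasks without a dependency
    relationship run in parallel; a task with a dependency runs serially
    after its parents. `deps` maps a task to the list of its parent tasks."""
    done = set()
    remaining = set(tasks)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # tasks whose parents have all finished are ready to run now
            ready = [t for t in remaining
                     if all(p in done for p in deps.get(t, []))]
            if not ready:
                raise ValueError("cyclic or unsatisfiable dependencies")
            list(pool.map(execute, ready))   # independent tasks in parallel
            done.update(ready)
            remaining.difference_update(ready)
    return done
```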
The embodiment of the present application and the method embodiments of fig. 2 to 4 are based on the same concept, and the technical effects brought by the embodiment are also the same, and the specific process may refer to the description of the method embodiments of fig. 2 to 4, and will not be described again here.
The device 5 may be a field-programmable gate array (FPGA), an application-specific integrated chip, a system on chip (SoC), a Central Processing Unit (CPU), a Network Processor (NP), a digital signal processing circuit, a Micro Controller Unit (MCU), or a Programmable Logic Device (PLD) or other integrated chips.
The foregoing has set forth in detail a task scheduling method according to an embodiment of the present application, and a task scheduling apparatus according to an embodiment of the present application is provided below.
Fig. 6 is a schematic structural diagram of a task scheduling apparatus provided in an embodiment of the present application, which is hereinafter referred to as an apparatus 6 for short, where the apparatus 6 may be integrated in a management server in the foregoing embodiment, as shown in fig. 6, the apparatus includes: memory 602, processor 601, transmitter 604, and receiver 603.
The memory 602 may be a separate physical unit, which may be connected to the processor 601, the transmitter 604, and the receiver 603 via a bus. The memory 602, processor 601, transmitter 604, and receiver 603 may also be integrated and implemented in hardware.
The transmitter 604 is used for transmitting signals and the receiver 603 is used for receiving signals.
The memory 602 is used for storing a program for implementing the above method embodiment, or various modules of the apparatus embodiment, and the processor 601 calls the program to execute the above operations of the embodiments in fig. 2 to 3.
Alternatively, when part or all of the task scheduling method of the above embodiments is implemented by software, the apparatus may include only a processor. In this case, the memory storing the program is located outside the apparatus, and the processor is connected to the memory through circuits/wires to read and execute the program stored in the memory.
The processor may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP.
The processor may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The memory may include volatile memory (volatile memory), such as random-access memory (RAM); the memory may also include a non-volatile memory (non-volatile memory), such as a flash memory (flash memory), a Hard Disk Drive (HDD) or a solid-state drive (SSD); the memory may also comprise a combination of memories of the kind described above.
In the above embodiments, the transmitting unit or the transmitter performs the steps of transmitting in the above respective method embodiments, the receiving unit or the receiver performs the steps of receiving in the above respective method embodiments, and other steps are performed by other units or processors. The transmitting unit and the receiving unit may constitute a transceiving unit, and the receiver and the transmitter may constitute a transceiver.
The embodiment of the present application further provides a computer storage medium, which stores a computer program, where the computer program is used to execute the task scheduling method provided by the foregoing embodiment.
The embodiment of the present application further provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the task scheduling method provided by the above embodiment.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Claims (10)

1. A method for task scheduling, comprising:
receiving a task set; wherein the task set comprises a plurality of tasks;
judging whether the task set is located in a job waiting queue;
if not, packaging the task set to obtain a to-be-processed job;
judging whether the to-be-processed job is located in a job running queue;
if not, further judging whether the job running queue has enough remaining space;
if so, putting the to-be-processed job into the job running queue;
and taking out the to-be-processed job from the job running queue in first-in first-out order, and executing the to-be-processed job.
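For illustration only (not part of the claims), the admission logic of claim 1 can be sketched in Python; the class, its method names, and the capacity parameter are all hypothetical, and for simplicity the task set is packaged into a job before the waiting-queue check rather than after it:

```python
from collections import deque

class JobScheduler:
    def __init__(self, run_capacity=4):
        self.waiting = deque()   # job waiting queue (FIFO)
        self.running = deque()   # job running queue (FIFO)
        self.capacity = run_capacity

    def submit(self, task_set):
        job = tuple(task_set)            # package the task set into a job
        if job in self.waiting:          # already waiting: do not enqueue again
            return "already waiting"
        if job in self.running:          # already running: do not enqueue again
            return "already running"
        if len(self.running) >= self.capacity:
            self.waiting.append(job)     # running queue full: put into waiting queue
            return "waiting"
        self.running.append(job)         # enough remaining space: put into running queue
        return "running"

    def next_job(self):
        # Jobs leave the running queue in first-in first-out order.
        return self.running.popleft() if self.running else None
```

With a capacity of 1, a second distinct job is diverted to the waiting queue, and duplicate submissions of either job are rejected.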
2. The method of claim 1, wherein the executing the to-be-processed job comprises:
determining a plurality of stages included by the job to be processed;
creating a thread for the job to be processed and creating a thread for each stage;
determining a current task of a current stage in the plurality of stages;
judging whether the current task is located in a running task list;
if not, judging whether the current task has a parent task whose execution failed;
if not, judging whether the current task has a parent task which is not executed or partially executed;
if not, adding the current task into the running task list;
executing the current task;
updating the state of the current task;
judging whether the current task is executed successfully;
if yes, taking out the next task from the current stage.
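For illustration only (not part of the claims), the gate-keeping checks of claim 2 before one task runs can be sketched in Python; the task/state representation and all names are hypothetical:

```python
def try_run_task(task, running_list, states):
    """Check a task against the running task list and its parents' states.

    `states` maps a task name to 'unexecuted', 'partial', 'failed', or
    'success'; an unknown parent is treated as not yet executed.
    """
    if task["name"] in running_list:
        return "skipped: already running"
    parents = task.get("parents", [])
    if any(states.get(p) == "failed" for p in parents):
        return "blocked: a parent failed"
    if any(states.get(p) in ("unexecuted", "partial", None) for p in parents):
        return "blocked: a parent is unfinished"
    running_list.append(task["name"])          # add to the running task list
    states[task["name"]] = "success" if task["fn"]() else "failed"
    return states[task["name"]]                # updated state of the current task
```

On success the caller would then take out the next task from the current stage.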
3. The method of claim 1, further comprising:
when the to-be-processed job is located in the job running queue, putting the to-be-processed job into the job waiting queue; or
when the job running queue does not have enough remaining space, putting the to-be-processed job into the job waiting queue.
4. The method of claim 1, further comprising:
after the execution of the to-be-processed job is finished, judging whether the job waiting queue is empty;
if so, ending the process;
and if not, taking out one job from the job waiting queue in first-in first-out order, and putting the taken-out job into the job running queue.
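For illustration only (not part of the claims), the follow-up step of claim 4 can be sketched in Python; the function name is hypothetical:

```python
from collections import deque

def on_job_finished(running, waiting):
    """After a job finishes: if the waiting queue is empty, end the process;
    otherwise promote the oldest waiting job (FIFO) into the running queue."""
    if not waiting:
        return None              # empty waiting queue: nothing to promote
    job = waiting.popleft()      # first-in first-out
    running.append(job)
    return job
```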
5. The method of claim 2, further comprising:
when the current task has a parent task whose execution failed, judging whether the current task and the failed parent task belong to the same job; if so, ending the execution of the current stage; and if not, executing the step of judging whether the current task has a parent task which is not executed or partially executed.
6. The method of claim 2, further comprising:
when the current task has a parent task which is not executed or partially executed, judging whether the current task and the unfinished parent task belong to the same job; if so, ending the execution of the current stage; and if not, adding the current task into the running task list.
7. The method according to any one of claims 1 to 6, characterized in that a plurality of stages belonging to the same job are executed in parallel; among multiple tasks belonging to the same stage, tasks with a dependency relationship are executed serially, and tasks without a dependency relationship are executed in parallel.
8. A task scheduling apparatus, comprising:
a transceiving unit for receiving a task set; wherein the task set comprises a plurality of tasks;
the processing unit is used for judging whether the task set is located in a job waiting queue;
if not, packaging the task set to obtain a to-be-processed job;
judging whether the to-be-processed job is located in a job running queue;
if not, further judging whether the job running queue has enough remaining space;
if so, putting the to-be-processed job into the job running queue;
and taking out the to-be-processed job from the job running queue in first-in first-out order, and executing the to-be-processed job.
9. A computer storage medium comprising a computer program or instructions which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 7.
10. An electronic device comprising a processor and a memory for storing a computer program or instructions, the processor being configured to execute the computer program or instructions in the memory to implement the method of any one of claims 1 to 7.
CN202011488716.2A 2020-12-16 2020-12-16 Task scheduling method and device, storage medium and electronic equipment Pending CN112612584A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011488716.2A CN112612584A (en) 2020-12-16 2020-12-16 Task scheduling method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011488716.2A CN112612584A (en) 2020-12-16 2020-12-16 Task scheduling method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN112612584A true CN112612584A (en) 2021-04-06

Family

ID=75239746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011488716.2A Pending CN112612584A (en) 2020-12-16 2020-12-16 Task scheduling method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112612584A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017101475A1 (en) * 2015-12-15 2017-06-22 深圳市华讯方舟软件技术有限公司 Query method based on spark big data processing platform
CN107168779A (en) * 2017-03-31 2017-09-15 咪咕互动娱乐有限公司 A kind of task management method and system
US20180349178A1 (en) * 2014-08-14 2018-12-06 Import.Io Limited A method and system for scalable job processing
CN110008013A (en) * 2019-03-28 2019-07-12 东南大学 A kind of Spark method for allocating tasks minimizing operation completion date
US20200034203A1 (en) * 2018-07-30 2020-01-30 Lendingclub Corporation Distributed job framework and task queue
CN110928653A (en) * 2019-10-24 2020-03-27 浙江大搜车软件技术有限公司 Cross-cluster task execution method and device, computer equipment and storage medium
CN110928655A (en) * 2019-11-11 2020-03-27 深圳前海微众银行股份有限公司 Task processing method and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180349178A1 (en) * 2014-08-14 2018-12-06 Import.Io Limited A method and system for scalable job processing
WO2017101475A1 (en) * 2015-12-15 2017-06-22 深圳市华讯方舟软件技术有限公司 Query method based on spark big data processing platform
CN107168779A (en) * 2017-03-31 2017-09-15 咪咕互动娱乐有限公司 A kind of task management method and system
US20200034203A1 (en) * 2018-07-30 2020-01-30 Lendingclub Corporation Distributed job framework and task queue
CN110008013A (en) * 2019-03-28 2019-07-12 东南大学 A kind of Spark method for allocating tasks minimizing operation completion date
CN110928653A (en) * 2019-10-24 2020-03-27 浙江大搜车软件技术有限公司 Cross-cluster task execution method and device, computer equipment and storage medium
CN110928655A (en) * 2019-11-11 2020-03-27 深圳前海微众银行股份有限公司 Task processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
华章出版社 (Huazhang Press): "《循序渐进学Spark》 (Learning Spark Step by Step) - 3.2 Spark Scheduling Mechanism", pages 1 - 2, Retrieved from the Internet <URL:https://developer.aliyun.com/article/87609> *

Similar Documents

Publication Publication Date Title
CN106802826B (en) Service processing method and device based on thread pool
CN107957903B (en) Asynchronous task scheduling method, server and storage medium
US6643802B1 (en) Coordinated multinode dump collection in response to a fault
US20130160028A1 (en) Method and apparatus for low latency communication and synchronization for multi-thread applications
CN101996214B (en) Method and device for processing database operation request
CN111782360A (en) Distributed task scheduling method and device
CN107608860B (en) Method, device and equipment for classified storage of error logs
CN113760513A (en) Distributed task scheduling method, device, equipment and medium
CN106020984B (en) Method and device for creating process in electronic equipment
CN112181748A (en) Concurrent test method, device, equipment and storage medium based on ring queue
CN112860412B (en) Service data processing method and device, electronic equipment and storage medium
CN116090382B (en) Time sequence report generation method and device
CN112612584A (en) Task scheduling method and device, storage medium and electronic equipment
CN111400073B (en) Formalized system model conversion and reliability analysis method from automobile open architecture system to unified software and hardware representation
CN112965782A (en) Intelligent monitoring method and device for Docker container, storage medium and electronic equipment
CN116450496A (en) System performance test method, device, computer equipment and storage medium
CN115499493A (en) Asynchronous transaction processing method and device, storage medium and computer equipment
CN109710275B (en) Software unloading system and method for distributed cluster
CN111352752B (en) System, method and device for processing semiconductor test data and server
Chen et al. Development of a cyber-physical-style continuous yield improvement system for manufacturing industry
CN116579466B (en) Reservation method and reservation device in wafer processing process
CN114942801B (en) FSM-based application release task processing method and device and electronic equipment
CN112256409B (en) Task execution method and device based on multiple database accelerators
CN114968274B (en) Method and system for automatically and rapidly deploying front-end processor based on gray release
CN112860780B (en) Data export method and device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination