CN113986507A - Job scheduling method and device, computer equipment and storage medium - Google Patents

Job scheduling method and device, computer equipment and storage medium

Info

Publication number
CN113986507A
Authority
CN
China
Prior art keywords
job
scheduling
executed
jobs
determining
Prior art date
Legal status
Pending
Application number
CN202111281820.9A
Other languages
Chinese (zh)
Inventor
韦帅
莫兆忠
Current Assignee
Foshan Jiyan Zhilian Technology Co ltd
Original Assignee
Foshan Jiyan Zhilian Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Foshan Jiyan Zhilian Technology Co ltd filed Critical Foshan Jiyan Zhilian Technology Co ltd
Priority to CN202111281820.9A
Publication of CN113986507A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 - Indexing scheme relating to G06F 9/00
    • G06F 2209/50 - Indexing scheme relating to G06F 9/50
    • G06F 2209/5011 - Pool
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 - Indexing scheme relating to G06F 9/00
    • G06F 2209/50 - Indexing scheme relating to G06F 9/50
    • G06F 2209/5021 - Priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The embodiment of the invention discloses a job scheduling method and apparatus, a computer device, and a storage medium. The method comprises the following steps: constructing job scheduling policies for different business requirements and a thread pool for processing jobs in parallel; acquiring a set of jobs to be executed; determining the business requirement corresponding to each job in the set; determining the job scheduling policy corresponding to each job; sending each job and its job scheduling policy to a scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain a scheduling result; determining the priority level of each job in the set; and starting idle threads in the thread pool and executing the corresponding jobs in a dynamic proxy manner according to the scheduling result and the priority level of each job. By implementing the method of the embodiment of the invention, jobs can be processed in parallel by multiple threads, and the performance of job execution is improved according to the configured job scheduling policies.

Description

Job scheduling method and device, computer equipment and storage medium
Technical Field
The present invention relates to computers, and more particularly, to a job scheduling method, apparatus, computer device, and storage medium.
Background
In computer systems, a huge number of jobs need to be processed, and the existing approach is to process them in order with a scheduling algorithm. However, with current scheduling algorithms one job is executed by one thread, and the job scheduling policy is single and cannot satisfy different business requirements; for example, the policy may only start threads in order of job creation time and cannot execute jobs according to their importance. In addition, jobs are invoked by reflection at execution time, which easily degrades overall performance.
Therefore, it is necessary to design a new job scheduling method that processes jobs in parallel with multiple threads and improves execution performance according to configured job scheduling policies.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a job scheduling method, a job scheduling device, computer equipment and a storage medium.
To achieve this purpose, the invention adopts the following technical solution. The job scheduling method comprises the following steps:
constructing job scheduling policies for different business requirements and a thread pool for processing jobs in parallel;
acquiring a set of jobs to be executed;
determining the business requirement corresponding to each job in the set of jobs to be executed, to obtain the target business requirement corresponding to each job;
determining the job scheduling policy corresponding to each job according to the target business requirement corresponding to each job;
sending each job and its corresponding job scheduling policy to a scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain a scheduling result;
determining the priority level of each job in the set of jobs to be executed;
and starting idle threads in the thread pool, and executing the corresponding jobs in a dynamic proxy manner according to the scheduling result and the priority level of each job.
In a further technical solution, the job scheduling policy includes job identification numbers corresponding to different business requirements, scheduling center ID numbers corresponding to different business requirements, and the execution order of jobs with different priority levels.
In a further technical solution, the determining the business requirement corresponding to each job in the set of jobs to be executed to obtain the target business requirement corresponding to each job includes:
acquiring the identification number of each job in the set of jobs to be executed;
and determining the business requirement corresponding to the job according to the identification number, to obtain the target business requirement corresponding to each job.
In a further technical solution, the sending each job and its corresponding job scheduling policy to a scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain the scheduling result, includes:
sending each job and its corresponding job scheduling policy to the scheduling center cluster, where the scheduling center cluster splits the tables in the job database formed by the jobs using database and table sharding to form a plurality of shards, and sends the shards to the corresponding scheduling centers according to the shard numbers to execute the scheduling tasks in parallel, so as to obtain the scheduling result.
In a further technical solution, the determining the priority level of each job in the set of jobs to be executed includes:
dividing the jobs in the set of jobs to be executed into job sets of different levels according to the priority level of the business requirement;
and sorting the jobs in the job set of each level in descending order of their priority levels.
In a further technical solution, the starting idle threads in the thread pool and executing the corresponding jobs in a dynamic proxy manner according to the scheduling result and the priority level of each job includes:
acquiring idle threads in the thread pool to obtain available threads;
and dynamically calling the corresponding business service according to the job scheduling policy and scheduling result corresponding to each job and the priority level of each job, the business service executing the corresponding jobs in parallel in a dynamic proxy manner.
In a further technical solution, the scheduling result includes the ID number of the business service, in the service cluster, that executes the job.
The present invention also provides a job scheduling apparatus, including:
a construction unit, configured to construct job scheduling policies for different business requirements and a thread pool for processing jobs in parallel;
a set acquisition unit, configured to acquire a set of jobs to be executed;
a requirement determining unit, configured to determine the business requirement corresponding to each job in the set of jobs to be executed, to obtain the target business requirement corresponding to each job;
a policy determining unit, configured to determine the job scheduling policy corresponding to each job according to the target business requirement corresponding to each job;
a sending unit, configured to send each job and its corresponding job scheduling policy to a scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain a scheduling result;
a priority level determining unit, configured to determine the priority level of each job in the set of jobs to be executed;
and an execution unit, configured to start idle threads in the thread pool and execute the corresponding jobs according to the scheduling result and the priority level of each job.
The invention also provides a computer device, comprising a memory and a processor, where a computer program is stored in the memory and the processor implements the above method when executing the computer program.
The invention also provides a storage medium storing a computer program which, when executed by a processor, implements the method described above.
Compared with the prior art, the invention has the following beneficial effects: by constructing multiple job scheduling policies and a thread pool capable of processing jobs in parallel, the job scheduling policy for each job in the set of jobs to be executed is determined from its business requirement, the tasks are sharded and scheduled in parallel by the scheduling center cluster, and the jobs are then executed in a multithreaded, parallel, dynamic-proxy manner in combination with their priority levels, so that jobs are processed in parallel by multiple threads and the performance of job execution is improved according to the configured job scheduling policies.
The invention is further described below with reference to the accompanying drawings and specific embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of a job scheduling method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a job scheduling method according to an embodiment of the present invention;
fig. 3 is a sub-flow diagram of a job scheduling method according to an embodiment of the present invention;
fig. 4 is a sub-flow diagram of a job scheduling method according to an embodiment of the present invention;
fig. 5 is a sub-flow diagram of a job scheduling method according to an embodiment of the present invention;
fig. 6 is a schematic block diagram of a job scheduling apparatus according to an embodiment of the present invention;
fig. 7 is a schematic block diagram of a demand determining unit of the job scheduling apparatus provided by the embodiment of the present invention;
fig. 8 is a schematic block diagram of a priority level determining unit of a job scheduling apparatus according to an embodiment of the present invention;
fig. 9 is a schematic block diagram of an execution unit of a job scheduling apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic view of an application scenario of a job scheduling method according to an embodiment of the present invention, and fig. 2 is a schematic flowchart of the job scheduling method. The job scheduling method is applied to a server. The server exchanges data with a scheduling center cluster and a business service cluster. By constructing different job scheduling policies and a thread pool capable of executing jobs in parallel, the server determines the job scheduling policy and priority level of each job to be executed; the scheduling center cluster schedules the business service cluster, and the jobs are executed in parallel according to their priority levels. Reflection is not used when executing a job; instead, execution is performed through a dynamic proxy configured with preset parameters.
Fig. 2 is a flowchart illustrating a job scheduling method according to an embodiment of the present invention. As shown in fig. 2, the method includes the following steps S110 to S170.
S110, constructing job scheduling policies for different business requirements and a thread pool for processing jobs in parallel.
In this embodiment, the job scheduling policy includes job identification numbers corresponding to different business requirements, scheduling center ID numbers corresponding to different business requirements, and the execution order of jobs with different priority levels.
A thread pool refers to a collection of reusable threads.
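As a purely illustrative sketch (the patent does not prescribe a concrete implementation), such a thread pool could be built in Java roughly as follows; the pool sizes, queue capacity, and saturation policy are assumptions:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class JobThreadPool {
    // Hypothetical sizes; real values would come from the business-service configuration.
    private static final int CORE_THREADS = 8;
    private static final int MAX_THREADS = 16;
    private static final int QUEUE_CAPACITY = 256;

    public static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                CORE_THREADS,
                MAX_THREADS,
                60L, TimeUnit.SECONDS,                      // idle threads above the core size time out
                new ArrayBlockingQueue<>(QUEUE_CAPACITY),   // bounded queue of waiting jobs
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure when the pool is saturated
    }
}
```

CallerRunsPolicy is only one possible way to handle overflow; the patent does not specify how saturation is handled.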
In this embodiment, jobs are executed on a schedule. Scheduled jobs can be processed in parallel by introducing the klinx-jobstarter component into the pom file of the business service and by adding the related configuration of the scheduled-task module to the bootstrap XML configuration file of the business service.
A job refers to a scheduled (timed) task. The business service internally provides the job processor that actually executes the task logic. The scheduling center cluster comprises a plurality of scheduling centers, i.e., job schedulers, which specify a scheduling policy and call the specified job processor to execute the corresponding job; multiple job schedulers can call the same job processor, for example the same function, but with different task parameters. Each job has a unique identifier, defined by the business service; global uniqueness must be ensured (preferably by prefixing with the business service name), and the same JobKey is associated with the same job.
If a job has already been created and the service starts up and re-creates a job with the same identification number, the default policy performs an overwrite update. For example, if a task was previously created with a cron expression of once every 5 seconds and the code is now changed to every 10 seconds, restarting the service updates the trigger of the previous task to every 10 seconds. Information such as the parameters of the currently executing task is obtained through the job processing context.
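The patent does not name a scheduling library, but the JobKey and cron terminology matches a Quartz-style API. Under that assumption, a minimal sketch of the overwrite-update behaviour described above might look like this; the job name, group, and parameter key are hypothetical:

```java
import java.util.Set;

import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

public final class JobRegistrar {

    /** Hypothetical job processor; task parameters are read from the job processing context. */
    public static final class ReportJobProcessor implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            Object params = context.getMergedJobDataMap().get("params");
            System.out.println("executing with params: " + params);
        }
    }

    /** Registers a job under a business-prefixed JobKey; re-registering with the same key overwrites it. */
    public static void register(Scheduler scheduler, String cron) throws Exception {
        JobKey key = new JobKey("JSJ-sync-report", "timing-service"); // hypothetical name and group

        JobDetail detail = JobBuilder.newJob(ReportJobProcessor.class).withIdentity(key).build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity(key.getName() + "-trigger", key.getGroup())
                .withSchedule(CronScheduleBuilder.cronSchedule(cron)) // e.g. "0/10 * * * * ?"
                .build();

        // The boolean flag replaces any existing job/triggers with the same key,
        // i.e. the overwrite-update behaviour described above.
        scheduler.scheduleJob(detail, Set.of(trigger), true);
    }
}
```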
After the business services are uniformly wrapped by the core flow, the timing service implements the jobs corresponding to its own interfaces and other third-party interfaces. The execution flow of a job is as follows: the server determines the priority level and the job scheduling policy, feeds the job scheduling policy and the job back to the scheduling center cluster, and the scheduling center cluster schedules the business service cluster to process the jobs in parallel. The business services include, for example, eam service clusters and dc service clusters, and the service nodes within a business service can be scaled out according to actual conditions. The scheduling centers in the scheduling center cluster can also be scaled out: jobs are partitioned by tenant and routed to different scheduling center clusters, spreading out the total job volume.
S120, acquiring a set of jobs to be executed.
In this embodiment, the set of jobs to be executed refers to the set of jobs that need to be executed; it may contain one job or a plurality of jobs.
S130, determining the business requirement corresponding to each job in the set of jobs to be executed, to obtain the target business requirement corresponding to each job.
In this embodiment, the target business requirement refers to the business requirement corresponding to each job.
In an embodiment, referring to fig. 3, the step S130 may include steps S131 to S132.
S131, acquiring the identification number of each job in the set of jobs to be executed;
S132, determining the business requirement corresponding to the job according to the identification number, to obtain the target business requirement corresponding to each job.
Different jobs can be constructed according to different business requirements and distinguished by their job identification numbers. Each job has one job identification number, and the business requirement is determined by the first three characters of the identification number; for example, the identification numbers of all jobs of the timing-service business requirement start with JSJ. The prefix for each business requirement can be set according to the actual situation, and once set it forms a mapping table, so that the business requirement corresponding to a job can be looked up conveniently.
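A minimal sketch of such a prefix lookup is shown below; the JSJ prefix comes from the text, while the other prefixes and requirement names are invented for illustration:

```java
import java.util.Map;
import java.util.Optional;

public final class BusinessRequirementResolver {

    // Mapping table from job-ID prefix to business requirement.
    // "JSJ" appears in the text; the other entries are hypothetical examples.
    private static final Map<String, String> PREFIX_TO_REQUIREMENT = Map.of(
            "JSJ", "timing-service",
            "EAM", "asset-management",
            "DCX", "data-collection");

    /** Returns the target business requirement derived from the first three characters of the job ID. */
    public static Optional<String> resolve(String jobId) {
        if (jobId == null || jobId.length() < 3) {
            return Optional.empty();
        }
        return Optional.ofNullable(PREFIX_TO_REQUIREMENT.get(jobId.substring(0, 3)));
    }
}
```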
By adding more job scheduling policies according to the business requirements, and combining this with capacity expansion of the scheduling center cluster and the business service cluster, large batches of jobs with different business requirements can be processed with high efficiency and broad applicability.
S140, determining the job scheduling policy corresponding to each job according to the target business requirement corresponding to each job.
S150, sending each job and its corresponding job scheduling policy to a scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain a scheduling result.
In this embodiment, the scheduling result includes the ID number of the business service, in the service cluster, that executes the job.
Specifically, each job and its corresponding job scheduling policy are sent to the scheduling center cluster; the scheduling center cluster splits the tables in the job database formed by the jobs using database and table sharding to form a plurality of shards, and sends the shards to the corresponding scheduling centers according to the shard numbers to execute the scheduling tasks in parallel, thereby obtaining the scheduling result.
When the scheduling center executes a service, scheduling needs to be performed according to the content of the scheduling annotation. This embodiment integrates SpEL (Spring Expression Language) expressions, so that the scheduling annotation can be configured with dynamic parameters; the dynamic parameters are set according to different operating conditions in order to adapt to emergencies.
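As an illustration only (the patent does not show the annotation itself), dynamic parameters can be evaluated with Spring's SpEL parser; the variable name, expression, and threshold below are assumptions:

```java
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;

public final class DynamicParamDemo {

    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();

        // Hypothetical runtime context: current load drives the batch size of a scheduled job.
        StandardEvaluationContext ctx = new StandardEvaluationContext();
        ctx.setVariable("load", 0.85);

        // The expression string would normally come from the scheduling annotation's attribute.
        Integer batchSize = parser.parseExpression("#load > 0.8 ? 100 : 500").getValue(ctx, Integer.class);

        System.out.println("batch size for this run: " + batchSize); // prints 100 under high load
    }
}
```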
In this embodiment, by integrating ShardingSphere, the job table of the scheduling center's database can be split, so that multiple scheduling centers schedule tasks in parallel by shard, improving the working efficiency of the scheduling centers.
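The text leaves the shard-routing rule unspecified; a simple sketch of dispatching shards to scheduling centers by shard number, assuming a plain modulo mapping, could look like this:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public final class ShardDispatcher {

    /** A shard of the job table, identified by its number. */
    public record Shard(int number, List<String> jobIds) {}

    /**
     * Groups shards by target scheduling center using shardNumber % centerCount.
     * The modulo rule is an assumption; any deterministic mapping from shard number
     * to scheduling center would fit the description in the text.
     */
    public static Map<Integer, List<Shard>> assign(List<Shard> shards, int centerCount) {
        return shards.stream()
                .collect(Collectors.groupingBy(s -> s.number() % centerCount));
    }

    public static void main(String[] args) {
        List<Shard> shards = List.of(
                new Shard(0, List.of("JSJ-001", "JSJ-004")),
                new Shard(1, List.of("JSJ-002")),
                new Shard(2, List.of("JSJ-003", "JSJ-005")));
        // Three shards spread over two scheduling centers, so each center works in parallel on its share.
        System.out.println(assign(shards, 2));
    }
}
```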
S160, determining the priority level of each job in the set of jobs to be executed.
In this embodiment, the priority level refers to the execution priority and importance level corresponding to each job.
In an embodiment, referring to fig. 4, the step S160 may include steps S161 to S162.
S161, dividing the jobs in the set of jobs to be executed into job sets of different levels according to the priority level of the business requirement.
In this embodiment, mapping tables of the priority levels corresponding to different business requirements are pre-configured. In combination with the target business requirement of each job obtained in S130, the set of jobs to be executed is divided into job sets of different levels according to the mapping tables, where the level of each job set is determined by its corresponding business requirement.
S162, sorting the jobs in the job set of each level in descending order of their priority levels.
After the levels of the job sets are determined, the jobs are sorted according to the specific priority level of each job. Using these two priority levels to determine the execution order allows a large number of jobs to be executed in a more orderly way. The priority level is also used to judge, after a period of time, whether scheduling has failed and whether a job that failed to execute needs to be redone; this rule can be set according to the actual situation.
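A minimal sketch of this dual-priority ordering (job-set level first, then per-job priority), with an invented Job record used only for illustration:

```java
import java.util.Comparator;
import java.util.List;

public final class DualPriorityOrder {

    /** Hypothetical job view: setLevel comes from the business requirement, jobPriority from the job itself. */
    public record Job(String id, int setLevel, int jobPriority) {}

    // Higher values mean higher priority; both keys are sorted from high to low.
    public static final Comparator<Job> EXECUTION_ORDER =
            Comparator.comparingInt(Job::setLevel).reversed()
                      .thenComparing(Comparator.comparingInt(Job::jobPriority).reversed());

    public static void main(String[] args) {
        List<Job> jobs = List.of(
                new Job("JSJ-002", 1, 9),
                new Job("JSJ-001", 2, 3),
                new Job("JSJ-003", 2, 7));
        // Prints JSJ-003, JSJ-001, JSJ-002: level 2 before level 1, higher priority first within a level.
        jobs.stream().sorted(EXECUTION_ORDER).forEach(j -> System.out.println(j.id()));
    }
}
```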
S170, starting idle threads in the thread pool, and executing the corresponding jobs in a dynamic proxy manner according to the scheduling result and the priority level of each job.
In an embodiment, referring to fig. 5, the step S170 may include steps S171 to S172.
S171, acquiring idle threads in the thread pool to obtain available threads.
In this embodiment, an available thread refers to a thread in the idle state in the thread pool; each time a thread finishes a job, its state is set back to idle, and a thread that is executing a job is set to the non-idle state. This makes it quick to identify the threads available to execute jobs, and processing efficiency is improved by executing jobs on multiple threads in parallel.
For jobs with a large processing volume, multiple threads can process the job in parallel to shorten the time the whole job takes.
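Building on the thread-pool sketch above, a rough illustration of checking for idle capacity and submitting jobs in parallel; the helper names are assumptions, not the patent's implementation:

```java
import java.util.List;
import java.util.concurrent.ThreadPoolExecutor;

public final class JobSubmitter {

    /** True if at least one pooled thread is currently idle (pool statistics are approximate by nature). */
    public static boolean hasIdleThread(ThreadPoolExecutor pool) {
        return pool.getActiveCount() < pool.getMaximumPoolSize();
    }

    /** Submits jobs, already sorted by priority, so that idle threads pick them up in parallel. */
    public static void submitAll(ThreadPoolExecutor pool, List<Runnable> jobsInPriorityOrder) {
        for (Runnable job : jobsInPriorityOrder) {
            pool.execute(job); // runs immediately on an idle thread, otherwise queued
        }
    }
}
```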
S172, dynamically calling the corresponding business service according to the job scheduling policy and scheduling result corresponding to each job and the priority level of each job, the business service executing the corresponding jobs in parallel in a dynamic proxy manner.
In this embodiment, the scheduling logic is embedded into the business service's original Spring service, or a long-lived connection is opened to receive notifications, which removes the redundant HTTP server, avoids having to maintain an extra port in the service configuration, and preserves HTTP concurrency and real-time performance. The job execution process and the business service share one registration center.
The job scheduling policy determines the execution order of jobs with different priority levels, and the scheduling result determines the ID of the business service that executes the job; by combining these with the priority level of each job, the corresponding business service can be called to execute the corresponding job. The dynamic proxy mechanism is based on Java dynamic proxies.
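A minimal sketch of calling a business service through a JDK dynamic proxy rather than per-call reflection; the interface and the cross-cutting logic are invented for illustration and are not the patent's concrete implementation:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public final class DynamicProxyDemo {

    /** Hypothetical business-service interface resolved from the scheduling result. */
    public interface BusinessService {
        void executeJob(String jobId);
    }

    /** Wraps a concrete service so that routing, timing, or retries can be added in one place. */
    public static BusinessService proxyFor(BusinessService target) {
        InvocationHandler handler = (Object proxy, Method method, Object[] args) -> {
            // Cross-cutting logic (e.g. routing by business-service ID) would go here.
            System.out.println("dispatching " + method.getName() + " to " + target.getClass().getSimpleName());
            return method.invoke(target, args);
        };
        return (BusinessService) Proxy.newProxyInstance(
                BusinessService.class.getClassLoader(),
                new Class<?>[] { BusinessService.class },
                handler);
    }

    public static void main(String[] args) {
        BusinessService real = jobId -> System.out.println("running job " + jobId);
        proxyFor(real).executeJob("JSJ-001");
    }
}
```

Compared with reflective invocation at every call site, the proxy is created once and keeps the cross-cutting concerns in one handler, which is consistent with the performance motivation stated in the background section.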
In the job scheduling method described above, multiple job scheduling policies and a thread pool capable of processing jobs in parallel are constructed, the job scheduling policy for each job in the set of jobs to be executed is determined from its business requirement, the tasks are sharded and scheduled in parallel by the scheduling center cluster, and the jobs are executed in a multithreaded, parallel, dynamic-proxy manner in combination with their priority levels. In this way, jobs are processed in parallel by multiple threads, and the performance of job execution is improved according to the configured job scheduling policies.
Fig. 6 is a schematic block diagram of a job scheduling apparatus 300 according to an embodiment of the present invention. As shown in fig. 6, the present invention also provides a job scheduling apparatus 300 corresponding to the above job scheduling method. The job scheduling apparatus 300 includes units for executing the above job scheduling method, and the apparatus may be configured in a server. Specifically, referring to fig. 6, the job scheduling apparatus 300 includes a construction unit 301, a set acquisition unit 302, a requirement determining unit 303, a policy determining unit 304, a sending unit 305, a priority level determining unit 306, and an execution unit 307.
The construction unit 301 is configured to construct job scheduling policies for different business requirements and a thread pool for parallel processing of jobs; the set acquisition unit 302 is configured to acquire a set of jobs to be executed; the requirement determining unit 303 is configured to determine the business requirement corresponding to each job in the set of jobs to be executed, to obtain the target business requirement corresponding to each job; the policy determining unit 304 is configured to determine the job scheduling policy corresponding to each job according to the target business requirement corresponding to each job; the sending unit 305 is configured to send each job and its corresponding job scheduling policy to a scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain a scheduling result; the priority level determining unit 306 is configured to determine the priority level of each job in the set of jobs to be executed; and the execution unit 307 is configured to start idle threads in the thread pool and execute the corresponding jobs according to the scheduling result and the priority level of each job.
In one embodiment, as shown in fig. 7, the requirement determining unit 303 includes an identification number obtaining sub-unit 3031 and a target requirement determining sub-unit 3032.
The identification number acquisition subunit 3031 is configured to acquire the identification number of each job in the set of jobs to be executed; the target requirement determining subunit 3032 is configured to determine the business requirement corresponding to the job according to the identification number, to obtain the target business requirement corresponding to each job.
In an embodiment, the sending unit 305 is configured to send each job and its corresponding job scheduling policy to the scheduling center cluster, where the scheduling center cluster splits the tables in the job database formed by the jobs using database and table sharding to form a plurality of shards, and sends the shards to the corresponding scheduling centers according to the shard numbers to execute the scheduling tasks in parallel, so as to obtain the scheduling result.
In one embodiment, as shown in fig. 8, the priority level determination unit 306 includes a dividing sub-unit 3061 and an ordering sub-unit 3062.
The dividing subunit 3061 is configured to divide the jobs in the set of jobs to be executed into job sets of different levels according to the priority level of the business requirement; the sorting subunit 3062 is configured to sort the jobs in the job set of each level in descending order of their priority levels.
In one embodiment, as shown in FIG. 9, the execution unit 307 includes an available thread fetch subunit 3071 and a call subunit 3072.
The available thread acquiring subunit 3071 is configured to acquire idle threads in the thread pool to obtain available threads; the calling subunit 3072 is configured to dynamically call the corresponding business service according to the job scheduling policy and scheduling result corresponding to each job and the priority level of each job, the business service executing the corresponding jobs in parallel in a dynamic proxy manner.
It should be noted that, as can be clearly understood by those skilled in the art, the specific implementation processes of the job scheduling apparatus 300 and each unit may refer to the corresponding descriptions in the foregoing method embodiments, and for convenience and brevity of description, no further description is provided herein.
The job scheduling apparatus 300 described above may be implemented in the form of a computer program that can be run on a computer device as shown in fig. 10.
Referring to fig. 10, fig. 10 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a server, wherein the server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 10, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 includes program instructions that, when executed, cause the processor 502 to perform a job scheduling method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 can be caused to execute a job scheduling method.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 10 is a block diagram of only part of the configuration relevant to the present application and does not limit the computer device 500 to which the present application is applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Wherein the processor 502 is configured to run the computer program 5032 stored in the memory to implement the following steps:
constructing job scheduling policies for different business requirements and a thread pool for processing jobs in parallel; acquiring a set of jobs to be executed; determining the business requirement corresponding to each job in the set of jobs to be executed, to obtain the target business requirement corresponding to each job; determining the job scheduling policy corresponding to each job according to the target business requirement corresponding to each job; sending each job and its corresponding job scheduling policy to a scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain a scheduling result; determining the priority level of each job in the set of jobs to be executed; and starting idle threads in the thread pool, and executing the corresponding jobs in a dynamic proxy manner according to the scheduling result and the priority level of each job.
The job scheduling policy includes job identification numbers corresponding to different business requirements, scheduling center ID numbers corresponding to different business requirements, and the execution order of jobs with different priority levels.
In an embodiment, when implementing the step of determining the business requirement corresponding to each job in the set of jobs to be executed to obtain the target business requirement corresponding to each job, the processor 502 specifically implements the following steps:
acquiring the identification number of each job in the set of jobs to be executed; and determining the business requirement corresponding to the job according to the identification number, to obtain the target business requirement corresponding to each job.
In an embodiment, when implementing the step of sending each job and its corresponding job scheduling policy to the scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain the scheduling result, the processor 502 specifically implements the following steps:
sending each job and its corresponding job scheduling policy to the scheduling center cluster, where the scheduling center cluster splits the tables in the job database formed by the jobs using database and table sharding to form a plurality of shards, and sends the shards to the corresponding scheduling centers according to the shard numbers to execute the scheduling tasks in parallel, so as to obtain the scheduling result.
The scheduling result includes the ID number of the business service, in the service cluster, that executes the job.
In an embodiment, when implementing the step of determining the priority level of each job in the set of jobs to be executed, the processor 502 specifically implements the following steps:
dividing the jobs in the set of jobs to be executed into job sets of different levels according to the priority level of the business requirement; and sorting the jobs in the job set of each level in descending order of their priority levels.
In an embodiment, when implementing the step of starting idle threads in the thread pool and executing the corresponding jobs in a dynamic proxy manner according to the scheduling result and the priority level of each job, the processor 502 specifically implements the following steps:
acquiring idle threads in the thread pool to obtain available threads; and dynamically calling the corresponding business service according to the job scheduling policy and scheduling result corresponding to each job and the priority level of each job, the business service executing the corresponding jobs in parallel in a dynamic proxy manner.
It should be understood that in the embodiment of the present application, the processor 502 may be a central processing unit (CPU), and the processor 502 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program, wherein the computer program, when executed by a processor, causes the processor to perform the steps of:
constructing job scheduling policies for different business requirements and a thread pool for processing jobs in parallel; acquiring a set of jobs to be executed; determining the business requirement corresponding to each job in the set of jobs to be executed, to obtain the target business requirement corresponding to each job; determining the job scheduling policy corresponding to each job according to the target business requirement corresponding to each job; sending each job and its corresponding job scheduling policy to a scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain a scheduling result; determining the priority level of each job in the set of jobs to be executed; and starting idle threads in the thread pool, and executing the corresponding jobs in a dynamic proxy manner according to the scheduling result and the priority level of each job.
The job scheduling policy includes job identification numbers corresponding to different business requirements, scheduling center ID numbers corresponding to different business requirements, and the execution order of jobs with different priority levels.
In an embodiment, when the processor executes the computer program to implement the step of determining the business requirement corresponding to each job in the set of jobs to be executed to obtain the target business requirement corresponding to each job, the following steps are specifically implemented:
acquiring the identification number of each job in the set of jobs to be executed; and determining the business requirement corresponding to the job according to the identification number, to obtain the target business requirement corresponding to each job.
In an embodiment, when the processor executes the computer program to implement the step of sending each job and its corresponding job scheduling policy to the scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain the scheduling result, the following steps are specifically implemented:
sending each job and its corresponding job scheduling policy to the scheduling center cluster, where the scheduling center cluster splits the tables in the job database formed by the jobs using database and table sharding to form a plurality of shards, and sends the shards to the corresponding scheduling centers according to the shard numbers to execute the scheduling tasks in parallel, so as to obtain the scheduling result.
In an embodiment, when the processor executes the computer program to implement the step of determining the priority level of each job in the set of jobs to be executed, the following steps are specifically implemented:
dividing the jobs in the set of jobs to be executed into job sets of different levels according to the priority level of the business requirement; and sorting the jobs in the job set of each level in descending order of their priority levels.
In an embodiment, when the processor executes the computer program to implement the step of starting idle threads in the thread pool and executing the corresponding jobs in a dynamic proxy manner according to the scheduling result and the priority level of each job, the following steps are specifically implemented:
acquiring idle threads in the thread pool to obtain available threads; and dynamically calling the corresponding business service according to the job scheduling policy and scheduling result corresponding to each job and the priority level of each job, the business service executing the corresponding jobs in parallel in a dynamic proxy manner.
The storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or any of various other computer-readable storage media capable of storing a computer program.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A job scheduling method, characterized by comprising the following steps:
constructing job scheduling policies for different business requirements and a thread pool for processing jobs in parallel;
acquiring a set of jobs to be executed;
determining the business requirement corresponding to each job in the set of jobs to be executed, to obtain the target business requirement corresponding to each job;
determining the job scheduling policy corresponding to each job according to the target business requirement corresponding to each job;
sending each job and its corresponding job scheduling policy to a scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain a scheduling result;
determining the priority level of each job in the set of jobs to be executed;
and starting idle threads in the thread pool, and executing the corresponding jobs in a dynamic proxy manner according to the scheduling result and the priority level of each job.
2. The job scheduling method according to claim 1, wherein the job scheduling policy comprises job identification numbers corresponding to different business requirements, scheduling center ID numbers corresponding to different business requirements, and the execution order of jobs with different priority levels.
3. The job scheduling method according to claim 1, wherein the determining the business requirement corresponding to each job in the set of jobs to be executed to obtain the target business requirement corresponding to each job comprises:
acquiring the identification number of each job in the set of jobs to be executed;
and determining the business requirement corresponding to the job according to the identification number, to obtain the target business requirement corresponding to each job.
4. The job scheduling method according to claim 1, wherein the sending each job and its corresponding job scheduling policy to a scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain the scheduling result, comprises:
sending each job and its corresponding job scheduling policy to the scheduling center cluster, where the scheduling center cluster splits the tables in the job database formed by the jobs using database and table sharding to form a plurality of shards, and sends the shards to the corresponding scheduling centers according to the shard numbers to execute the scheduling tasks in parallel, so as to obtain the scheduling result.
5. The job scheduling method according to claim 1, wherein the determining the priority level of each job in the set of jobs to be executed comprises:
dividing the jobs in the set of jobs to be executed into job sets of different levels according to the priority level of the business requirement;
and sorting the jobs in the job set of each level in descending order of their priority levels.
6. The job scheduling method according to claim 1, wherein the starting idle threads in the thread pool and executing the corresponding jobs in a dynamic proxy manner according to the scheduling result and the priority level of each job comprises:
acquiring idle threads in the thread pool to obtain available threads;
and dynamically calling the corresponding business service according to the job scheduling policy and scheduling result corresponding to each job and the priority level of each job, the business service executing the corresponding jobs in parallel in a dynamic proxy manner.
7. The job scheduling method according to claim 4, wherein the scheduling result comprises the ID number of the business service, in the service cluster, that executes the job.
8. A job scheduling apparatus, characterized by comprising:
a construction unit, configured to construct job scheduling policies for different business requirements and a thread pool for processing jobs in parallel;
a set acquisition unit, configured to acquire a set of jobs to be executed;
a requirement determining unit, configured to determine the business requirement corresponding to each job in the set of jobs to be executed, to obtain the target business requirement corresponding to each job;
a policy determining unit, configured to determine the job scheduling policy corresponding to each job according to the target business requirement corresponding to each job;
a sending unit, configured to send each job and its corresponding job scheduling policy to a scheduling center cluster, where the scheduling center cluster schedules the tasks in parallel by shard to obtain a scheduling result;
a priority level determining unit, configured to determine the priority level of each job in the set of jobs to be executed;
and an execution unit, configured to start idle threads in the thread pool and execute the corresponding jobs according to the scheduling result and the priority level of each job.
9. A computer device, characterized in that the computer device comprises a memory, on which a computer program is stored, and a processor, which when executing the computer program implements the method according to any of claims 1 to 7.
10. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202111281820.9A 2021-11-01 2021-11-01 Job scheduling method and device, computer equipment and storage medium Pending CN113986507A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111281820.9A CN113986507A (en) 2021-11-01 2021-11-01 Job scheduling method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111281820.9A CN113986507A (en) 2021-11-01 2021-11-01 Job scheduling method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113986507A true CN113986507A (en) 2022-01-28

Family

ID=79745228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111281820.9A Pending CN113986507A (en) 2021-11-01 2021-11-01 Job scheduling method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113986507A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425462A (en) * 2012-05-14 2013-12-04 阿里巴巴集团控股有限公司 Method and device for workflow data persistence
US20150205634A1 (en) * 2014-01-17 2015-07-23 Red Hat, Inc. Resilient Scheduling of Broker Jobs for Asynchronous Tasks in a Multi-Tenant Platform-as-a-Service (PaaS) System
CN109814995A (en) * 2019-01-04 2019-05-28 深圳壹账通智能科技有限公司 Method for scheduling task, device, computer equipment and storage medium
CN111427670A (en) * 2019-01-09 2020-07-17 北京京东尚科信息技术有限公司 Task scheduling method and system
CN110362390A (en) * 2019-06-06 2019-10-22 银江股份有限公司 A kind of distributed data integrated operations dispatching method and device
CN112486648A (en) * 2020-11-30 2021-03-12 北京百度网讯科技有限公司 Task scheduling method, device, system, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
丁振凡 (Ding Zhenfan): 《Spring 3.x编程技术与应用》 (Spring 3.x Programming Technology and Applications), Beijing University of Posts and Telecommunications Press, page 166 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114610474A (en) * 2022-05-12 2022-06-10 之江实验室 Multi-strategy job scheduling method and system in heterogeneous supercomputing environment

Similar Documents

Publication Publication Date Title
US20180322162A1 (en) Query dispatch and execution architecture
US11422844B1 (en) Client-specified network interface configuration for serverless container management service
US7302686B2 (en) Task management system
US9063790B2 (en) System and method for performing distributed parallel processing tasks in a spot market
US11392422B1 (en) Service-managed containers for container orchestration service
CN111381972B (en) Distributed task scheduling method, device and system
US6421701B1 (en) Method and system for replication support in a remote method invocation system
EP1525529A2 (en) Method for dynamically allocating and managing resources in a computerized system having multiple consumers
CN108874549B (en) Resource multiplexing method, device, terminal and computer readable storage medium
CN110750339B (en) Thread scheduling method and device and electronic equipment
US8234643B2 (en) CRON time processing implementation for scheduling tasks within a multi-tiered enterprise network
CN109299052A (en) Log cutting method, device, computer equipment and storage medium
CN113986507A (en) Job scheduling method and device, computer equipment and storage medium
US9158601B2 (en) Multithreaded event handling using partitioned event de-multiplexers
CN111045825A (en) Batch processing performance optimization method and device, computer equipment and storage medium
CN114691321A (en) Task scheduling method, device, equipment and storage medium
US20120124339A1 (en) Processor core selection based at least in part upon at least one inter-dependency
US6990608B2 (en) Method for handling node failures and reloads in a fault tolerant clustered database supporting transaction registration and fault-in logic
CN110851166A (en) User-unaware application program updating method and device and computer equipment
CN111880910A (en) Data processing method and device, server and storage medium
US10990385B1 (en) Streaming configuration management
CN116257333A (en) Distributed task scheduling method, device and system
CN110851245A (en) Distributed asynchronous task scheduling method and electronic equipment
CN111767122A (en) Distributed task scheduling management method and device
WO2004090660A2 (en) Controlling usage of system resources by a network manager

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220128