CN114528082A - Task scheduling method and device, electronic equipment and storage medium


Info

Publication number
CN114528082A
CN114528082A (application CN202210135184.7A)
Authority
CN
China
Prior art keywords
task
trigger
target
scheduler
executed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210135184.7A
Other languages
Chinese (zh)
Inventor
田蒙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd
Priority to CN202210135184.7A
Publication of CN114528082A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806: Task transfer initiation or dispatching
    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23: Updating
    • G06F16/2308: Concurrency control
    • G06F16/2336: Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
    • G06F16/2343: Locking methods, e.g. distributed locking or locking implementation details
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/52: Program synchronisation; Mutual exclusion, e.g. by means of semaphores

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a task scheduling method and apparatus, an electronic device, and a storage medium, relating to the field of computer technologies. The task scheduling method is applied to a Quartz-based task scheduling system that comprises a plurality of schedulers, each of which manages at least one trigger. The method comprises: when a scheduler acquires, from the at least one trigger it manages, a target trigger to be triggered, generating a corresponding task according to the trigger rule in the target trigger; and writing the corresponding task into a cache queue of a target task execution client to complete the triggering of the target trigger. By adding schedulers and decoupling task generation from task execution, the scheme can increase the number of concurrent tasks.

Description

Task scheduling method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a task scheduling method and apparatus, an electronic device, and a storage medium.
Background
In the related art, the Quartz framework is usually used to implement task scheduling. Quartz is a widely used task scheduling framework that is both powerful and flexible to configure.
However, the scheduling container of a Quartz cluster depends on a database row lock, which creates a performance bottleneck: when the number of tasks to be triggered at the same time reaches a certain level, task triggering is delayed.
Disclosure of Invention
The application provides a task scheduling method and device, electronic equipment and a storage medium.
According to a first aspect of the present application, a task scheduling method is provided, where the method is applied to a Quartz-based task scheduling system, where the task scheduling system includes a plurality of schedulers, and each scheduler manages at least one trigger, and the method includes:
when the scheduler acquires a target trigger to be triggered from at least one trigger managed by the scheduler, generating a corresponding task according to a trigger rule in the target trigger;
and writing the corresponding task into a cache queue of a target task execution client to complete the triggering of the target trigger.
In some embodiments of the present application, the task includes an identifier and specific content; writing the corresponding task into a cache queue of a target task execution client includes:
writing the specific content and the identification of the corresponding task into a database;
and writing the corresponding task identifier into a cache queue of the target task execution client.
In some embodiments of the present application, the method further comprises:
when a task reading request of the target task execution client is received, acquiring an identifier of a task to be executed, wherein the identifier is carried in the task reading request; the identification of the task to be executed is obtained by the target task execution client from the corresponding cache queue;
reading the specific content of the task to be executed from the database according to the identifier of the task to be executed;
and sending the read specific content of the task to be executed to the target task execution client for execution.
Further, in some embodiments of the present application, the method further comprises:
and responding to the received trigger adding request, generating a trigger to be added according to the trigger adding request, and matching a target scheduler for managing the trigger to be added.
Wherein matching a target scheduler for managing the trigger to be added comprises:
acquiring the name of the trigger to be added;
and hashing the name, and determining, among the plurality of schedulers, a target scheduler matched with the trigger to be added through a consistent hashing algorithm.
According to a second aspect of the present application, there is provided a task scheduling apparatus, which is applied to a Quartz-based task scheduling system, where the task scheduling system includes a plurality of schedulers, and each scheduler manages at least one trigger, the apparatus including:
the generation module is used for generating a corresponding task according to a trigger rule in a target trigger when the scheduler acquires the target trigger to be triggered from at least one trigger managed by the scheduler;
and the writing module is used for writing the corresponding task into a cache queue of a target task execution client to complete the triggering of the target trigger.
In some embodiments of the present application, the task includes an identifier and specific content, and the writing module is specifically configured to:
writing the specific content and the identification of the corresponding task into a database;
and writing the corresponding task identifier into a cache queue of the target task execution client.
In some embodiments of the present application, the apparatus further comprises:
the acquisition module is used for acquiring the identifier of the task to be executed carried in the task reading request when the task reading request of the target task execution client is received; the identification of the task to be executed is obtained by the target task execution client from the corresponding cache queue;
the reading module is used for reading the specific content of the task to be executed from the database according to the identifier of the task to be executed;
and the sending module is used for sending the read specific content of the task to be executed to the target task execution client for execution.
Further, in some embodiments of the present application, the apparatus further comprises:
and the adding module is used for responding to the received trigger adding request, generating a trigger to be added according to the trigger adding request and matching a target scheduler for managing the trigger to be added.
Wherein the adding module is specifically configured to:
acquiring the name of the trigger to be added;
and hashing the name, and determining, among the plurality of schedulers, a target scheduler matched with the trigger to be added through a consistent hashing algorithm.
According to a third aspect of the present application, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the first aspect described above.
According to the technical solution of the present application, the task scheduling system includes a plurality of schedulers and each scheduler manages at least one trigger, so the number of row locks can be increased by increasing the number of schedulers; the performance of the database is thereby fully utilized and the number of concurrent tasks is increased. In addition, when a scheduler acquires a target trigger to be triggered from the triggers it manages, the generated task is written into the cache queue of the target task execution client, which then executes the corresponding task. Task generation is thus decoupled from task execution, so task triggering is no longer delayed by threads in the thread pool being occupied by task execution, and the number of concurrent tasks can be further increased.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a flowchart of a task scheduling method according to an embodiment of the present application;
fig. 2 is a flowchart of another task scheduling method provided in an embodiment of the present application;
FIG. 3 is a flowchart of an embodiment of a newly added trigger;
fig. 4 is a block diagram illustrating a task scheduling apparatus according to an embodiment of the present disclosure;
fig. 5 is a block diagram of another task scheduling device according to an embodiment of the present disclosure;
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present application.
Detailed Description
Exemplary embodiments of the present application are described below with reference to the accompanying drawings. Various details of the embodiments are included to aid understanding and should be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Any user data involved in the embodiments of the present application is authorized, and is obtained, processed, and transmitted in compliance with legal requirements.
It should be noted that Quartz is an open-source task scheduling framework written entirely in Java; it sets the timing rule of a job through a trigger to control when the job runs. A Quartz cluster provides high availability and flexibility to the scheduler through failover and load balancing. Quartz is mainly used to execute timed tasks, such as sending messages on a schedule or generating reports on a schedule. In the related art, the Quartz framework is generally used to implement task scheduling.
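As a brief illustration of this Quartz usage (not part of the claimed scheme; the job class and cron expression below are hypothetical examples), a job can be bound to a cron trigger as follows:

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

// Minimal Quartz usage sketch: a job is bound to a cron trigger that fires it on a schedule.
// ReportJob is a hypothetical job class used only for illustration.
public class QuartzExample {
    public static class ReportJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("Generating report at " + context.getFireTime());
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(ReportJob.class)
                .withIdentity("reportJob", "reports")
                .build();

        // The trigger carries the timing rule that controls when the job runs.
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("reportTrigger", "reports")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 8 * * ?")) // every day at 08:00
                .build();

        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}

The trigger's cron schedule is the "job timing running rule" mentioned above: Quartz fires the job whenever the schedule matches.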
However, the scheduling container of a Quartz cluster depends on a database row lock, which creates a performance bottleneck: when the number of tasks to be triggered at the same time reaches a certain level, task triggering is delayed. Testing shows that delays occur once a single Quartz cluster triggers more than 100 tasks per second.
In order to solve the above problems, the present application provides a task scheduling method, a task scheduling apparatus, an electronic device, and a storage medium.
Fig. 1 is a flowchart of a task scheduling method according to an embodiment of the present application. It should be noted that the task scheduling method in the embodiment of the present application can be applied to a task scheduling device in the embodiment of the present application, and the device can be configured in an electronic device. The task scheduling method in the embodiment of the application can be applied to a Quartz-based task scheduling system, wherein the task scheduling system comprises a plurality of schedulers, and each scheduler manages at least one trigger. The method may comprise the steps of:
step 101, when the scheduler acquires a target trigger to be triggered from at least one trigger managed by the scheduler, generating a corresponding task according to a trigger rule in the target trigger.
It should be noted that the task scheduling system in the embodiment of the present application is based on the Quartz framework. When the system starts, a plurality of schedulers are initialized, so the number of database row locks can be increased by adding schedulers; the performance of the database is thereby fully utilized and the number of tasks triggered concurrently per second is increased. Each scheduler manages at least one trigger and, by periodically scanning the table, obtains the trigger times of the triggers it manages and the triggers that have reached their trigger time.
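A minimal sketch of initializing several schedulers at startup is shown below. It uses Quartz's StdSchedulerFactory with per-instance properties; the scheduler count, thread count, and the omission of a JDBC job store configuration are illustrative assumptions only.

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

// Sketch: initialize several independent Quartz schedulers at startup, each with its own
// instance name, so that each one competes for its own row locks in the job store.
public class MultiSchedulerBootstrap {
    public static List<Scheduler> initSchedulers(int count) throws SchedulerException {
        List<Scheduler> schedulers = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            Properties props = new Properties();
            props.setProperty("org.quartz.scheduler.instanceName", "scheduler-" + i);
            props.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
            props.setProperty("org.quartz.threadPool.threadCount", "10");
            // A JDBC job store would be configured here in a clustered deployment.
            Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
            scheduler.start();
            schedulers.add(scheduler);
        }
        return schedulers;
    }
}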
The inventor of the present application has found through experiments that, because increasing the number of schedulers increases the number of database row locks and additional row locks also consume database resources, increasing the number of schedulers improves scheduling performance only within a certain range; beyond that range, adding schedulers no longer improves scheduling performance. In some embodiments of the present application, the number of schedulers in the task scheduling system may therefore be determined from experimental results under actual conditions.
In some embodiments of the present application, each time a trigger is added, the system matches a scheduler to manage it and stores the matching relationship in the database. In this way, when a scheduler scans the table, it obtains only the triggers it manages, according to the matching relationship. The scheduler may be matched to the trigger randomly, or the matching scheduler may be determined among the plurality of schedulers according to a preset rule; this is not limited in the present application.
In some embodiments of the present application, the target trigger to be triggered is a trigger that has reached its trigger time. That is, each scheduler may acquire, from the at least one trigger it manages, a target trigger that has reached its trigger time, trigger it, and generate a corresponding task. One implementation (sketched in code after this paragraph) may include: each scheduler periodically scans the table to obtain the triggers it manages that are about to reach their trigger time, for example the triggers that will fire within 30 s of each scan; each scheduler monitors those triggers and, after locking the database, acquires the target triggers to be triggered; it then generates a corresponding task according to the trigger rule of each target trigger and writes the generated task to the corresponding position in the database.
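The following sketch illustrates this scan-and-generate step under stated assumptions: TriggerStore, TaskStore, TriggerRecord, and TaskRecord are hypothetical stand-ins for the database access layer and are not part of Quartz or of the text above.

import java.time.Instant;
import java.util.List;

// Sketch of the scan-and-generate step: pick up due triggers owned by this scheduler,
// generate a task from each trigger rule, and persist the task.
public class TriggerScanner {
    private final String schedulerName;
    private final TriggerStore triggerStore;
    private final TaskStore taskStore;

    public TriggerScanner(String schedulerName, TriggerStore triggerStore, TaskStore taskStore) {
        this.schedulerName = schedulerName;
        this.triggerStore = triggerStore;
        this.taskStore = taskStore;
    }

    /** Runs periodically; picks up triggers owned by this scheduler that fire within 30 seconds. */
    public void scanOnce() {
        Instant horizon = Instant.now().plusSeconds(30);
        // The row lock is acquired inside the store so that only one node triggers each record.
        List<TriggerRecord> due = triggerStore.lockAndFetchDue(schedulerName, horizon);
        for (TriggerRecord trigger : due) {
            TaskRecord task = TaskRecord.fromRule(trigger.getRule()); // generate task from the trigger rule
            taskStore.save(task);                                     // write the task to the database
        }
    }

    // --- hypothetical collaborators, standing in for the database access layer ---
    public interface TriggerStore { List<TriggerRecord> lockAndFetchDue(String scheduler, Instant until); }
    public interface TaskStore { void save(TaskRecord task); }
    public interface TriggerRecord { String getRule(); }
    public static class TaskRecord {
        private final String rule;
        private TaskRecord(String rule) { this.rule = rule; }
        public static TaskRecord fromRule(String rule) { return new TaskRecord(rule); }
        public String getRule() { return rule; }
    }
}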
And 102, writing the corresponding task into a cache queue of the target task execution client to complete the triggering of the target trigger.
In the related art, task execution occupies worker threads in the worker thread pool, so when triggering a task the scheduler must first check whether an idle thread exists in the pool; only when one does can it acquire the database row lock, lock the database, and then trigger the trigger that has reached its trigger time. If many tasks have been generated and no idle thread remains in the pool, the trigger has to wait until a worker thread becomes idle before the task can be triggered. In other words, task execution occupies the thread pool, which can delay task triggering and is unfavorable for task scheduling in the system.
To solve this problem, the task scheduling method in the embodiment of the present application decouples task generation from task execution, so that executing tasks no longer affects the scheduler's triggering of triggers. The generated task is written into the cache queue of the corresponding task execution client and is executed by that client rather than by a worker thread of the thread pool in the scheduler container. Occupation of worker threads is reduced, the scheduler no longer has to wait for a free thread when triggering a task, task scheduling efficiency is improved, and task concurrency is increased.
In some embodiments of the present application, because business classes differ, the tasks of each business class may be executed by the task execution client of that class. A task execution client can be understood as a system or terminal device capable of executing the task. The target task execution client is the client corresponding to the business class to which the generated task belongs. As an example, writing the corresponding task into the cache queue of the target task execution client may be implemented as follows (a code sketch follows this paragraph): when the trigger is created, it is associated with the service identifier of the business of the corresponding task, and the association is stored in the database; the service identifier associated with the target trigger to be triggered is obtained and used to determine the target task execution client; and the corresponding task is written into that client's cache queue.
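A minimal sketch of this routing step follows. Redis (via the Jedis client) stands in for the cache queue, and the key layout "task-queue:<serviceId>" is an assumption; the application text does not name a specific cache product.

import redis.clients.jedis.Jedis;

// Sketch of routing a generated task to the cache queue of the target task execution client.
public class TaskRouter {
    private final Jedis jedis;
    private final TriggerServiceMapping mapping; // hypothetical lookup of trigger name -> service id

    public TaskRouter(Jedis jedis, TriggerServiceMapping mapping) {
        this.jedis = jedis;
        this.mapping = mapping;
    }

    /** Writes the task of the given trigger into the queue of the client serving that business line. */
    public void route(String triggerName, String serializedTask) {
        String serviceId = mapping.serviceIdOf(triggerName); // association stored when the trigger was created
        jedis.lpush("task-queue:" + serviceId, serializedTask);
    }

    // Hypothetical interface standing in for the association stored in the database.
    public interface TriggerServiceMapping { String serviceIdOf(String triggerName); }
}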
It should be noted that after the task is written into the cache queue of the target task execution client, the triggering of the target trigger is completed, so that the database row lock corresponding to the scheduler is released to trigger the next trigger.
According to the task scheduling method of the embodiment of the present application, the task scheduling system includes a plurality of schedulers and each scheduler manages at least one trigger, so the number of row locks can be increased by increasing the number of schedulers; the performance of the database is thereby fully utilized and the number of concurrent tasks is increased. In addition, when a scheduler acquires a target trigger to be triggered from the triggers it manages, the generated task is written into the cache queue of the target task execution client, which then executes the corresponding task. Task generation is thus decoupled from task execution, so task triggering is no longer delayed by threads in the thread pool being occupied by task execution, and the number of concurrent tasks can be further increased.
To reduce the time consumed by task triggering and save space in the cache queue, the present application provides another embodiment.
Fig. 2 is a flowchart of another task scheduling method according to an embodiment of the present application. As shown in fig. 2, on the basis of the above embodiment, the method may include the steps of:
step 201, when the scheduler acquires a target trigger to be triggered from at least one trigger managed by the scheduler, generating a corresponding task according to a trigger rule in the target trigger.
It should be noted that, in the embodiment of the present application, the task generated according to the trigger rule in the target trigger includes an identifier and specific content. The task identifier may be, for example, a task number, and identifiers and task content are in a one-to-one relationship. The specific content of the task may include the class that executes the task, the name and state of the corresponding target trigger, and other related information.
Step 202, writing the specific content and the identifier of the corresponding task into the database.
Step 203, writing the corresponding task identifier into the cache queue of the target task execution client, and completing the triggering of the target trigger.
That is, only the task identifier is written into the cache queue of the target task execution client, which reduces both the space occupied in the queue and the time spent writing the task. A minimal sketch of this split write follows.
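In the sketch below, the table name, column names, and the use of Redis for the cache queue are assumptions for illustration only.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;
import redis.clients.jedis.Jedis;

// Sketch of steps 202-203: the full task record goes to the database, only the identifier
// goes into the client's cache queue.
public class TaskWriter {
    private final Connection db;
    private final Jedis jedis;

    public TaskWriter(Connection db, Jedis jedis) {
        this.db = db;
        this.jedis = jedis;
    }

    public String writeTask(String serviceId, String taskContent) throws SQLException {
        String taskId = UUID.randomUUID().toString();
        try (PreparedStatement ps =
                     db.prepareStatement("INSERT INTO task (task_id, content) VALUES (?, ?)")) {
            ps.setString(1, taskId);
            ps.setString(2, taskContent);
            ps.executeUpdate();
        }
        // Only the small identifier occupies the cache queue, keeping the queue write fast.
        jedis.lpush("task-queue:" + serviceId, taskId);
        return taskId;
    }
}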
In some embodiments of the present application, after the task execution client obtains the identifier of the task in the cache queue, the task execution client may obtain the specific content of the corresponding task by calling the task reading interface, so as to execute the corresponding task. Therefore, the task scheduling method of the embodiment of the present application may further include the following steps:
step 204, when receiving a task reading request of a target task execution client, acquiring an identifier of a task to be executed, wherein the identifier is carried in the task reading request; and the identifier of the task to be executed is obtained by the target task execution client from the corresponding cache queue.
It can be understood that the target task execution client obtains the identifier of the task to be executed from its cache queue; to obtain the specific content of that task, it sends a task reading request to the task scheduling system by calling the task reading interface, and the request carries the identifier of the task to be executed.
And step 205, reading the specific content of the task to be executed from the database according to the identifier of the task to be executed.
Because the database stores both the identifier and the specific content of the task to be executed, the task information corresponding to the identifier can be looked up in the database according to the identifier of the task to be executed, and the specific content of the task can then be read from that information.
And step 206, sending the read specific content of the task to be executed to the target task execution client for execution.
That is, after the target task execution client receives the specific content of the task to be executed, the target task execution client may execute the corresponding task according to the specific content of the task to be executed.
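A sketch of the task reading interface described in steps 204-206 follows; it assumes the same hypothetical table layout as the previous sketch.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of steps 204-206: look up the specific content by the identifier carried in the
// client's task reading request and return it for execution.
public class TaskReadService {
    private final Connection db;

    public TaskReadService(Connection db) {
        this.db = db;
    }

    /** Called when a task execution client requests the content of a task id taken from its queue. */
    public String readTaskContent(String taskId) throws SQLException {
        try (PreparedStatement ps =
                     db.prepareStatement("SELECT content FROM task WHERE task_id = ?")) {
            ps.setString(1, taskId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("content") : null;
            }
        }
    }
}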
In other embodiments of the present application, the target task execution client may also feed back execution information of the task to the task scheduling system through an interface call, for example the time and state of task execution, so that the task scheduling system can update the task information stored in the database.
According to this task scheduling method, only the identifier of the task is written into the cache queue of the target task execution client; when a task reading request from that client is received, the specific content of the task to be executed is sent to the client according to the identifier carried in the request, and the client executes the corresponding task. This reduces the space occupied in the cache queue on the one hand and the time consumed writing tasks on the other, so the task triggering process is shortened and the task concurrency of the system is improved.
The task scheduling method of the present application further includes a process of adding a trigger, which will be described next.
Fig. 3 is a flowchart of a new trigger in the embodiment of the present application. As shown in fig. 3, based on the above embodiment, the process may include the following steps:
step 301, in response to receiving a trigger adding request, generating a to-be-added trigger according to the trigger adding request, and matching a target scheduler for managing the to-be-added trigger.
In some embodiments of the present application, a user may use a terminal to send a trigger addition request through an interactive page. The request includes the trigger rule of the trigger to be added, the corresponding task execution class, and other related information, so the trigger to be added can be generated from the request. Because the task scheduling system includes a plurality of schedulers, a target scheduler must be matched to the newly added trigger. As an example, the target scheduler may be matched according to the name of the trigger to be added, matched randomly, or matched in another preset manner; this is not limited in the present application.
Next, taking matching the target scheduler according to the name of the trigger to be added as an example, an implementation of determining the target scheduler is described, which includes the following steps:
and 301-1, acquiring the name of the trigger to be added.
In some embodiments of the present application, the name of the trigger to be added may be carried in the trigger addition request or may be generated when the trigger to be added is generated, and the name is unique. That is, if the name is carried in the trigger addition request, it is read from the request; if the name is generated by the system, the generated name is used directly.
And step 301-2, hashing the name, and determining, among the plurality of schedulers, the target scheduler matched with the trigger to be added through a consistent hashing algorithm.
In order to balance the number of triggers managed by each scheduler and improve task scheduling efficiency, the embodiment of the present application uses a consistent hashing algorithm to determine the target scheduler matched with the trigger to be added. It should be noted that consistent hashing is usually used to map the relationship between service requests and processing servers: hash values are computed and the whole hash value space is mapped onto a virtual ring whose range is 0 to 2^32 - 1, organized in a clockwise direction. A service request computes its hash value with the same hash algorithm and then searches clockwise along the ring from the position of that value; the first server encountered is the server that processes the request. This approach scales well and provides strong load-balancing capability.
In some embodiments of the present application, the name of each scheduler is hashed and the resulting value is mapped onto the hash ring; the name of the trigger to be added is hashed with the same algorithm, and the first scheduler found clockwise along the ring from that position is the target scheduler matched with the trigger to be added.
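The following sketch shows one way to implement this matching with a consistent hash ring; the hash function (the first four bytes of an MD5 digest) and the omission of virtual nodes are simplifying assumptions.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch: place each scheduler on a hash ring by its name, then match a new trigger to the
// first scheduler found clockwise from the trigger name's position on the ring.
public class SchedulerRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    public SchedulerRing(List<String> schedulerNames) {
        for (String name : schedulerNames) {
            ring.put(hash(name), name); // place each scheduler on the ring by its name's hash
        }
    }

    /** Returns the first scheduler found clockwise from the trigger name's position on the ring. */
    public String matchScheduler(String triggerName) {
        long h = hash(triggerName);
        SortedMap<Long, String> tail = ring.tailMap(h);
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    private static long hash(String key) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(key.getBytes(StandardCharsets.UTF_8));
            // Use the first 4 bytes as an unsigned 32-bit value in [0, 2^32 - 1].
            return ((d[0] & 0xFFL) << 24) | ((d[1] & 0xFFL) << 16) | ((d[2] & 0xFFL) << 8) | (d[3] & 0xFFL);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}

With virtual nodes, each scheduler would be placed on the ring several times, which smooths the distribution of triggers when schedulers are added or removed.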
According to the task scheduling method of the embodiment of the present application, when a trigger addition request is received, the trigger to be added can be generated from the request and a target scheduler matched to manage it, so a matching scheduler is selected from the plurality of schedulers and the newly added trigger is managed. In addition, the consistent hashing algorithm balances triggers across the schedulers, so tasks can be triggered in time and the task concurrency of the task scheduling system is further improved.
In order to implement the above embodiments, the present application provides a task scheduling device.
Fig. 4 is a block diagram of a task scheduling device according to an embodiment of the present application. The device can be used for a Quartz-based task scheduling system, wherein the task scheduling system comprises a plurality of schedulers, and each scheduler manages at least one trigger respectively. As shown in fig. 4, the apparatus may include:
a generating module 401, configured to generate a corresponding task according to a trigger rule in a target trigger when the scheduler acquires the target trigger to be triggered from at least one trigger managed by the scheduler;
a writing module 402, configured to write the corresponding task into a cache queue of a target task execution client, and complete triggering of the target trigger.
In some embodiments of the present application, the task includes an identifier and specific content, and the writing module is specifically configured to:
writing the specific content and the identification of the corresponding task into a database;
and writing the corresponding task identifier into a cache queue of the target task execution client.
In some embodiments of the present application, the apparatus may further comprise:
an obtaining module 403, configured to obtain, when receiving a task reading request of the target task execution client, an identifier of a task to be executed, where the identifier is carried in the task reading request; the identification of the task to be executed is obtained by the target task execution client from the corresponding cache queue;
a reading module 404, configured to read specific content of the task to be executed from the database according to the identifier of the task to be executed;
a sending module 405, configured to send the read specific content of the task to be executed to the target task execution client for execution.
According to the task scheduling apparatus of the embodiment of the present application, the task scheduling system includes a plurality of schedulers and each scheduler manages at least one trigger, so the number of row locks can be increased by increasing the number of schedulers; the performance of the database is thereby fully utilized and the number of concurrent tasks is increased. In addition, when a scheduler acquires a target trigger to be triggered from the triggers it manages, the generated task is written into the cache queue of the target task execution client, which then executes the corresponding task. Task generation is thus decoupled from task execution, so task triggering is no longer delayed by threads in the thread pool being occupied by task execution, and the number of concurrent tasks can be further increased.
Fig. 5 is a block diagram of another task scheduling device according to an embodiment of the present disclosure. As shown in fig. 5, on the basis of the above embodiment, the apparatus may further include:
and a newly adding module 506, configured to generate a to-be-added trigger according to a trigger newly adding request in response to receiving the trigger newly adding request, and match a target scheduler for managing the to-be-added trigger.
Wherein the adding module 506 is specifically configured to:
acquiring the name of the trigger to be added;
and hashing the name, and determining, among the plurality of schedulers, a target scheduler matched with the trigger to be added through a consistent hashing algorithm.
According to the task scheduling apparatus of the embodiment of the present application, when a trigger addition request is received, the trigger to be added can be generated from the request and a target scheduler matched to manage it, so a matching scheduler is selected from the plurality of schedulers and the newly added trigger is managed. In addition, the consistent hashing algorithm balances triggers across the schedulers, so tasks can be triggered in time and the task concurrency of the task scheduling system is further improved.
Based on the embodiments of the present application, the present application further provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the foregoing task scheduling methods.
Based on the embodiments of the present application, there is also provided a non-transitory computer readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer to execute the task scheduling method according to any one of the foregoing methods provided by the embodiments of the present application.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the various methods and processes described above, such as the task scheduling method. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the task scheduling method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the task scheduling method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A task scheduling method is applied to a Quartz-based task scheduling system, wherein the task scheduling system comprises a plurality of schedulers, and each scheduler manages at least one trigger, and the method comprises the following steps:
when the scheduler acquires a target trigger to be triggered from at least one trigger managed by the scheduler, generating a corresponding task according to a trigger rule in the target trigger;
and writing the corresponding task into a cache queue of a target task execution client to complete the triggering of the target trigger.
2. The method of claim 1, wherein the task includes an identifier and specific content; and writing the corresponding task into a cache queue of a target task execution client includes:
writing the specific content and the identification of the corresponding task into a database;
and writing the corresponding task identifier into a cache queue of the target task execution client.
3. The method of claim 2, further comprising:
when a task reading request of the target task execution client is received, acquiring an identifier of a task to be executed, wherein the identifier is carried in the task reading request; the identification of the task to be executed is obtained by the target task execution client from the corresponding cache queue;
reading the specific content of the task to be executed from the database according to the identifier of the task to be executed;
and sending the read specific content of the task to be executed to the target task execution client for execution.
4. The method of claim 1, further comprising:
and responding to the received trigger adding request, generating a trigger to be added according to the trigger adding request, and matching a target scheduler for managing the trigger to be added.
5. The method of claim 4, wherein matching a target scheduler for managing the trigger to be added comprises:
acquiring the name of the trigger to be added;
and hashing the name, and determining, among the plurality of schedulers, a target scheduler matched with the trigger to be added through a consistent hashing algorithm.
6. A task scheduling apparatus, wherein the apparatus is applied to a Quartz-based task scheduling system, the task scheduling system includes a plurality of schedulers, each scheduler manages at least one trigger, and the apparatus includes:
the generation module is used for generating a corresponding task according to a trigger rule in a target trigger when the scheduler acquires the target trigger to be triggered from at least one trigger managed by the scheduler;
and the writing module is used for writing the corresponding task into a cache queue of a target task execution client to complete the triggering of the target trigger.
7. The apparatus of claim 6, wherein the task comprises an identification and a specific content; the write module is specifically configured to:
writing the specific content and the identification of the corresponding task into a database;
and writing the corresponding task identifier into a cache queue of the target task execution client.
8. The apparatus of claim 7, further comprising:
the acquisition module is used for acquiring the identifier of the task to be executed carried in the task reading request when the task reading request of the target task execution client is received; the identification of the task to be executed is obtained by the target task execution client from the corresponding cache queue;
the reading module is used for reading the specific content of the task to be executed from the database according to the identification of the task to be executed;
and the sending module is used for sending the read specific content of the task to be executed to the target task execution client for execution.
9. The apparatus of claim 6, further comprising:
and the adding module is used for responding to the received trigger adding request, generating a trigger to be added according to the trigger adding request and matching a target scheduler for managing the trigger to be added.
10. The apparatus of claim 9, wherein the adding module is specifically configured to:
acquiring the name of the trigger to be added;
and hashing the name, and determining, among the plurality of schedulers, a target scheduler matched with the trigger to be added through a consistent hashing algorithm.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 5.
12. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 5.
Priority Applications (1)

Application Number: CN202210135184.7A; Priority/Filing Date: 2022-02-14; Title: Task scheduling method and device, electronic equipment and storage medium

Publications (1)

Publication Number: CN114528082A; Publication Date: 2022-05-24

Family

ID=81623146

Country Status (1)

CN: CN114528082A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination