CN113302593A - Task processing method, device and system, electronic equipment and storage medium - Google Patents

Task processing method, device and system, electronic equipment and storage medium

Info

Publication number
CN113302593A
CN113302593A (application CN201980089267.3A)
Authority
CN
China
Prior art keywords
task
timing
execution
identity information
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980089267.3A
Other languages
Chinese (zh)
Inventor
成云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd and Shenzhen Huantai Technology Co Ltd
Publication of CN113302593A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements

Abstract

The application discloses a task processing method, device, system, electronic equipment and storage medium, relating to the technical field of data processing. The method comprises the following steps: a task service unit distributes a plurality of timing tasks to a plurality of databases, wherein each timing task corresponds to an execution time and each execution time falls within a task period; in each task period, a plurality of task execution units acquire, from the plurality of databases, the timing tasks whose execution time falls within the current task period, wherein the timing tasks read by different task execution units are different; and each task execution unit executes the timing tasks it has acquired, so that the processing pressure on a single task execution unit is reduced and accumulation of timing tasks is prevented.

Description

Task processing method, device and system, electronic equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a task processing method, apparatus, system, electronic device, and storage medium.
Background
In the various kinds of data processing performed by devices such as mobile terminals and servers, the processing of timing tasks is often involved. Generally, timing tasks are processed by a task execution process; in the related art, this places heavy pressure on that process when executing the timing tasks, which easily leads to task accumulation.
Disclosure of Invention
In view of the foregoing, the present application provides a task processing method, device, system, electronic device and storage medium to address the foregoing problems.
In a first aspect, an embodiment of the present application provides a task processing method, where the method includes: a task service unit distributes a plurality of timing tasks to a plurality of databases, wherein each timing task corresponds to an execution time; in each task period, a plurality of task execution units acquire, from the plurality of databases, timing tasks whose execution time falls within the current task period, wherein the timing tasks read by different task execution units are different; and each task execution unit executes the acquired timing tasks.
In a second aspect, an embodiment of the present application provides a task processing system, where the system includes a task service unit, task execution units and databases. The task service unit is configured to allocate a plurality of timing tasks to a plurality of databases, where each timing task corresponds to an execution time; in each task period, the plurality of task execution units acquire, from the plurality of databases, timing tasks whose execution time falls within the current task period, where the timing tasks read by different task execution units are different; and each task execution unit executes the acquired timing tasks.
In a third aspect, an embodiment of the present application provides a task processing device, where the task processing device includes: a task service unit, configured to distribute a plurality of timing tasks to a plurality of databases, wherein each timing task corresponds to an execution time; and a plurality of task execution units, configured to acquire, in each task period, timing tasks whose execution time falls within the current task period from the plurality of databases, wherein the timing tasks read by different task execution units are different; each task execution unit is further configured to execute the acquired timing tasks.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the methods described above.
In a fifth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, and the program code can be called by a processor to execute the above method.
The task processing method, device, system, electronic equipment and storage medium provided by the embodiments of the application distribute a plurality of timing tasks across a plurality of databases and provide a plurality of task execution units. In each task period, the plurality of task execution units acquire the timing tasks of the current task period from the plurality of databases, and each timing task is executed by the task execution unit that acquired it, so that the processing pressure on a single task execution unit is reduced and the timing tasks are prevented from accumulating.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 illustrates a data flow diagram in a task processing method according to an embodiment of the present application.
Fig. 2 is a flowchart illustrating a task processing method according to an embodiment of the present application.
Fig. 3 shows a data flow diagram in a task processing method according to another embodiment of the present application.
Fig. 4 shows another data flow diagram in a task processing method according to another embodiment of the present application.
Fig. 5 is a flowchart illustrating a task processing method according to another embodiment of the present application.
Fig. 6 is a functional block diagram of a task processing device according to an embodiment of the present application.
Fig. 7 shows a schematic structural diagram of a task processing system provided in an embodiment of the present application.
Fig. 8 shows a block diagram of an electronic device according to an embodiment of the present application.
Fig. 9 shows a storage unit for storing or carrying program code that implements a task processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
A timing task is a task that needs to be executed periodically, such as periodically compiling statistics or deleting data. When a timing task is executed periodically, it may have an execution time and an execution period. The execution time is the time at which execution, or processing, of the timing task starts, and the execution period is the interval between two adjacent execution times. After the timing task has been executed at an execution time, that execution time plus the execution period becomes the execution time at which the timing task is executed next. For example, if a timing task first starts execution at time t1 with period c1, then t1 is its execution time; after it is executed, the next execution time is t1 + c1, and after it is executed at t1 + c1, the next execution time is t1 + c1 + c1, and so on. The execution periods of different timing tasks may differ, and so may their execution times; whether the execution times or execution periods of different timing tasks are the same or different is determined by each timing task itself, and they do not affect one another.
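For illustration, a minimal Python sketch of the next-execution-time rule described above (the function and variable names are illustrative, not taken from the application):

```python
from datetime import datetime, timedelta

def next_execution_time(execution_time: datetime, execution_period: timedelta) -> datetime:
    """After a timing task runs at `execution_time`, its next execution time is
    the current execution time plus the execution period (t1 -> t1 + c1 -> t1 + c1 + c1 ...)."""
    return execution_time + execution_period

# Example: a task first scheduled at 00:02 with a 10-minute execution period.
t1 = datetime(2021, 1, 1, 0, 2)
c1 = timedelta(minutes=10)
t2 = next_execution_time(t1, c1)   # 00:12
t3 = next_execution_time(t2, c1)   # 00:22
```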
In the related art, timing tasks are generally processed in one of the following two ways.
In one approach, one of a plurality of task execution units is designated by a configuration file to execute the timing tasks, and the other task execution units do not execute them. Because the configuration file fixes which task execution unit executes the timing tasks, the assignment cannot be adjusted dynamically: when that task execution process crashes, none of the timing tasks can be executed until the configuration in the configuration file is manually modified and redeployed, so overall service availability is poor.
In the other approach, a registry is introduced. Each task execution unit registers with the registry, which selects one task execution unit from the plurality of task execution units to execute the timing tasks; the other task execution units serve as backups and do not execute timing tasks. When the task execution unit executing the timing tasks becomes abnormal, the registry can dynamically switch to another task execution unit to execute them.
These approaches to processing timing tasks suffer from several problems.
For example, the execution unit that executes the timing tasks is under heavy pressure. After the timing tasks are stored in the database, a plurality of task execution units are started, but only one of them actually executes the timing tasks while the others serve as backup nodes. The task execution unit that executes the timing tasks therefore bears a high data processing load and data easily accumulates, while the backup task execution units execute no tasks, which wastes resources.
For another example, the task execution unit that performs execution generally loads the timing tasks into memory and then works through them gradually. After a timing task has been loaded into memory, if it is deleted or modified, the deleted task may still be executed, or the task that is executed is no longer the latest version.
For another example, if the task execution unit performing execution crashes during task execution, the database must be scanned in full once a new task execution unit has been determined, resulting in a long execution time and high pressure on the database.
In order to overcome the various disadvantages described above, the inventor proposes the task processing method, device, system, electronic device and storage medium provided by the embodiments of the present application, which distribute a plurality of timing tasks to a plurality of databases and provide a plurality of task execution units. In each task period, the task execution units acquire the timing tasks of the current task period from the databases and execute them. The task processing method, device, system, electronic device and storage medium are described in detail below through specific embodiments.
Fig. 1 illustrates a data flow diagram in an embodiment provided by the present application: a task service unit distributes timing tasks to multiple databases, and in each task period the timing tasks of that period stored in the multiple databases are distributed among the task execution units for execution. Fig. 2 shows a flowchart of the task processing method in this embodiment. Referring to fig. 2, the method includes:
step S110: the task service unit distributes a plurality of timing tasks to a plurality of databases, wherein each timing task corresponds to an execution time.
Timing tasks are typically generated during internet activities or during the use of a device, for example a timed reminder set in the electronic device by a user, a timed sending of a message from the electronic device to a server as requested, or a task set in the server to calculate users' investment profit at a certain time every day.
The task service unit is responsible for distributing the timing tasks to the plurality of databases; that is, whenever a timing task is generated, the task service unit assigns it to one of the plurality of databases.
A timing task is considered generated once it has been set and exists. For example, if a timing task that calculates users' investment profit at a certain time every day is set in the server, the timing task is generated when the server sets it. The plurality of databases are used to store the timing tasks; the number of databases is not limited and can be set according to actual requirements.
When the task service unit distributes tasks, if a plurality of timing tasks are generated at the same time, they are distributed among the plurality of databases; if a single timing task is generated, it is distributed to one of the databases. The task service unit may distribute the timing tasks evenly so that data processing and storage remain balanced.
In one embodiment, the allocation granularity of the timing tasks may be database-level, i.e. a database is the smallest unit of allocation. For data processing and storage balancing, the task service unit may distribute the tasks evenly across the plurality of databases. For example, when a database currently holds fewer timing tasks, new timing tasks tend to be distributed to that database, so that the timing tasks in the databases remain balanced and the difference in the number of timing tasks between databases is minimized.
In another embodiment, the allocation granularity may be table-level, i.e. a table is the smallest unit of allocation. Each database comprises one or more tables, and each table stores timing tasks; that is, when the task service unit distributes timing tasks to the databases, each timing task is stored in a table of a database. For data processing and storage balancing, the task service unit can distribute tasks evenly over the tables of the plurality of databases, that is, the number of timing tasks allocated to each table is kept balanced. For example, when the number of timing tasks in a table is small, timing tasks tend to be distributed to that table, so that the timing tasks in the tables remain balanced and the difference in the number of timing tasks between tables is minimized.
It can be understood that the embodiment of the present application does not limit the size of the allocation granularity; for each allocation granularity, the number of timing tasks stored in each storage space corresponding to the smallest unit of allocation is kept balanced.
In addition, when the task service unit distributes the timing tasks, it may distribute them to the databases through a preset fragmentation algorithm. The specific fragmentation algorithm is not limited in the embodiments of the present application and is not enumerated here.
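For illustration, the application leaves the fragmentation algorithm open; the sketch below assumes a simple hash-modulo scheme over a task's identity information. The same function can later be reused by a task execution unit to locate the database holding a task (names such as shard_for and NUM_DATABASES are illustrative):

```python
import hashlib

NUM_DATABASES = 4  # illustrative; the application does not fix a number of databases

def shard_for(task_id: str, num_databases: int = NUM_DATABASES) -> int:
    """Map a timing task's identity information to a database index.
    A stable hash keeps the mapping deterministic, so the same function can be used
    when the task service unit stores the task and when a task execution unit later
    locates it from the identity information taken from the message queue."""
    digest = hashlib.md5(task_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_databases

# The task service unit stores a task in databases[shard_for(task.id)];
# a task execution unit reads it back from the same database.
```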
It is to be understood that the "plurality of timing tasks" in "the task service unit allocates a plurality of timing tasks to a plurality of databases" in this step is not limited to timing tasks generated and allocated at one time; all timing tasks allocated via the task service unit fall within this category.
Step S120: in each task period, a plurality of task execution units acquire timing tasks with execution time in the current task period in a plurality of databases. Wherein, the timing tasks read by different task execution units are different.
In the embodiment of the present application, time may be divided into task periods, that is, the time axis is divided in units of the task period. For example, if the task period is 1 minute, each minute of the 24 hours of a day forms one task period: 00:00 to 00:01 is one task period, 00:01 to 00:02 is one task period, 00:02 to 00:03 is one task period, and so on.
Optionally, in this embodiment of the present application, the size of the task cycle may be set according to the time precision of the execution time of the timing task. The time precision of the execution time represents a minimum time unit of the execution time of the timed task, and one task cycle may be equal to the minimum time unit. Correspondingly, in the embodiment of the application, the time precision of the execution time of each timing task is the same. For example, if the execution time of the timing task is a certain minute, the minute is the time precision of the execution time of the timing task, and one task cycle may be 1 minute.
Of course, in the embodiment of the present application, the time length of the task cycle is not limited, and may be, for example, longer than the time precision of the execution time of the timing task. The determination method of the task period is not limited, and in practical application, the time length of the task period may be set as needed. In the embodiment of the present application, 1 minute is taken as an example of one task period.
In each task period, the plurality of task execution units scan the databases and, between them, acquire all of the timing tasks of that task period for execution. The timing tasks of the current task period are the timing tasks whose execution time falls within the current task period. For example, if the current task period is 00:02 to 00:03 and the execution time of a timing task in some database is 00:02, that timing task is acquired; likewise, if the current task period is 00:02 to 00:03 and the execution time of a timing task in some database is 00:02:30, that timing task is acquired. Each timing task is acquired by exactly one task execution unit; that is, a timing task cannot be acquired by multiple task execution units, so the timing tasks read by different task execution units are different.
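For illustration, a minimal sketch of dividing the time axis into 1-minute task periods and testing whether a timing task's execution time falls within the current period (names are illustrative):

```python
from datetime import datetime, timedelta

TASK_PERIOD = timedelta(minutes=1)  # matches the 1-minute example in the text

def current_period(now: datetime) -> tuple:
    """Return the [start, end) bounds of the task period containing `now`."""
    start = now.replace(second=0, microsecond=0)
    return start, start + TASK_PERIOD

def due_in_current_period(execution_time: datetime, now: datetime) -> bool:
    start, end = current_period(now)
    return start <= execution_time < end

# An execution time of 00:02:30 falls inside the period 00:02-00:03, so that task is acquired.
```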
Optionally, in this embodiment, in order to better accommodate scaling of the task execution units and to balance the processing pressure of each task execution unit, as shown in fig. 3, the acquisition and execution of the timing tasks may also be made asynchronous through a message queue.
Specifically, in each task period, the message queue obtains the identity information of the timing tasks of the current task period from the plurality of databases. In each task period, each database may push the identity information of the timing tasks of the current task period to the message queue, or the message queue may pull that identity information from each database. Each task execution unit obtains identity information of timing tasks from the message queue as its specified identity information, and then obtains the timing tasks corresponding to the specified identity information from the corresponding databases. That is, each task execution unit takes the identity information of a timing task from the message queue, determines the database where that timing task is located, and, according to the identity information, obtains the detailed information of the timing task from that database for execution.
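For illustration, a minimal sketch of decoupling acquisition from execution through a message queue: per task period only identity information is enqueued, and each task execution unit dequeues identity information, locates the owning database and loads the full task. A thread-safe in-process queue.Queue stands in for a real message queue, and the database helpers (ids_due_between, load) and task interface (execute) are assumed:

```python
import queue

# An in-process queue stands in for a real message queue (assumption).
message_queue: "queue.Queue[str]" = queue.Queue()

def publish_period_tasks(databases, period_start, period_end):
    """Reading side: enqueue only the identity information of tasks due in this period."""
    for db in databases:
        for task_id in db.ids_due_between(period_start, period_end):  # assumed DB helper
            message_queue.put(task_id)

def executor_loop(locate_database):
    """Execution side: dequeue identity information, locate the owning database
    (e.g. with the fragmentation algorithm), load the full task, and execute it."""
    while True:
        task_id = message_queue.get()      # an id leaves the queue once acquired
        db = locate_database(task_id)      # e.g. the shard_for() sketch above
        task = db.load(task_id)            # assumed DB helper
        task.execute()                     # assumed task interface
```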
In addition, the database in which the task execution unit determines the timing task in the embodiment of the present application may have different implementations.
In one embodiment, the identity information of the timed task written into the message queue includes information about a database in which the timed task is located. The task execution unit may determine, according to the information related to the database in the identity information, the database where the timing task corresponding to the identity information is located.
As another embodiment, when the task service unit allocates the plurality of timing tasks to the plurality of databases, the plurality of timing tasks may be allocated to the plurality of databases according to a preset fragmentation algorithm, and a specific fragmentation algorithm is not limited in this embodiment of the present application.
When a task execution unit obtains the timing task corresponding to the specified identity information from the corresponding database, each task execution unit may determine, according to the preset fragmentation algorithm, the database in which the timing task corresponding to the specified identity information is stored, and then obtain that timing task from the determined database. That is to say, since the database into which a timing task is placed is determined by the preset fragmentation algorithm, the task execution unit can use the same fragmentation algorithm to determine the database in which the timing task corresponding to the obtained identity information is located, and then obtain the detailed information of that timing task from that database. If, when the timing task is stored, the target database has to be calculated by the preset fragmentation algorithm in combination with other information, that other information can also be put into the message queue as part of the identity information of the timing task, so that after the task execution unit obtains the identity information it can calculate, from that other information and the preset fragmentation algorithm, the database in which the corresponding timing task is located.
Optionally, in order to use the task execution units reasonably and evenly and keep the data processing pressure of each task execution unit similar, a task scheduling unit may be provided; the task scheduling unit may be a distributed timed scheduling service. The task scheduling unit allocates the databases corresponding to each task execution unit, and each task execution unit obtains the timing tasks of the current period from its corresponding databases. The allocation principle may be to keep the number of smallest allocation units corresponding to each task execution unit balanced, so that the difference between task execution units in the number of smallest units of the distribution granularity is as small as possible; for example, the allocation for each task execution unit may be performed through a corresponding fragmentation algorithm. In the embodiment of the present application, the distribution granularity used when allocating the databases corresponding to each task execution unit may be the same as the distribution granularity used when allocating the plurality of timing tasks to the plurality of databases.
For example, in one embodiment, the distribution granularity for allocating the databases corresponding to the task execution units may be database-level: the databases from which each task execution unit acquires timing tasks are assigned uniformly, and the number of databases corresponding to each task execution unit is balanced so that the difference between task execution units is as small as possible. For example, suppose there are n+1 task execution units numbered 0, 1, 2, 3 to n and m+1 databases numbered 0, 1, 2, 3 to m. If n is less than or equal to m, the databases corresponding to task execution unit No. 0 are the databases whose number id satisfies id % (n+1) = 0, the databases corresponding to task execution unit No. 1 are those with id % (n+1) = 1, the databases corresponding to task execution unit No. 2 are those with id % (n+1) = 2, and so on, where id denotes the number of a database. If n is greater than or equal to m, each of m+1 of the task execution units corresponds to one database, and the remaining task execution units have no corresponding database.
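For illustration, the id % (n+1) rule above as a short sketch: database id is handled by task execution unit id % (number of task execution units), and when there are more task execution units than databases, the surplus units receive no database (names are illustrative):

```python
def assign_databases(num_executors: int, num_databases: int) -> dict:
    """Return {executor index: [database indices]} following the modulo rule."""
    assignment = {e: [] for e in range(num_executors)}
    for db_id in range(num_databases):
        assignment[db_id % num_executors].append(db_id)
    return assignment

# assign_databases(3, 7) -> {0: [0, 3, 6], 1: [1, 4], 2: [2, 5]}
# assign_databases(5, 3) -> executors 3 and 4 receive no database.
```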
For another example, as another embodiment, the distribution granularity of the database corresponding to each task execution unit may be at a table level. In this embodiment, the number of tables corresponding to each task execution unit can be allocated in a balanced manner, and the difference in the number of tables corresponding to each task execution unit can be minimized as much as possible.
In addition, after the task scheduling unit has allocated the databases corresponding to the task execution units, if a task execution unit crashes and can no longer process data normally, the processing tasks corresponding to that task execution unit may be allocated to other task execution units. For example, if the smallest unit of the distribution granularity is a database, the databases corresponding to the crashed task execution unit are allocated to other task execution units; if the smallest unit of the distribution granularity is a table, the tables corresponding to the crashed task execution unit are allocated to other task execution units. The allocation method may be the same as that described above and is not repeated here.
Optionally, the specific manner in which a task execution unit obtains the timing tasks of the current task period from its corresponding database is not limited. For example, all the timing tasks of the current task period that the task execution unit is responsible for may be obtained from the corresponding database at one time; or they may be obtained at a certain rate while being executed; or the next timing task may be obtained only after the previous one has been executed; or a certain number of timing tasks may be obtained each time, so that the timing tasks of the current period are obtained over several batches.
Optionally, if there is no timing task that needs to be executed in a certain task period, that is, none of the current timing tasks has its execution time in that period, then having the task execution units scan the databases anyway to determine which timing tasks to acquire would create unnecessary processing and put unnecessary pressure on the databases and the task execution units. Therefore, in the embodiment of the present application, the task service unit may count the total number of timing tasks in each task period, for example the total number of timing tasks in each task period within a preset time span. When a timing task is added, the count for the task period containing its execution time is increased accordingly; when a timing task is deleted, the count for its task period is decreased accordingly. For example, with 1 minute as a task period, the number of timing tasks in each minute of a 24-hour day may be counted.
When the plurality of task execution units acquire the timing tasks whose execution time is in the current task period from the plurality of databases, each task execution unit can first read the total number of timing tasks in the current task period counted by the task service unit. If the total number read is greater than 0, there are timing tasks to be processed in the current task period, and the task execution unit obtains the timing tasks of the current task period from its corresponding database and executes them. If the total number read is equal to 0, no timing task needs to be processed in the current task period, and the task execution unit waits for the next task period, in which it again determines the number of timing tasks of that period.
Further, optionally, the task service unit may store the counted number in any one of the above databases or a database dedicated to storing the counted number of timed tasks. The task execution unit may read the number of timed tasks per task period from a database storing the number of timed tasks.
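For illustration, a minimal sketch of the per-period counter: the task service unit maintains a count of timing tasks per task period, and an execution (or reading) unit skips scanning when the counter for the current period is zero (the in-memory dictionary and helper names are illustrative; the application stores the counts in a database):

```python
from collections import defaultdict
from datetime import datetime

period_counts = defaultdict(int)   # task period start -> number of timing tasks due

def _period_key(t: datetime) -> datetime:
    return t.replace(second=0, microsecond=0)   # 1-minute task periods

def on_task_added(execution_time: datetime):
    period_counts[_period_key(execution_time)] += 1

def on_task_deleted(execution_time: datetime):
    period_counts[_period_key(execution_time)] -= 1

def should_scan(now: datetime) -> bool:
    """A task execution or reading unit only scans its databases when the
    counter for the current task period is greater than zero."""
    return period_counts[_period_key(now)] > 0
```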
Step S130: each task execution unit executes the acquired timing tasks.
Each task execution unit executes the timing tasks it has acquired. For example, if a timing task is to delete certain data, the task execution unit performs the operation of deleting that data.
It is understood that in this embodiment, there is not necessarily a strict order of precedence between step S110 and step S120. For example, each time a timed task is generated, the task service unit distributes the timed task to the database. And in the corresponding task execution cycle, the task execution unit normally acquires and executes the timing task.
In the embodiment of the application, a plurality of timing tasks are distributed to a plurality of databases by the task service unit. In each task period, the plurality of task execution units acquire the timing tasks in the current task period in the plurality of databases to execute, so that the processing pressure of a single task execution unit is reduced, and the processing efficiency is improved.
In this embodiment, the timing tasks can be read from the databases by task reading units and executed by task execution units, so that reading and execution of the timing tasks are asynchronous and the task execution units can be scaled up or down more conveniently. Fig. 4 shows a data flow diagram of the task processing method of this embodiment, in which the task reading units are used to read the timing tasks from the databases. Referring to fig. 5, the method includes:
step S210: the task service unit distributes a plurality of timing tasks to a plurality of databases, wherein each timing task corresponds to an execution time.
This step can be referred to as the aforementioned step S110, and is not described herein again.
Step S220: and in each task period, the plurality of task reading units read the identity information of the timed task in the current task period from the plurality of databases.
In the embodiment of the application, the timing tasks of each task period are read by task reading units; that is, in each task period, the plurality of task reading units scan the databases and read the timing tasks of the current task period from the plurality of databases. What a task reading unit reads is the identity information of the timing tasks.
In order to use each task reading unit reasonably and evenly and to achieve the highest possible reading speed with the current number of task reading units, the reading work can be allocated among the task reading units; that is, each task reading unit is allocated the databases it is responsible for, and the task reading unit obtains the timing tasks of the current period from its corresponding databases.
The allocation principle may be to keep the number of smallest allocation units corresponding to each task reading unit balanced, so that the difference between task reading units in the number of smallest units of the distribution granularity is as small as possible; for example, the allocation of reading work to each task reading unit may be implemented through a corresponding fragmentation algorithm. If the distribution granularity of the reading work is database-level and there are 5 databases and 5 task reading units, each task reading unit reads the timing tasks from one corresponding database in each task period; if there are 6 databases and 5 task reading units, 4 of the task reading units each read from one corresponding database and 1 task reading unit reads from 2 corresponding databases in each task period; if there are 5 databases and 6 task reading units, 5 of the task reading units each read from one corresponding database in each task period and 1 task reading unit is idle. In the embodiment of the present application, the distribution granularity used when allocating the databases corresponding to each task reading unit may be the same as the distribution granularity used when allocating the plurality of timing tasks to the plurality of databases.
It can be understood that, in the embodiment of the present application, a specific allocation manner for allocating the databases corresponding to the task reading units may refer to an allocation manner for allocating the databases corresponding to the task execution units in the foregoing embodiment; in addition, in the foregoing embodiment, the specific allocation manner for allocating the databases corresponding to the task execution units may also be referred to in the embodiment of the present application.
In the embodiment of the application, a task scheduling unit may be provided, and a database that needs to be read by each task reading unit is set by the task scheduling unit to serve as a reading task of the corresponding task reading unit. When the task reading unit is started, the node information of the task reading unit can be registered to the task scheduling unit, and when the task scheduling unit distributes the reading tasks, the task scheduling unit distributes the reading tasks to all the task reading units registered to the task scheduling unit.
Optionally, after the task scheduling unit has allocated the reading work of each task reading unit, if a task reading unit crashes and exits abnormally, the task scheduling unit may allocate the reading work of that task reading unit to other task reading units. For example, if the smallest unit of the distribution granularity is a database, the databases corresponding to the crashed task reading unit are allocated to other task reading units; if the smallest unit of the distribution granularity is a table, the tables corresponding to the crashed task reading unit are allocated to other task reading units. The task reading units that take over the reading work of the crashed task reading unit only need to scan the newly allocated reading work, so all the databases do not have to be scanned and the scanning pressure is reduced.
The way the reading work of a crashed task reading unit is allocated to other task reading units is consistent with the way reading work is allocated in the first place, and the two can refer to each other; details are not repeated here. When the node information of a task reading unit is registered with the task scheduling unit, it can be registered in the registry of the task scheduling unit as a temporary (ephemeral) node. When the task reading unit crashes and exits abnormally, the temporary node is deleted, so the task scheduling unit knows that the task reading unit corresponding to that temporary node has crashed and exited abnormally, and allocates that task reading unit's work to other task reading units.
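For illustration, the application does not name a registry implementation; the sketch below assumes a ZooKeeper-style registry accessed through the kazoo Python client, in which each task reading unit registers a temporary (ephemeral) node and the task scheduling unit watches the node list to detect crashed readers and reassign their reading work (the paths, host address and rebalance_read_tasks function are illustrative):

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")   # registry address is illustrative
zk.start()
zk.ensure_path("/task_readers")

def register_reader(reader_id: str) -> None:
    """Each task reading unit registers as an ephemeral node; the node
    disappears automatically if the reader crashes or exits abnormally."""
    zk.create(f"/task_readers/{reader_id}", b"", ephemeral=True, makepath=True)

@zk.ChildrenWatch("/task_readers")
def on_readers_changed(readers):
    # The task scheduling unit re-balances the reading work (databases or tables)
    # across the currently live readers whenever the membership changes.
    rebalance_read_tasks(readers)   # assumed scheduler-side function
```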
Optionally, if there is no timing task that needs to be executed in a certain task period, that is, none of the current timing tasks has its execution time in that period, then having the task reading units scan the databases anyway to determine which timing tasks' identity information to read would generate unnecessary database scans and put pressure on the databases. Therefore, in the embodiment of the present application, the task service unit may count the total number of timing tasks in each task period, for example the total number of timing tasks in each task period within a preset time span. When a timing task is added, the count for the task period containing its execution time is increased accordingly; when a timing task is deleted, the count for its task period is decreased accordingly.
In each task period, when the plurality of task reading units read the identity information of the timing tasks of the current task period from the plurality of databases, each task reading unit can first read the total number of timing tasks in the current task period. If the total number read is greater than 0, there are timing tasks to be processed in the current task period, and the task reading unit reads the identity information of the timing tasks of the current task period from its corresponding database; if the total number read is equal to 0, no timing task needs to be processed in the current task period, and the task reading unit can wait for the next task period before reading.
Further, optionally, the task service unit may store the counted number in any one of the above databases or a database dedicated to storing the counted number of timed tasks. The task reading unit may read the number of the timed tasks in each task period from a database storing the number of the timed tasks.
In the embodiment of the present application, the reading of the total number of the timed tasks in each task period by the task reading unit is similar to the reading of the total number of the timed tasks in each task period by the task executing unit in the foregoing embodiment, and may refer to each other.
Step S230: and each task reading unit writes the read identity information into the message queue.
And in each task period, after the task reading unit reads the identity information, writing the read identity information into the message queue. Of course, in the embodiment of the present application, in each task cycle, the message queue may also pull the identity information read by the task reading unit from the task reading unit.
Optionally, the specific manner in which a task reading unit reads the timing tasks of the current task period from its corresponding database is not limited. For example, all of its reading work for the current period can be read at one time; or the identity information can be read at a certain rate and written into the message queue as it is read; or a certain number of timing tasks can be read each time, so that the timing tasks of the current period are read over several batches and written into the message queue.
Optionally, when the task reading unit writes the identity information into the message queue, the task reading unit may write the identity information into the message queue in batches, or write all the read identity information in the current period into the message queue at one time, or write the read identity information in the current period into the message queue one by one.
Step S240: and each task execution unit acquires the identity information of the timing task from the message queue as the specified identity information.
The task execution unit acquires the identity information of the timing task in the message queue and defines the identity information acquired by the task execution unit as the specified identity information.
In one embodiment, the task execution units may obtain the identity information by having the message queue push the identity information obtained from the task reading units to the task execution units. Specifically, the message queue may push the identity information to each task execution unit one item at a time, or batch by batch (i.e., in multiple batches). Optionally, the message queue may push the identity information to the task execution units evenly, so that each task execution unit processes an even share of the timing tasks of the current task period, for example by pushing the same amount of identity information to each task execution unit each time. Optionally, the message queue may push identity information to the task execution unit with the smallest number of unexecuted timing tasks among the plurality of task execution units; or to the task execution units whose number of unexecuted timing tasks is below a preset number; or to the task execution units that have finished executing the timing tasks already allocated to them.
In another embodiment, the task execution units may obtain the identity information by actively reading it from the message queue. Specifically, each task execution unit may read identity information from the message queue one item at a time or a batch at a time. Each task execution unit may read from the message queue when the number of to-be-executed timing tasks it has acquired falls below a certain number; each task execution unit may also read identity information from the message queue after it has executed all the timing tasks corresponding to the identity information already acquired.
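For illustration, a minimal sketch of the pull-based variant just described: a task execution unit keeps a small local backlog and only fetches another batch of identity information from the message queue when the backlog falls below a threshold (the batch size, threshold and execute_by_id callback are illustrative):

```python
import queue

BATCH_SIZE = 10      # illustrative batch size
LOW_WATERMARK = 3    # refill when fewer unexecuted tasks than this remain

def refill_backlog(message_queue: "queue.Queue[str]", backlog: list) -> None:
    """Pull up to BATCH_SIZE identity-information items into the local backlog."""
    while len(backlog) < BATCH_SIZE:
        try:
            backlog.append(message_queue.get_nowait())
        except queue.Empty:
            break

def executor_step(message_queue, backlog: list, execute_by_id) -> None:
    """One step: refill when the backlog is low, then execute one timing task."""
    if len(backlog) < LOW_WATERMARK:
        refill_backlog(message_queue, backlog)
    if backlog:
        execute_by_id(backlog.pop(0))   # look up and run the task for this identity info
```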
Optionally, neither pushing identity information to a task execution unit nor having a task execution unit pull identity information from the message queue needs to take the task period into account: the task execution unit simply executes the timing tasks corresponding to whatever identity information it obtains. However, because the task reading units put the identity information of the timing tasks into the message queue period by period, the task execution units in effect process the timing tasks periodically.
It can be understood that if certain identity information in the message queue is acquired by the task execution unit, the identity information is deleted from the message queue, so as to avoid repeated execution due to secondary acquisition by other task execution units.
Step S250: and each task execution unit acquires the timing task corresponding to the specified identity information from the corresponding database.
After acquiring the identity information, the task execution unit acquires detailed information of the timing task corresponding to the identity information from a database where the timing task corresponding to the identity information is located for execution.
The manner in which the task execution unit determines the database in which the timing task is located may refer to the manner in which the task execution unit determines the database in which the timing task is located in the foregoing embodiment, and details are not repeated here.
Step S260: and each task execution unit executes the acquired timing task.
And each task execution unit executes the timing task according to the acquired detailed information of the timing task.
Timing tasks may change after they are generated; for example, a user may delete a timing task, or the content of a timing task may be modified. A deleted timing task must not be executed any more, otherwise the user's deletion would be ineffective; a modified timing task must be executed according to the latest modified data, otherwise the modification would be ineffective and the execution of the timing task would not match it. Therefore, optionally, the embodiment of the present application may further introduce a storage unit for responding in time to operations such as deletion and modification of timing tasks. The storage unit may be a cache, and the embodiment of the present application is described taking a cache as an example.
In one embodiment, if a timing task is deleted, the task service unit writes the identity information of that timing task into the cache. Before executing an acquired timing task, each task execution unit can judge, from the identity information stored in the cache, whether the acquired timing task has been deleted. If not, the task execution unit executes the timing task; if so, it abandons execution. That is to say, before executing a timing task, the task execution unit checks whether the cache stores the identity information of that timing task: if it does, the timing task has been deleted and is not executed, the execution flow ends, and the next timing task is processed; if the cache does not store the identity information of the timing task, the timing task has not been deleted and needs to be executed normally, so the task execution unit executes it.
In one embodiment, if a timing task is modified, the task service unit writes the identity information of that timing task into the cache. Before executing an acquired timing task, each task execution unit can judge, from the identity information stored in the cache, whether the acquired timing task has been modified. If not, the task execution unit executes the timing task; if so, it obtains the timing task from the database again and executes the reloaded version. That is to say, before executing a timing task, the task execution unit checks whether the cache stores the identity information of that timing task: if the cache does not store it, the timing task has not been modified and is executed according to the detailed information already acquired; if the cache does store it, the timing task has been modified and must be executed according to the modified data, so its detailed information is obtained from the database again for execution.
In addition, the execution time of each timing task falls within one task period, and a timing task that has already been executed successfully within a task period must not be executed again in that period, otherwise it would be executed repeatedly. Therefore, in one embodiment, after a task execution unit successfully executes a timing task, it writes the identity information of that timing task into the cache. Of course, after the task execution unit successfully executes a timing task, another unit, such as the task service unit, may write the identity information into the cache instead. Before executing an acquired timing task, each task execution unit can judge, from the identity information stored in the cache, whether the acquired timing task has already been executed successfully in the current task period. If not, the task execution unit executes the timing task; if so, the timing task has already been executed successfully and is not executed again, the execution flow ends, and the next timing task is processed.
In this embodiment, the above three embodiments of caching and storing identity information may be set simultaneously. When the three implementation modes are set simultaneously, the cache can be divided into different storage spaces, and the identity information of the deleted timing task, the identity information of the modified timing task and the identity information of the successfully executed timing task are stored in the different storage spaces. When the task execution unit judges whether the identity information of the timing task is stored in the cache, if so, the task execution unit determines whether the timing task corresponding to the identity information is deleted, modified or successfully executed according to the space in which the identity information is stored.
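For illustration, a minimal sketch of the pre-execution cache check when the three kinds of identity information are stored in separate key spaces: deleted and already-executed tasks are skipped, modified tasks are reloaded from the database before execution, and a successful execution is recorded with a storage duration shorter than the task period (the key prefixes and the Redis-like cache/db interfaces are assumed, not specified by the application):

```python
CACHE_TTL_SECONDS = 50   # illustrative; shorter than a 60-second task period

def check_and_execute(task, cache, db):
    """Consult the cache before executing a timing task obtained from the database."""
    if cache.exists(f"deleted:{task.id}"):
        return                                   # deleted: abandon execution
    if cache.exists(f"executed:{task.id}"):
        return                                   # already executed successfully this period
    if cache.exists(f"modified:{task.id}"):
        task = db.load(task.id)                  # modified: reload the latest data
    task.execute()
    # record success so the task is not executed twice within the same period;
    # the entry expires before the next task period so later judgments stay correct
    cache.set(f"executed:{task.id}", "1", ttl=CACHE_TTL_SECONDS)
```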
In addition, in the embodiment of the present application, after a task execution unit has, according to the identity information of a timing task in the cache, given up executing that timing task or re-obtained its information, identity information left in the cache could cause a wrong judgment the next time it is checked: a successfully executed task could be executed repeatedly, a deleted task could be executed, or a modified timing task could be re-obtained and executed repeatedly. To avoid this, in the embodiment of the present application the cache may be cleaned.
Specifically, cleaning the cache means that the task service unit sets a storage duration for the identity information stored in the cache, and when a piece of cached identity information reaches its storage duration the task service unit deletes it from the cache. For example, the storage duration of the identity information of a deleted timing task in the cache may be set to T1, and when that identity information has been stored in the cache for T1 it is deleted from the cache; the storage duration of the identity information of a modified timing task in the cache may be set to T2, and when that identity information has been stored in the cache for T2 it is deleted from the cache.
In the embodiment of the application, the storage duration of the identity information stored in the cache can be set to be smaller than the task period, so that the task execution unit can normally judge according to the identity information in the cache in the next task period.
Optionally, each timing task has an execution time; if the moment at which a timing task is about to be executed is later than its execution time, and the amount by which it is later reaches a preset threshold, the timing task may no longer be executed. Specifically, before executing an acquired timing task, each task execution unit may further judge whether the duration by which the current time exceeds the execution time of the timing task reaches the preset threshold. If the current time does not exceed the execution time of the timing task, or exceeds it by less than the preset threshold, the judgment result is negative and the task execution unit executes the timing task. If the current time exceeds the execution time of the timing task by at least the preset threshold, the judgment result is positive: the task execution unit abandons execution of the timing task and the next execution time of the timing task is updated. The specific value of the preset threshold is not limited; it may be, for example, one task period, or the smallest time unit corresponding to the time precision of the timing task (for example, if the time precision of the timing task is minutes, the preset threshold is 1 minute). The next execution time of the timing task may be updated by adding the execution period of the timing task to its current execution time. The next execution time can be updated directly by the task execution unit or handed over to another unit, such as the task service unit, for updating.
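For illustration, a minimal sketch of the overdue check described above: if the current time is past the task's execution time by at least a preset threshold, this run is abandoned and the next execution time is advanced by one execution period (the threshold value and task attributes are illustrative):

```python
from datetime import datetime, timedelta

OVERDUE_THRESHOLD = timedelta(minutes=1)   # e.g. one task period

def run_if_not_overdue(task, now: datetime) -> bool:
    """Execute the task unless it is overdue beyond the threshold.
    Returns True if the task was executed."""
    if now - task.execution_time >= OVERDUE_THRESHOLD:
        # give up this run and schedule the next one
        task.execution_time = task.execution_time + task.execution_period
        return False
    task.execute()
    return True
```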
In addition, in the embodiment of the present application, after the task execution unit executes the timing task, different processing manners may be used for the timing task that is successfully executed and the timing task that is failed to be executed.
If a timing task is executed successfully, its next execution time can be updated, and the timing task will be acquired again for execution in the task period containing that next execution time.
If a timing task fails to execute, the task execution unit or the task service unit may put the identity information of the timing task back into the message queue so that it continues to wait for execution. For a timing task that failed to execute, if the duration by which the current time exceeds its execution time reaches the preset threshold, its identity information is no longer put into the message queue and its next execution time is updated.
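For illustration, a minimal sketch of the post-execution handling described above: on success the next execution time is advanced; on failure the identity information is re-queued unless the task is already overdue beyond the threshold (names and task attributes are illustrative):

```python
from datetime import datetime, timedelta

OVERDUE_THRESHOLD = timedelta(minutes=1)   # illustrative; same threshold as above

def handle_result(task, succeeded: bool, now: datetime, message_queue) -> None:
    """Post-execution handling for one timing task (task attributes are assumed)."""
    if succeeded:
        # success: schedule the next run; it will be picked up in its new task period
        task.execution_time += task.execution_period
        return
    if now - task.execution_time >= OVERDUE_THRESHOLD:
        # failed and already past the threshold: stop re-queuing, advance to the next run
        task.execution_time += task.execution_period
    else:
        # failed but still within the threshold: re-queue its identity information
        message_queue.put(task.id)
```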
In the embodiment of the present application, each unit may be a software module, a hardware module, an application program, a service process, a server, an electronic device, and the like, which are not limited in the embodiment of the present application. When each unit is a hardware device such as an electronic device or a server, the functions of different units may be implemented by the same hardware device, or the functions of different units may be implemented by different hardware devices, or the functions of some units may be implemented by one hardware device, and the functions of some units may be implemented by a plurality of hardware devices.
In addition, the steps performed by the respective units are not limited. That is, a step executed by one unit may also be executed by another unit; for example, the steps executed by the task scheduling unit may also be executed by the task service unit. A step executed by one unit may also be split among several units. The embodiments of the present application may also include other units that perform some of the steps. In the task processing method provided by the embodiments of the present application, which units actually execute each step, and how many units do so, is not limited, as long as the method provided by the embodiments of the present application can be implemented.
In the task processing method provided by the embodiment of the application, the reading and the execution of timing tasks are decoupled asynchronously through the task reading units and the message queue, which adapts well to scaling of the task execution units, so that task execution units can be conveniently added or removed according to the actual number of timing tasks to be processed. In addition, operations such as deletion and modification of a timing task are responded to in time through the cache, which avoids repeated or erroneous execution of the timing task.
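The decoupling of reading and execution can be pictured with the following sketch. An in-process queue.Queue stands in for the message queue, and db.query_ids, load_task and task.run are hypothetical helpers; in a real deployment the queue would be a distributed message queue so that execution units can be added or removed freely.

```python
import queue

id_queue = queue.Queue()  # stands in for the message queue of the embodiment

def reader_loop(db, period_start, period_end):
    """Task reading unit: write only the identity information (task IDs) into the queue."""
    for task_id in db.query_ids(period_start, period_end):        # hypothetical DB helper
        id_queue.put(task_id)

def executor_loop(databases, shard_of):
    """Task execution unit: consume IDs at its own pace, then load and run the task."""
    while True:
        task_id = id_queue.get()
        task = databases[shard_of(task_id)].load_task(task_id)    # hypothetical DB helper
        task.run()
```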
Referring to fig. 5, an embodiment of the present application further provides a task processing device 300. The software modules included in the task processing device 300 may be a task service unit 310 and task execution units 320.
The task service unit 310 is configured to allocate a plurality of timing tasks to a plurality of databases, where each timing task corresponds to an execution time. In each task period, the plurality of task execution units 320 are configured to acquire, from the plurality of databases, the timing tasks whose execution time is within the current task period, where the timing tasks read by different task execution units 320 are different. Each task execution unit 320 is further configured to execute the acquired timing tasks.
Optionally, in each task period, each task execution unit 320 is configured to obtain, as the specified identity information, the identity information of a timing task from the message queue, where the identity information in the message queue is the identity information of the timing tasks in the current task period obtained from the plurality of databases, and a timing task in the current task period is a timing task whose execution time is in the current task period; each task execution unit 320 then obtains the timing task corresponding to the specified identity information from the corresponding database.
As shown in fig. 6, the device may optionally further include a plurality of task reading units 330, configured to read, from the plurality of databases, the identity information of the timing tasks in the current task period; each task reading unit 330 is configured to write the read identity information into the message queue.
Optionally, the task service unit 310 may further be configured to count the total number of timing tasks in each task period. Each task reading unit 330 may read the total number of timing tasks in the current task period; if the total number read is greater than 0, it reads the identity information of the timing tasks in the current task period from the corresponding database, and if the total number read is equal to 0, it waits for the next task period.
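A sketch of this counting gate, with assumed helper names, might look as follows; the point is simply that an empty period costs no database reads.

```python
def reader_tick(task_service, db, period_start, period_end, push):
    """One task period of a reading unit: skip the database when the period is empty."""
    total = task_service.total_tasks(period_start, period_end)   # counted by the task service unit
    if total == 0:
        return                       # total equals 0: wait for the next task period
    for task_id in db.query_ids(period_start, period_end):       # total greater than 0: read IDs
        push(task_id)                # write identity information into the message queue
```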
Optionally, as shown in fig. 6, the embodiment of the present application may further include a task scheduling unit 340, configured to set the database that each task reading unit needs to read as the reading task of the corresponding task reading unit.
Optionally, if a task reading unit 330 exits abnormally, the task scheduling unit 340 may allocate the reading task of that task reading unit 330 to another task reading unit 330.
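One possible (assumed) shape of such a scheduling unit is sketched below: databases are assigned round-robin to reading units, and the databases of a unit that exits abnormally are handed to the surviving units.

```python
class TaskScheduler:
    """Assigns databases to reading units and reassigns them on abnormal exit."""

    def __init__(self, reader_ids, database_ids):
        self.assignments = {r: [] for r in reader_ids}
        for i, db in enumerate(database_ids):                 # simple round-robin split
            self.assignments[reader_ids[i % len(reader_ids)]].append(db)

    def on_reader_exit(self, dead_reader):
        orphaned = self.assignments.pop(dead_reader, [])
        survivors = sorted(self.assignments)                  # remaining reading units
        if not survivors:
            return orphaned                                   # nothing left to reassign to
        for i, db in enumerate(orphaned):                     # hand the orphaned databases over
            self.assignments[survivors[i % len(survivors)]].append(db)
```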
Optionally, the task service unit 310 may be configured to allocate the plurality of timing tasks to the plurality of databases according to a preset fragmentation algorithm. Each task execution unit 320 may be configured to determine, according to the same preset fragmentation algorithm, the database in which the timing task corresponding to the specified identity information is stored, and to acquire the timing task corresponding to the specified identity information from the determined database.
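The disclosure does not fix a particular fragmentation algorithm; a stable hash of the task identity is one assumed example. What matters is that the storing side and the loading side compute the same mapping.

```python
import zlib

NUM_DATABASES = 4   # assumed shard count

def shard_of(task_id: str) -> int:
    """Map a task ID to a database index; used identically when storing and when loading."""
    return zlib.crc32(task_id.encode("utf-8")) % NUM_DATABASES
```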
Optionally, if a timing task is deleted, the task service unit 310 may be configured to write the identity information of the timing task into the cache. Each task execution unit 320 may be configured to determine, according to the identity information stored in the cache, whether the acquired timing task is a deleted timing task; if not, the task execution unit executes the timing task, and if so, the task execution unit abandons the execution of the timing task.
Optionally, if a timing task is modified, the task service unit 310 may further be configured to write the identity information of the timing task into the cache. Each task execution unit 320 may be configured to determine, according to the identity information stored in the cache, whether the acquired timing task has been modified; if not, the task execution unit 320 executes the timing task, and if so, the task execution unit re-acquires the timing task from the database and executes the re-acquired version.
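The two cache checks can be combined as in the sketch below; the key prefixes "deleted:" and "modified:" and the helper db.load_task are assumptions for illustration only.

```python
def resolve_task(task, cache, db):
    """Consult the cache before executing a timing task taken from the message queue."""
    if cache.get(f"deleted:{task.task_id}"):
        return None                          # deleted after being queued: abandon execution
    if cache.get(f"modified:{task.task_id}"):
        return db.load_task(task.task_id)    # modified after being queued: run the latest version
    return task                              # unchanged: execute the task as acquired
```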
Optionally, the task service unit 310 may be configured to set a storage duration for the identity information stored in the cache; when the cached identity information reaches the storage duration, the task service unit 310 deletes the identity information from the cache.
Optionally, the storage duration is less than the task period.
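With a Redis-style cache client (an assumption of this sketch), the storage duration can simply be an expiry that is shorter than the task period, so a marker never outlives the period in which it matters.

```python
TASK_PERIOD_SECONDS = 300      # assumed task period of 5 minutes
STORAGE_DURATION = 240         # storage duration, deliberately shorter than the task period

def mark_deleted(cache, task_id):
    # The marker only needs to survive until the current period's executions finish;
    # the expiry keeps stale identity information from accumulating in the cache.
    cache.set(f"deleted:{task_id}", 1, ex=STORAGE_DURATION)
```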
Optionally, each task execution unit 320 may be configured to determine whether the duration by which the current time exceeds the execution time of the timing task reaches a preset threshold; if not, the task execution unit 320 executes the timing task, and if so, the task execution unit 320 abandons the execution of the timing task and updates the next execution time of the timing task.
Optionally, for a timing task that fails to execute, the task execution unit 320 or the task service unit 310 may be configured to put the identity information of the timing task into the message queue to continue waiting for execution.
Optionally, for a timing task that is successfully executed, the task execution unit 320 or the task service unit 310 may be configured to update the next execution time of the timing task.
Optionally, the task service unit 310 may be configured to count the total number of timing tasks in each task period. For each task execution unit 320, the task execution unit may be configured to read the total number of timed tasks in the current task period; and if the total number of the read timing tasks is larger than 0, acquiring the timing tasks in the current task period from the corresponding database, and if the total number of the read timing tasks is equal to 0, waiting for the next task period.
Optionally, the task scheduling unit 340 may be configured to uniformly set, for each task execution unit, the database from which that task execution unit needs to acquire timing tasks.
As shown in fig. 7, an embodiment of the present application further provides a task processing system 400. The system includes a task service unit 410, task execution units 420, and databases 430. The task service unit 410 is configured to allocate a plurality of timing tasks to a plurality of databases 430, where each timing task corresponds to an execution time. In each task period, the plurality of task execution units 420 acquire, from the plurality of databases 430, the timing tasks whose execution time is within the current task period, where the timing tasks read by different task execution units 420 are different. Each task execution unit 420 executes the acquired timing tasks.
In the system, the task service unit 410, the task execution units 420, and the databases 430 may be deployed on different servers or on the same server.
The system may further include other units such as a task scheduling unit. For the functions of the units included in the system, reference may be made to the foregoing method and apparatus embodiments; details are not described herein again.
It will be clear to those skilled in the art that, for convenience and brevity of description, the method embodiments described above may be referred to for one another, and the implementations within each method embodiment may likewise be referred to for one another. For the specific working processes of the devices and modules described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
The solution provided by the embodiment of the application can be used in scenarios with a large number of periodic timing tasks in a distributed environment. When timing tasks are executed in a distributed environment, the pressure on each node of the task execution service (composed of a plurality of task execution units) is balanced by the distributed timing task scheduling service, a message queue is used for asynchronous processing to cope with the difference between the production and consumption speeds of timing tasks, and timing tasks are cleaned up through the cache, so that the reliability and real-time performance of timing task execution and the overall availability of the service can be guaranteed.
Referring to fig. 8, a block diagram of an electronic device 500 according to an embodiment of the present application is shown. The electronic device 500 may be an intelligent terminal such as a mobile phone, a computer, or a tablet, or may be a server. The electronic device 500 includes one or more processors 510 (only one of which is shown), a memory 520, and one or more programs. The one or more programs are stored in the memory 520, configured to be executed by the one or more processors 510, and configured to perform the methods described in the foregoing embodiments.
In the embodiment of the present application, the one or more programs may be, for example, an application program or a quick application (fast app).
The processor 510 may include one or more processing cores. The processor 510 connects various parts of the electronic device 500 using various interfaces and lines, and performs various functions of the electronic device 500 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 520 and by invoking data stored in the memory 520. Optionally, the processor 510 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 510 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like.
The memory 520 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 520 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 520 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function, instructions for implementing the method embodiments described above, and the like. The data storage area may store data created by the electronic device 500 in use, and the like.
Referring to fig. 9, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 600 has stored therein program code that can be called by a processor to execute the method described in the above-described method embodiments.
The computer-readable storage medium 600 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 600 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 600 has storage space for program code 610 for performing any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 610 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (20)

  1. A method for processing a task, the method comprising:
    the task service unit distributes a plurality of timing tasks to a plurality of databases, wherein each timing task corresponds to an execution time;
    in each task period, a plurality of task execution units acquire timing tasks with execution time in the current task period in a plurality of databases, wherein the timing tasks read by different task execution units are different;
    and each task execution unit executes the acquired timing task.
  2. The method according to claim 1, wherein in each task cycle, the obtaining, by the task execution units, the timed task with the execution time within the current task cycle in the databases comprises:
    in each of the task periods,
    each task execution unit acquires the identity information of the timed task from the message queue as the designated identity information, wherein the identity information in the message queue is the identity information of the timed task in the current task period acquired from the plurality of databases, and the timed task in the current task period is the timed task with the execution time in the current task period;
    and each task execution unit acquires the timing task corresponding to the specified identity information from the corresponding database.
  3. The method according to claim 2, wherein the acquiring, by each task execution unit, of the identity information of the timing task from the message queue as the specified identity information comprises:
    a plurality of task reading units read the identity information of the timing task in the current task period from the plurality of databases;
    and each task reading unit writes the read identity information into the message queue.
  4. The method of claim 3, further comprising:
    the task service unit counts the total number of the timing tasks in each task period;
    the task reading units read the identity information of the timing task in the current task period from the databases, and the reading includes:
    for each task reading unit,
    reading the total number of timing tasks in the current task period;
    if the total number of the read timing tasks is larger than 0, reading the identity information of the timing tasks in the current task period from the corresponding database,
    and if the total number of the read timing tasks is equal to 0, waiting for the next task period.
  5. The method according to claim 3, wherein before the plurality of task reading units read the identity information of the timed task in the current task cycle from the plurality of databases, the method further comprises:
    the task scheduling unit sets a database which needs to be read by each task reading unit as a reading task of the corresponding task reading unit.
  6. The method of claim 5, further comprising:
    and if the task reading unit exits abnormally, the task scheduling unit distributes the reading task of the task reading unit to other task reading units.
  7. The method of claim 2, wherein the distributing, by the task service unit, of the plurality of timing tasks to the plurality of databases comprises:
    the task service unit distributes the plurality of timing tasks to a plurality of databases according to a preset fragmentation algorithm;
    and the acquiring, by each task execution unit, of the timing task corresponding to the specified identity information from the corresponding database comprises:
    each task execution unit determines, according to the preset fragmentation algorithm, the database in which the timing task corresponding to the specified identity information is stored;
    and acquiring a timing task corresponding to the specified identity information from the determined database.
  8. The method according to any one of claims 1-7, further comprising:
    if the timing task is deleted, the task service unit writes the identity information of the timing task into a cache;
    before each task execution unit executes the acquired timing task, the method further comprises the following steps:
    for each of the task execution units,
    judging whether the acquired timing task is a deleted timing task or not according to the identity information stored in the cache;
    if not, the task execution unit executes the timing task,
    if yes, the task execution unit abandons the execution of the timing task.
  9. The method according to any one of claims 1-8, further comprising:
    if the timing task is modified, the task service unit writes the identity information of the timing task into a cache;
    before each task execution unit executes the acquired timing task, the method further comprises the following steps:
    for each of the task execution units,
    judging whether the acquired timing task is modified or not according to the identity information stored in the cache;
    if not, the task execution unit executes the timing task,
    if yes, the task execution unit acquires the timing task from the database again for execution.
  10. The method according to claim 8 or 9, characterized in that the method further comprises:
    the task service unit sets storage duration for the identity information stored in the cache;
    and when the cached identity information reaches the storage time, the task service unit deletes the identity information from the cache.
  11. The method of claim 10, wherein the storage duration is less than the task period.
  12. The method according to any one of claims 1 to 11, wherein before each task execution unit executes the acquired timing task, the method further includes:
    for each of the task execution units,
    judging whether the duration of the current time exceeding the execution time of the timing task reaches a preset threshold value or not;
    if not, the task execution unit executes the timing task,
    if yes, the task execution unit abandons the execution of the timing task and updates the next execution time of the timing task.
  13. The method according to any one of claims 2 to 7, wherein after each task execution unit executes the acquired timing task, the method further includes:
    and for the timing task which fails to execute, the task execution unit or the task service unit puts the identity information of the timing task into the message queue to continue to wait for execution.
  14. The method according to any one of claims 1 to 13, wherein after each task execution unit executes the acquired timing task, the method further includes:
    and updating the next execution time for the timing task successfully executed.
  15. The method of claim 1, further comprising:
    the task service unit counts the total number of the timing tasks in each task period;
    the task execution units acquire the timing tasks with the execution time in the current task period in the databases, and the acquiring includes:
    for each of the task execution units,
    reading the total number of timing tasks in the current task period;
    if the total number of the read timing tasks is larger than 0, acquiring the timing tasks in the current task period from the corresponding database,
    and if the total number of the read timing tasks is equal to 0, waiting for the next task period.
  16. The method of claim 1, further comprising:
    the task scheduling unit uniformly sets a database for each task execution unit to acquire the timing task.
  17. A task processing system, comprising a task service unit, a task execution unit, and a database, wherein,
    the task service unit is used for distributing a plurality of timing tasks to a plurality of databases, wherein each timing task corresponds to an execution time;
    in each task period, a plurality of task execution units acquire timing tasks with execution time in the current task period in a plurality of databases, wherein the timing tasks read by different task execution units are different;
    and each task execution unit executes the acquired timing task.
  18. A task processing apparatus, characterized in that the apparatus comprises:
    the task service unit is used for distributing a plurality of timing tasks to a plurality of databases, wherein each timing task corresponds to one execution time;
    in each task period, the plurality of task execution units are used for acquiring timing tasks with execution time in the current task period from the plurality of databases, wherein the timing tasks read by different task execution units are different;
    each task execution unit is further configured to execute the acquired timing task.
  19. An electronic device, comprising:
    one or more processors;
    a memory;
    one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-16.
  20. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 16.
CN201980089267.3A 2019-05-16 2019-05-16 Task processing method, device and system, electronic equipment and storage medium Pending CN113302593A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/087307 WO2020228036A1 (en) 2019-05-16 2019-05-16 Task processing method and apparatus, system, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN113302593A (en) 2021-08-24

Family

ID=73288940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980089267.3A Pending CN113302593A (en) 2019-05-16 2019-05-16 Task processing method, device and system, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113302593A (en)
WO (1) WO2020228036A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115185667A (en) * 2022-09-13 2022-10-14 天津市天河计算机技术有限公司 Visual application acceleration method and device, electronic equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113220780B (en) * 2021-04-29 2023-12-05 北京字跳网络技术有限公司 Data processing method, device, equipment and medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8234643B2 (en) * 2006-12-13 2012-07-31 Sap Ag CRON time processing implementation for scheduling tasks within a multi-tiered enterprise network
RU2543316C2 (en) * 2012-12-25 2015-02-27 Закрытое акционерное общество "Лаборатория Касперского" System and method of fail-safe execution of scheduled tasks in distributed media
CN103197969B (en) * 2013-03-27 2017-02-08 百度在线网络技术(北京)有限公司 Distributed timed task control device and method
CN105100259B (en) * 2015-08-18 2018-02-16 北京京东尚科信息技术有限公司 A kind of distributed timing task executing method and system
CN107566460B (en) * 2017-08-16 2020-06-05 微梦创科网络科技(中国)有限公司 Method and system for distributed deployment of planning tasks
CN108182108A (en) * 2017-12-19 2018-06-19 山东浪潮商用系统有限公司 A kind of timed task cluster and its execution method
CN108762911A (en) * 2018-06-13 2018-11-06 平安科技(深圳)有限公司 Timing task management method, apparatus, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2020228036A1 (en) 2020-11-19

Similar Documents

Publication Publication Date Title
CN108683720B (en) Container cluster service configuration method and device
CN110096336B (en) Data monitoring method, device, equipment and medium
CN110941481A (en) Resource scheduling method, device and system
EP2254049A2 (en) Job scheduling apparatus and job scheduling method
CN110677462B (en) Access processing method, system, device and storage medium for multi-block chain network
CN111104227B (en) Resource control method and device of K8s platform and related components
CN110109741B (en) Method and device for managing circular tasks, electronic equipment and storage medium
CN113302593A (en) Task processing method, device and system, electronic equipment and storage medium
CN112231108A (en) Task processing method and device, computer readable storage medium and server
CN107122271B (en) Method, device and system for recovering node event
CN112214288B (en) Pod scheduling method, device, equipment and medium based on Kubernetes cluster
CN112965817B (en) Resource management method and device and electronic equipment
CN112650566B (en) Timed task processing method and device, computer equipment and storage medium
CN111880910A (en) Data processing method and device, server and storage medium
CN111008071A (en) Task scheduling system, method and server
CN106775889B (en) Method and system for loading Flash player resources by using object pool
CN111857992B (en) Method and device for allocating linear resources in Radosgw module
CN111158896A (en) Distributed process scheduling method and system
CN114780296A (en) Data backup method, device and system for database cluster
CN114675954A (en) Task scheduling method and device
CN114675950A (en) Task scheduling method and device
CN107168685B (en) Method and device for updating script and computer terminal
CN115168057B (en) Resource scheduling method and device based on k8s cluster
CN112579269A (en) Timed task processing method and device
CN113886040A (en) Task scheduling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination