CN112579269A - Timed task processing method and device - Google Patents

Timed task processing method and device

Info

Publication number
CN112579269A
CN112579269A (application CN202011410625.7A)
Authority
CN
China
Prior art keywords
task
instance
identifier
timing
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011410625.7A
Other languages
Chinese (zh)
Other versions
CN112579269B (en)
Inventor
刘沛峰
陈朝亮
卢道和
陈晔
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202011410625.7A priority Critical patent/CN112579269B/en
Priority claimed from CN202011410625.7A external-priority patent/CN112579269B/en
Publication of CN112579269A publication Critical patent/CN112579269A/en
Application granted
Publication of CN112579269B publication Critical patent/CN112579269B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Abstract

The invention discloses a timed task processing method and device. The method comprises: acquiring a timed task and storing it in a cache; distributing the timed tasks in the cache among the instances according to the identifiers of the timed tasks and the identifiers of the instances; and synchronizing the timed tasks into the local cache of each instance at a preset time interval, so that each instance acquires, at a preset frequency, the timed tasks meeting the time condition from its own local cache for processing. Because each instance processes timed tasks from its own local cache rather than fetching them from the server's cache in real time, server throughput is not increased and the server's cache performance is not degraded.

Description

Timed task processing method and device
Technical Field
The invention relates to the field of financial technology (Fintech), and in particular to a timed task processing method and device.
Background
With the development of computer technology, more and more technologies are being applied in the financial field, and the traditional financial industry is gradually shifting toward financial technology. However, the financial industry's demands for security and real-time performance place higher requirements on these technologies. In a customer service operation system in the financial field, how to handle timed tasks is an important issue.
At present, when timed tasks are processed in a customer service operation system, the background server typically sets up the timed tasks by creating a different timer for each task. When the task scenarios are complex and the tasks numerous, setting a separate timer for every task consumes excessive system resources and greatly degrades server performance.
Disclosure of Invention
The embodiments of the invention provide a timed task processing method and device for improving server performance and reducing the system resources occupied.
In a first aspect, an embodiment of the present invention provides a method for processing a timing task, including:
acquiring a timing task and storing the timing task in a cache;
distributing the timed tasks in the cache to the instances according to the identifiers of the timed tasks in the cache and the identifiers of the instances;
and synchronizing the timed tasks into the local cache of each instance at a preset time interval, so that each instance acquires, at a preset frequency, the timed tasks meeting the time condition from its own local cache for processing.
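The three claimed steps can be sketched in a few lines. This is an illustrative Python sketch only: the patent names no concrete data structures, so the `Server` class, the per-instance dictionaries, the `poll` function, and the placeholder `assign` method (the patent's actual assignment uses hashed positions on an integer range, described below) are all hypothetical.

```python
class Server:
    def __init__(self, instances):
        self.cache = {}             # task id -> (run_at, content)
        self.instances = instances  # instance id -> that instance's local cache

    def store(self, task_id, run_at, content):
        # step 1: acquire a timed task and store it in the (server-side) cache
        self.cache[task_id] = (run_at, content)

    def assign(self, task_id):
        # step 2 (placeholder): the patent assigns by hashed positions on a
        # preset integer range; a simple modulo over instance ids stands in here
        ids = sorted(self.instances)
        return ids[hash(task_id) % len(ids)]

    def sync(self):
        # step 3: at a preset time interval, copy each task into the local
        # cache of the instance it is assigned to
        for task_id, entry in self.cache.items():
            self.instances[self.assign(task_id)][task_id] = entry

def poll(local_cache, now):
    # each instance, at a preset frequency, takes the tasks whose time has come
    return [tid for tid, (run_at, _) in local_cache.items() if run_at < now]
```

Because instances only call `poll` against their own local cache, no per-task request ever reaches the server cache, which is the benefit the claims describe.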
In this solution, when an instance processes a timed task, it obtains the task from its own local cache rather than fetching it from the server's cache in real time, so server throughput is not increased and the server's cache performance is not degraded.
Optionally, the allocating the timing task in the cache to each instance according to the identifier of the timing task in the cache and the identifier of each instance includes:
for any timed task in the cache and any instance, hashing the identifier of the timed task and the identifier of the instance, and determining the positions of the two identifiers within a preset integer range;
if the position of a first timed task in the cache is smaller than the position of a first instance, and that instance's position is the closest such position, allocating the first timed task to the first instance;
and if the position of the first timed task in the cache is larger than the positions of all instances, allocating the first timed task to the instance closest to the start of the preset integer range.
In this solution, tasks are allocated to instances by their positions within the preset integer range. This distributes tasks among the instances reasonably, improves the processing efficiency of timed tasks, and prevents the same timed task from being processed by multiple instances, reducing the risk of a task being allocated more than once.
Optionally, after allocating the timing task in the cache to each instance, the method further includes:
and if any instance fails, clearing the failed instance from the preset integer range and reallocating the timed tasks that had been allocated to it.
In this solution, clearing the failed instance and reallocating its timed tasks ensures that the timed tasks can still be executed normally.
Optionally, hashing the identifier of the timed task and the identifier of the instance and determining their positions within the preset integer range includes:
hashing the identifier of the timed task and the identifier of the instance to obtain their hash values;
and taking each hash value modulo the size of the preset integer range to obtain the position of the corresponding identifier within that range.
In this solution, taking the hashed identifier modulo the range size confines the result to the preset integer range, so comparing the reduced value against the range quickly locates the identifier's position within it.
Optionally, the timing task includes an identifier, time, and task content;
the storing the timed task in a cache includes:
and, according to the identifier of the timed task, storing the time of the timed task in a first queue of the cache and the task content of the timed task in a second queue of the cache.
In this solution, storing the time and the task content of a timed task in two separate queues simplifies the storage structure in the cache. Because both queues are keyed by the task identifier, the time and content of a timed task can be found quickly from its identifier, allowing instances to read them efficiently.
Optionally, a timed task meeting the time condition is one whose scheduled time is earlier than the timestamp of the current time.
In this solution, executing only the timed tasks whose timestamps are earlier than the current time improves the efficiency of locating runnable tasks.
Optionally, the local cache of each instance is a distributed database.
Using a distributed database to store the timed tasks increases the storage capacity available for them.
In a second aspect, an embodiment of the present invention provides a timed task processing apparatus, including:
an acquisition unit, configured to acquire a timed task and store the timed task in a cache;
a processing unit, configured to distribute the timed tasks in the cache among the instances according to the identifiers of the timed tasks in the cache and the identifiers of the instances, and to synchronize the timed tasks into each instance's local cache at a preset time interval, so that each instance acquires, at a preset frequency, the timed tasks meeting the time condition from its own local cache for processing.
Optionally, the processing unit is specifically configured to:
for any timed task in the cache and any instance, hash the identifier of the timed task and the identifier of the instance, and determine the positions of the two identifiers within a preset integer range;
if the position of a first timed task in the cache is smaller than the position of a first instance, and that instance's position is the closest such position, allocate the first timed task to the first instance;
and if the position of the first timed task in the cache is larger than the positions of all instances, allocate the first timed task to the instance closest to the start of the preset integer range.
Optionally, the processing unit is further configured to:
after the timed tasks in the cache are distributed to the instances, if any instance fails, remove the failed instance from the preset integer range and reallocate the timed tasks that had been allocated to it.
Optionally, the processing unit is specifically configured to:
hash the identifier of the timed task and the identifier of the instance to obtain their hash values;
and take each hash value modulo the size of the preset integer range to obtain the position of the corresponding identifier within that range.
Optionally, the timing task includes an identifier, time, and task content;
the obtaining unit is specifically configured to:
and, according to the identifier of the timed task, store the time of the timed task in a first queue of the cache and the task content of the timed task in a second queue of the cache.
Optionally, a timed task meeting the time condition is one whose scheduled time is earlier than the timestamp of the current time.
Optionally, the local cache of each instance is a distributed database.
In a third aspect, an embodiment of the present invention further provides a computing device, including:
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the timed task processing method according to the obtained program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable non-volatile storage medium, which includes computer-readable instructions, and when the computer-readable instructions are read and executed by a computer, the computer is caused to execute the above timed task processing method.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for processing a timed task according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a timing task allocation according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a timing task allocation according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a timing task allocation according to an embodiment of the present invention;
FIG. 6 is a timing task synchronization diagram according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a queue according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of time acquisition according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating task content acquisition according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a timed task processing device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a system architecture provided in an embodiment of the present invention. As shown in fig. 1, the system architecture may be a server 100, and the server 100 may include a processor 110, a communication interface 120, and a memory 130.
The communication interface 120 is used for communicating with a terminal device, and transceiving information transmitted by the terminal device to implement communication.
The processor 110 is a control center of the server 100, connects various parts of the entire server 100 using various interfaces and lines, performs various functions of the server 100 and processes data by running or executing software programs and/or modules stored in the memory 130 and calling data stored in the memory 130. Alternatively, processor 110 may include one or more processing units.
The memory 130 may be used to store software programs and modules, and the processor 110 executes various functional applications and data processing by operating the software programs and modules stored in the memory 130. The memory 130 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to a business process, and the like. Further, the memory 130 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
It should be noted that the structure shown in fig. 1 is only an example, and the embodiment of the present invention is not limited thereto.
In current customer service operation system scenarios, the front end periodically polls a service method to implement a business function. However, this approach is prone to an excessively high polling frequency, which can overload the server and bring the system down.
To solve the above problems, fig. 2 shows the flow of a timed task processing method according to an embodiment of the present invention. The flow may be executed by a timed task processing device, which may be the server described above or may be located in the server.
As shown in fig. 2, the process specifically includes:
step 201, acquiring a timing task, and storing the timing task in a cache.
In the embodiment of the invention, a timed task comprises an identifier, a time, and task content; the task content may be a task callback method. Because a timed task contains both its time and its content, storing it directly in the cache as a single entry would make the cached data large and slow to read. To avoid this, the embodiment caches timed tasks as follows: each timed task is stored across two queues in the cache. According to the task identifier, the time of the timed task is stored in a first queue and the task content in a second queue. This cache is the server's cache, which can also be regarded as a remote cache relative to each instance's own local cache. The local cache may be a distributed database; using a distributed database to store the timed tasks increases the storage capacity available for them.
Storing the time and the task content in two separate queues simplifies the storage structure in the cache. Because both queues are keyed by the task identifier, the time and content of a timed task can be found quickly from its identifier, allowing instances to read them efficiently.
For example, as shown in fig. 3, after a timed task is generated, the server may store the task content to be executed (CALLBACK) and the execution time of the task (TIME) in two different queues in the cache, ensuring that the KEY values in the two queues are consistent. KEY is the identifier of the timed task.
As shown in fig. 3, queue 1 stores the time point of task execution: KEY is the unique identifier, and VALUE is the timestamp of the execution time.
Queue 2 stores the task content of the task to be executed: KEY is the unique identifier, and VALUE is the task callback method.
A task's execution time is matched to its corresponding task content by the KEY values shared across the two queues.
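The two-queue layout of fig. 3 can be sketched as follows. This is an illustrative Python sketch: the patent does not name a concrete store, so the two dictionaries and the `store_task`/`load_task` helpers are hypothetical stand-ins for the cache's two KEY-matched queues.

```python
queue1 = {}  # KEY -> VALUE: timestamp (ms) at which the task should run
queue2 = {}  # KEY -> VALUE: task content (a serialized callback in the patent)

def store_task(key, run_at_ms, callback):
    # store the time and the content separately under the same KEY,
    # keeping the KEY values of the two queues consistent
    queue1[key] = run_at_ms
    queue2[key] = callback

def load_task(key):
    # the shared KEY recovers both halves of the timed task
    return queue1[key], queue2[key]
```

Splitting the entry this way lets a reader scan the lightweight time queue without touching the heavier content queue until a task is actually due.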
Step 202, distributing the timed tasks in the cache to the instances according to the identifications of the timed tasks in the cache and the identifications of the instances.
To avoid the problem that the same timed task might be executed multiple times during allocation, each instance executes only the timed tasks in its own local cache. For any timed task in the cache and any instance, the identifier of the task and the identifier of the instance are hashed, and their positions within a preset integer range are determined. If the position of a first timed task in the cache is smaller than the position of a first instance, and that instance's position is the closest such position, the first timed task is allocated to the first instance. If the position of the first timed task is larger than the positions of all instances, it is allocated to the instance closest to the start of the preset integer range. The preset integer range may be set empirically; the first timed task is any unallocated timed task in the cache.
Allocating tasks by their positions within the preset integer range distributes them among the instances reasonably, improves the processing efficiency of timed tasks, and prevents the same timed task from being processed by multiple instances, reducing the risk of a task being allocated more than once.
To determine these positions, the identifier of the timed task and the identifier of the instance are first hashed to obtain their hash values; each hash value is then taken modulo the size of the preset integer range to obtain the position of the corresponding identifier within that range.
Taking the hashed identifier modulo the range size confines the result to the preset integer range, and comparing the reduced value against the range quickly locates the identifier's position within it.
For example, a preset integer range is first defined, such as the interval 0 to 2^32, which covers the integers from 0 to 4294967295.
Then, the positions of the timed tasks and the instances are calculated. Specifically, the hash value of each instance name is computed and taken modulo 2^32, as in formula (1), yielding an integer within the range 0 to 2^32. This value is the instance's position within the preset integer range.
position of instance = Hash(instance name) % 2^32    (1)
Likewise, the hash value of each timed task's KEY in the queue is computed and taken modulo 2^32, again yielding an integer within the preset integer range:
position of task = Hash(KEY of the timed task in the queue) % 2^32    (2)
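Formulas (1) and (2) are the same computation applied to different identifiers, and can be sketched as below. The patent does not fix a hash function, so MD5 is used here purely as an illustrative, deterministic stand-in.

```python
import hashlib

RING_SIZE = 2 ** 32  # the preset integer range 0 to 2^32

def position(identifier: str) -> int:
    # Hash(identifier) % 2^32: hash the identifier (instance name or task KEY),
    # then reduce it modulo the range size to get a position inside the range
    digest = hashlib.md5(identifier.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % RING_SIZE
```

Because the same function maps both instance names and task KEYs into the same 0 to 2^32 range, their positions are directly comparable, which is what the allocation rule in the next step relies on.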
Once the positions of the timed tasks and the instances are known, tasks can be allocated: each timed task is assigned to the instance whose position is greater than and closest to the task's own position (as determined by the integer corresponding to the task's hash value), and that instance executes the task.
If no instance position in the defined interval is greater than a task's position, the search wraps around to the start of the interval, and the first instance found from there executes the task. As shown in fig. 4, the position of task C is larger than that of instance 4, and within the range 0 to 2^32 there is no instance positioned after task C, so the instance closest to 0 must execute it; as can be seen in fig. 4, that is instance 1, and task C is therefore allocated to instance 1.
Further, an instance may occasionally fail, in which case the tasks assigned to it cannot be executed. To solve this, when an instance fails, it is cleared from the preset integer range and the timed tasks allocated to it are redistributed: each such task again finds the instance whose position is greater than and closest to its own. For example, if instance 3 in fig. 4 fails, its position is removed from the range, and task B must find the next instance positioned after it; as shown in fig. 5, that is instance 4, so task B is executed by instance 4.
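The allocation rule and the failover behavior of figs. 4 and 5 can be sketched together. The instance names and positions below are made-up illustrative values; `assign` implements the rule from the text: pick the instance whose position is greater than and closest to the task's position, wrapping to the instance nearest the start of the range when no position is greater.

```python
def assign(task_pos, instance_positions):
    # instance_positions: {instance name -> position within the integer range}
    greater = {n: p for n, p in instance_positions.items() if p > task_pos}
    # if no instance sits after the task, wrap around to the start of the range
    candidates = greater if greater else instance_positions
    return min(candidates, key=candidates.get)

instances = {"inst1": 10, "inst2": 40, "inst3": 70, "inst4": 90}
assert assign(50, instances) == "inst3"  # closest position greater than 50
assert assign(95, instances) == "inst1"  # no position exceeds 95: wrap to start
instances.pop("inst3")                   # inst3 fails and is cleared
assert assign(50, instances) == "inst4"  # its task moves to the next instance
```

Re-running the same rule after removing the failed instance is all the reallocation step requires, which is why only the failed instance's tasks move.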
Step 203, synchronizing the timed tasks allocated to the instances into the local caches of the instances at a preset time interval.
If the timed tasks in the instances' local caches were inconsistent, the tasks handled by a failed instance might not be executed normally. To avoid this, the server synchronizes the timed tasks allocated to each instance into that instance's local cache at a preset time interval, so that each instance acquires, at a preset frequency, the timed tasks meeting the time condition from its own local cache for processing. The preset time interval and the preset frequency may be set empirically, with the time interval greater than the polling interval implied by the frequency. A timed task meets the time condition when its scheduled time is earlier than the timestamp of the current time; executing only such tasks improves the efficiency of locating runnable tasks.
The server synchronizes the timing tasks distributed for the instances to the local caches of the instances according to the preset time interval, so that the consistency of the data in the local caches of the instances can be ensured.
As shown in fig. 6, when tasks are invoked, each instance polls its local cache every second to obtain the tasks to be executed. Because each instance reads its own local cache rather than the server's cache, the server's TPS does not rise and its cache performance is not degraded.
The remote cache and the local caches synchronize data once per minute to ensure that the data in all instances' local caches are consistent, so that timed tasks can still be executed normally when an instance becomes unavailable. The remote cache here is the server's cache.
When the instances fetch tasks from the local cache, they can do so according to the timestamp of the current time: each instance's timer program retrieves the corresponding task callback methods to be executed from the cache every second. As shown in fig. 7, the data structure of queue 1 is an ordered set, sorted in ascending order of the SCORE attribute defined in the structure, so positions in the queue can be found quickly. Members of the ordered set are unique, but SCORE values may repeat. The SCORE attribute is the number of milliseconds from 00:00:00 on January 1, 1970 (the Unix epoch) to the task's scheduled execution time.
The way the timer program on each instance obtains the corresponding KEYs from the cache queue every second is shown in fig. 8: the timer program first obtains the timestamp of the current time (the number of milliseconds elapsed since 1970), then compares it against the SCOREs in queue 1 to find the elements whose SCORE is smaller than the timestamp.
As shown in FIG. 9, the elements whose SCORE is smaller than the timestamp are fetched from queue 1, and the elements with matching KEY values are then selected from queue 2.
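The per-second poll of figs. 7 to 9 can be sketched as follows. This is an illustrative Python sketch: the task names, plain list, and dictionary below are hypothetical stand-ins for the ordered set (queue 1, sorted by SCORE) and the content queue (queue 2) described in the text.

```python
# queue 1: (KEY, SCORE) pairs kept sorted ascending by SCORE, where SCORE is
# the scheduled execution time in milliseconds since the Unix epoch
queue1 = [("taskA", 1_000), ("taskB", 2_000), ("taskC", 3_000)]
# queue 2: KEY -> task content, matched to queue 1 by the shared KEY
queue2 = {"taskA": "callbackA", "taskB": "callbackB", "taskC": "callbackC"}

def due_callbacks(now_ms):
    due = []
    for key, score in queue1:           # scan in ascending SCORE order
        if score >= now_ms:
            break                       # everything after this is not yet due
        due.append((key, queue2[key]))  # fetch the content via the shared KEY
    return due

print(due_callbacks(2_500))  # → [('taskA', 'callbackA'), ('taskB', 'callbackB')]
```

Keeping queue 1 sorted by SCORE means the scan can stop at the first element that is not yet due, which is the efficiency gain the ordered set provides.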
In this way, even as the number of instances grows under distributed, highly concurrent conditions, the server's cache performance is not degraded, ensuring the server's performance and high availability.
After obtaining the task content, the instance can deserialize it. Serialization persists the state of an object using Java utility classes so that an identical copy can be recreated at a later time; a serialized object can also be transferred to other modules of a distributed system, for example for Remote Procedure Calls (RPC).
The method taken out of the set is deserialized, and a thread is started to execute the task method. Java's native deserialization is performed by the ObjectInputStream class, which reads a byte sequence from a source input stream, deserializes it into an object, and returns it; the object's method is then executed.
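The patent performs this step with Java serialization (ObjectInputStream) and a new thread. The following is a rough Python analogue of the same round trip: serialize a task payload for storage, later deserialize it, and run the task on a worker thread. The `task_method` function and payload shape are hypothetical, and `pickle` merely stands in for Java object serialization.

```python
import pickle
import threading

results = []

def task_method(name):
    # stand-in for the task callback method the patent would execute
    results.append(f"executed {name}")

# serialize the task payload for storage in the cache (callables themselves are
# not always picklable, so only the task's arguments are serialized here)
stored_bytes = pickle.dumps({"task": "taskA", "args": ("taskA",)})

# later: deserialize the payload and execute the task on a worker thread
payload = pickle.loads(stored_bytes)
worker = threading.Thread(target=task_method, args=payload["args"])
worker.start()
worker.join()
```

Executing on a separate thread keeps the per-second timer loop from blocking on a slow task, which matches the patent's start-a-thread-then-execute description.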
With the timed task processing method provided by this embodiment, no matter how many complex timer scenarios exist in the customer service setting, timeliness, performance, and high availability are well ensured.
In the embodiment of the invention, a timed task is acquired and stored in a cache; the timed tasks in the cache are allocated to the instances according to the identifiers of the timed tasks and the identifiers of the instances; and the timed tasks are synchronized into each instance's local cache at a preset time interval, so that each instance acquires, at a preset frequency, the timed tasks meeting the time condition from its own local cache for processing. Because an instance processes timed tasks from its own local cache rather than fetching them from the server's cache in real time, server throughput is not increased and the server's cache performance is not degraded.
Based on the same technical concept, fig. 10 exemplarily shows a structure of a timed task processing device provided by an embodiment of the present invention, and the device can execute a timed task processing flow.
As shown in fig. 10, the apparatus specifically includes:
an obtaining unit 1001 configured to obtain a timing task and store the timing task in a cache;
the processing unit 1002 is configured to allocate the timing tasks in the cache to each instance according to the identifiers of the timing tasks in the cache and the identifiers of the instances; and to synchronize the timing tasks to the local cache of each instance at a preset time interval, so that each instance acquires, at a preset frequency, the timing tasks meeting the time condition from its local cache for processing.
Optionally, the processing unit 1002 is specifically configured to:
for any timing task in the cache and any instance among the instances, hashing the identifier of the timing task and the identifier of the instance, and determining the positions of the two identifiers in a preset integer range interval;
if the position of the first timing task in the cache is smaller than the position of the first instance and is closest to the position of the first instance, the first timing task is allocated to the first instance;
and if the position of the first timing task in the cache is larger than the positions of all the instances, allocating the first timing task to the instance closest to the starting position of the preset integer range interval.
Optionally, the processing unit 1002 is further configured to:
after the timed tasks in the cache are allocated to the instances, if any of the instances fails, clearing the failed instance from the preset integer range interval and reallocating the timed tasks that were allocated to the failed instance.
Optionally, the processing unit 1002 is specifically configured to:
after the identifier of any timing task and the identifier of any instance are hashed, obtaining the hash value of each identifier;
and taking the hash value of the identifier of the timing task and the hash value of the identifier of the instance modulo the preset integer range interval to obtain their positions in the preset integer range interval.
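The hashing, modulo, and assignment rules above can be sketched as a ring, in the manner of consistent hashing. In the following Java sketch, the ring size of 2^16, the use of String.hashCode as the hash function, and all class and method names are assumptions; the embodiment does not fix a particular hash function or interval size:

```java
import java.util.Map;
import java.util.TreeMap;

public class TaskRing {
    // Preset integer range interval [0, RING_SIZE); an assumed size of 2^16.
    static final int RING_SIZE = 1 << 16;

    // Hash an identifier and take it modulo the preset range to get its position.
    static int position(String identifier) {
        return Math.floorMod(identifier.hashCode(), RING_SIZE);
    }

    // Instance position -> instance identifier, sorted so the next position
    // at or above a task's position can be found efficiently.
    private final TreeMap<Integer, String> instances = new TreeMap<>();

    void addInstance(String instanceId) {
        instances.put(position(instanceId), instanceId);
    }

    // Failure handling: clear the failed instance from the interval; its tasks
    // fall through to another instance on the next assignment pass.
    void removeInstance(String instanceId) {
        instances.remove(position(instanceId));
    }

    // Assignment rule from the embodiment:
    //  - a task goes to the instance whose position is the smallest one at or
    //    above the task's position;
    //  - if the task's position is larger than every instance position, wrap
    //    around to the instance closest to the start of the interval.
    String assign(String taskId) {
        Map.Entry<Integer, String> e = instances.ceilingEntry(position(taskId));
        if (e == null) e = instances.firstEntry(); // wrap to start of interval
        return e.getValue();
    }
}
```

Removing a failed instance and calling `assign` again reassigns only that instance's tasks, which is the usual motivation for this ring-based placement.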
Optionally, the timing task includes an identifier, time, and task content;
the obtaining unit 1001 is specifically configured to:
and storing, according to the identifier of the timed task, the time of the timed task in a first queue of the cache and the task content of the timed task in a second queue of the cache.
Optionally, the timed task meeting the time condition is a timed task whose time is earlier than the timestamp of the current time.
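The two-queue storage and the time condition can be sketched as follows; the TaskStore name, the use of in-memory maps in place of the cache, and the String content type are assumptions made for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TaskStore {
    // First queue: fire times, ordered, each mapping to the task identifiers due then.
    private final TreeMap<Long, List<String>> timeQueue = new TreeMap<>();
    // Second queue: task content keyed by the task identifier.
    private final Map<String, String> contentQueue = new HashMap<>();

    // Store the task's time and content separately, linked by its identifier.
    void put(String taskId, long fireTime, String content) {
        timeQueue.computeIfAbsent(fireTime, t -> new ArrayList<>()).add(taskId);
        contentQueue.put(taskId, content);
    }

    // A task meets the time condition when its time is earlier than the
    // current timestamp; headMap(now) is strictly-less-than, matching that.
    List<String> dueContents(long now) {
        List<String> due = new ArrayList<>();
        for (List<String> ids : timeQueue.headMap(now).values()) {
            for (String id : ids) due.add(contentQueue.get(id));
        }
        return due;
    }
}
```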
Optionally, the local cache of each instance is a distributed database.
Based on the same technical concept, an embodiment of the present invention further provides a computing device, including:
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the timed task processing method according to the obtained program.
Based on the same technical concept, embodiments of the present invention also provide a computer-readable non-volatile storage medium, which includes computer-readable instructions, and when the computer reads and executes the computer-readable instructions, the computer is caused to execute the above timed task processing method.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for processing a timed task, comprising:
acquiring a timing task and storing the timing task in a cache;
distributing the timed tasks in the cache to the instances according to the identifiers of the timed tasks in the cache and the identifiers of the instances;
and synchronizing the timing tasks to the local cache of each instance at a preset time interval, so that each instance acquires, at a preset frequency, the timing tasks meeting the time condition from its local cache for processing.
2. The method of claim 1, wherein the allocating the timed task in the cache to each instance according to the identifier of the timed task in the cache and the identifier of each instance comprises:
for any timing task in the cache and any instance in each instance, after the identifier of any timing task and the identifier of any instance are subjected to hash processing, the positions of the identifier of any timing task and the identifier of any instance in a preset integer range interval are determined;
if the position of the first timing task in the cache is smaller than the position of the first instance and is closest to the position of the first instance, the first timing task is allocated to the first instance;
and if the position of the first timing task in the cache is larger than the positions of all the instances, allocating the first timing task to the instance closest to the starting position of the preset integer range interval.
3. The method of claim 2, wherein after the allocating the timed task in the cache to the instances, further comprises:
and if any one of the instances fails, clearing the failed instance from the preset integer range interval, and reallocating the timing task allocated to the failed instance.
4. The method according to claim 2, wherein the determining the position of the identifier of any timing task and the identifier of any instance in a preset integer range interval after the hash processing is performed on the identifier of any timing task and the identifier of any instance comprises:
after the identifier of any timing task and the identifier of any instance are subjected to hash processing, the hash value of the identifier of any timing task and the hash value of the identifier of any instance are obtained;
and taking the hash value of the identifier of the timing task and the hash value of the identifier of the instance modulo the preset integer range interval to obtain their positions in the preset integer range interval.
5. The method of claim 1, wherein the timed task includes an identification, a time, and a task content;
the storing the timed task in a cache includes:
and storing, according to the identifier of the timed task, the time of the timed task in a first queue of the cache and the task content of the timed task in a second queue of the cache.
6. The method of claim 5, wherein the timed task meeting the time condition is a timed task whose time is less than the current time.
7. The method of any of claims 1 to 6, wherein the local cache of each instance is a distributed database.
8. A timed task processing apparatus, comprising:
the device comprises an acquisition unit, a cache and a processing unit, wherein the acquisition unit is used for acquiring a timing task and storing the timing task in the cache;
the processing unit is configured to allocate the timing tasks in the cache to each instance according to the identifiers of the timing tasks in the cache and the identifiers of the instances; and to synchronize the timing tasks to the local cache of each instance at a preset time interval, so that each instance acquires, at a preset frequency, the timing tasks meeting the time condition from its local cache for processing.
9. A computing device, comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory to perform the method of any of claims 1 to 7 in accordance with the obtained program.
10. A computer-readable non-transitory storage medium including computer-readable instructions which, when read and executed by a computer, cause the computer to perform the method of any one of claims 1 to 7.
CN202011410625.7A 2020-12-04 Timing task processing method and device Active CN112579269B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011410625.7A CN112579269B (en) 2020-12-04 Timing task processing method and device


Publications (2)

Publication Number Publication Date
CN112579269A true CN112579269A (en) 2021-03-30
CN112579269B CN112579269B (en) 2024-07-02


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018036167A1 (en) * 2016-08-22 2018-03-01 平安科技(深圳)有限公司 Test task executor assignment method, device, server and storage medium
CN111782365A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Timed task processing method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周智;: "Redis分布式缓存实现与解析", 信息通信, no. 06, 15 June 2018 (2018-06-15) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant