CN114063936B - Method, system, equipment and storage medium for optimizing timing task - Google Patents
- Publication number: CN114063936B (application CN202210051519.7A)
- Authority
- CN
- China
- Prior art keywords
- message
- task
- queue
- timing task
- timing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems; G06F3/0602—specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance; G06F3/0611—Improving I/O performance in relation to response time
- G06F3/0614—Improving the reliability of storage systems; G06F3/0619—in relation to data integrity, e.g. data losses, bit errors
- G06F3/0628—making use of a particular technique; G06F3/0646—Horizontal data movement in storage systems; G06F3/0652—Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
- G06F3/0655—Vertical data movement, i.e. input-output transfer; G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
- G06F3/0668—adopting a particular infrastructure; G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Abstract
The invention provides a method, a system, a device and a storage medium for optimizing timed tasks, wherein the method comprises the following steps: starting a plurality of worker threads to monitor and consume timed-task messages, and binding the worker threads to their corresponding message-queue index subscripts; pulling the timed-task message corresponding to the message-queue index subscript from a task cache queue, moving the message to a pre-processing cache queue, and sending it to the service end according to its event identifier; in response to the service end receiving the timed-task message and consuming it successfully, the service end sending a transaction-commit instruction; and in response to the transaction committing successfully, deleting the timed-task message from the pre-processing cache queue. The invention reduces the load pressure on the storage master node, reduces system processing latency, and improves concurrency performance.
Description
Technical Field
The present invention relates to the field of distributed storage systems, and more particularly, to a method, system, device, and storage medium for optimizing a timing task.
Background
With the large-scale adoption of digitalization, distributed storage systems have flourished as the underlying support, and timed tasks and their optimization have become important components of the full life cycle of a storage product. Timed-task processing generally follows this flow: timed tasks are persisted in a timed-task database; the process body uses a timer to scan the database periodically; tasks found to be pending are placed in the node's in-memory queue, where they wait to be handed to a specific business-processing module for consumption; and the task state is updated after the business-processing module finishes. However, when a large-scale storage cluster manages an entire machine room, this flow has the following problems:
(1) Timed-task scheduling generally scans at fixed time intervals, and the interval lengthens the delay before a timed task is processed. When the cluster has many nodes, scheduling timed tasks with mutual dependencies becomes complicated, abnormal cases become prominent, and the delay is significant.
(2) To avoid scanning data repeatedly, timer process services are usually stored independently on each node of the cluster, with each node corresponding to its own timed-task table, so the concurrent processing capability for timed tasks is limited by the processing of the timed-task table data on a single node. When a single database table holds a huge amount of data, scanning all pending timed tasks from it takes a long time.
(3) In node-failure scenarios in a large distributed cluster, this flow has weak fault-handling capability: it cannot automatically process and recover from faults internally and must rely on an external script and a separate process to handle the failure scenario. Its fault tolerance is poor, and when the volume of pending tasks is large, large delays easily occur under failure.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide a method, a system, a computer device, and a computer-readable storage medium for optimizing timed tasks, in which the timer logic of the conventional mode is distributed across the caches of the MON (monitor) nodes of a distributed storage cluster. Timed-task pre-processing implements, in the cache logic, two-phase commit and two-phase consumption verification similar to database transactions; subscription and scheduling of timed tasks are completed on each MON node; and the final decision result is sent to the cluster master node for decision processing and front-end display. Each module improves the cluster's concurrent processing capability, thereby improving the functional completeness of cluster timed tasks.
Based on the above object, one aspect of the embodiments of the present invention provides a method for optimizing timed tasks, comprising the following steps: starting a plurality of worker threads to monitor and consume timed-task messages, and binding the worker threads to their corresponding message-queue index subscripts; pulling the timed-task message corresponding to the message-queue index subscript from a task cache queue, moving it to a pre-processing cache queue, and sending it to the service end according to its event identifier; in response to the service end receiving the timed-task message and consuming it successfully, the service end sending a transaction-commit instruction; and in response to the transaction committing successfully, deleting the timed-task message from the pre-processing cache queue.
In some embodiments, the method further comprises: in response to the service end not receiving the timed-task message, waiting until the corresponding timed task in the pre-processing cache queue reaches the predetermined time; performing a transaction rollback, moving the timed-task message from the pre-processing cache queue back to the task cache queue, and decrementing the retry count by one; and in response to the retry count not having reached zero, moving the timed-task message from the task cache queue to the pre-processing cache queue again and re-sending it to the service end.
In some embodiments, the method further comprises: in response to the retry count reaching zero, moving the timed-task message to a timeout cache queue and notifying the front end that the timed task has failed.
In some embodiments, the waiting until the corresponding timed task in the pre-processing cache queue reaches the predetermined time includes: calculating the difference between the current timestamp and the timed task's enqueue timestamp, and comparing the difference with the predetermined time.
In some embodiments, the method further comprises: in response to the service end not consuming the timed-task message successfully, the service end sending a transaction-rollback instruction; and in response to the transaction not rolling back successfully, waiting until the corresponding timed task in the pre-processing cache queue reaches the predetermined time.
In some embodiments, the binding of the worker thread to the corresponding message-queue index subscript includes: hashing the message body, formed by combining the timed-task message's service data types, and taking the remainder to obtain the message-queue index subscript.
In some embodiments, the method further comprises: generating a registration-subscription relationship according to the configuration parameters to form a new timed task, and writing the new timed task into the task cache queue.
In another aspect of the embodiments of the present invention, a system for optimizing timed tasks is provided, including: a binding module configured to start a plurality of worker threads to monitor and consume timed-task messages and to bind the worker threads to their corresponding message-queue index subscripts; a pull module configured to pull the timed-task message corresponding to the message-queue index subscript from the task cache queue, move it to the pre-processing cache queue, and send it to the service end according to its event identifier; a commit module configured to, in response to the service end receiving the timed-task message and consuming it successfully, have the service end send a transaction-commit instruction; and a deletion module configured to delete the timed-task message from the pre-processing cache queue in response to the transaction committing successfully.
In another aspect of the embodiments of the present invention, there is also provided a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions when executed by the processor implementing the steps of the method as above.
In a further aspect of the embodiments of the present invention, a computer-readable storage medium is also provided, storing a computer program which, when executed by a processor, implements the steps of the above method.
The invention has the following beneficial technical effects:
(1) The load pressure on the storage master node is greatly reduced; the timed-task cache logical framework avoids large-batch database and table-scanning operations, reducing system processing latency and improving user experience and processing performance without affecting functionality;
(2) Timed tasks are processed as messages; the strong subscription, distribution, and processing capabilities of an asynchronous message channel enhance the system's handling of task messages. The concurrent processing capability for timed tasks depends only on the cache queues and the number of worker threads, which are easy to configure and set dynamically, breaking through the concurrency limit and improving concurrency performance;
(3) Easy to implement in large-scale storage clusters. The logical framework is layered and distributed, and service management is flattened; the operation logic and management functions are more complete and rich, benefiting the completeness and usability of the overall development framework.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other embodiments can be obtained by using the drawings without creative efforts.
FIG. 1 is a schematic diagram of an embodiment of a method for optimizing a timing task provided by the present invention;
FIG. 2 is a flow chart of an embodiment of a method for optimizing a timing task provided by the present invention;
FIG. 3 is a schematic diagram of an embodiment of a system for optimizing timing tasks provided by the present invention;
FIG. 4 is a schematic hardware structure diagram of an embodiment of a computer device for optimizing a timing task provided by the present invention;
FIG. 5 is a schematic diagram of an embodiment of a computer storage medium for optimizing timing tasks provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share the same name; "first" and "second" are merely for convenience of description and should not be construed as limitations of the embodiments of the present invention, and this is not repeated in the following embodiments.
In a first aspect of an embodiment of the present invention, an embodiment of a method for optimizing a timing task is provided. Fig. 1 is a schematic diagram illustrating an embodiment of a method for optimizing a timing task provided by the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
S1, starting a plurality of worker threads to monitor and consume timed-task messages, and binding the worker threads to their corresponding message-queue index subscripts;
S2, pulling the timed-task message corresponding to the message-queue index subscript from the task cache queue, moving it to a pre-processing cache queue, and sending it to the service end according to its event identifier;
S3, in response to the service end receiving the timed-task message and consuming it successfully, the service end sending a transaction-commit instruction; and
S4, in response to the transaction committing successfully, deleting the timed-task message from the pre-processing cache queue.
The embodiment of the present invention adds a timed-task cache configuration module, a timed-task pre-processing module, and a timed-task scheduling-and-consumption module to a distributed storage system to optimize timed-task production, commit, timeout consumption, and fault handling in a large-scale cluster. The timer logic of the conventional mode is distributed across the caches of the MON nodes of the distributed storage cluster; timed-task pre-processing implements, in the cache logic, two-phase commit and two-phase consumption verification similar to database transactions; subscription and scheduling of timed tasks are completed on each MON node; and the final decision result is sent to the cluster master node for decision processing and front-end display. Each module improves the cluster's concurrent processing capability and the functional completeness of cluster timed tasks. MON node: for example, a distributed cluster may have 200 nodes, of which the first 7 are MON nodes; only one of the 7 MON nodes can be elected master MON node, i.e. the cluster master node.
The timed-task cache configuration module holds three task-message cache queues: a task cache queue (TCQ), a task pre-processing cache queue (PCQ), and a task timeout cache queue (DCQ). A cache queue is generally implemented with a queue data structure whose length is a power of two. Each timed-task entity is agreed to be message data in a fixed transmission format (which can be implemented with ordinary structures and wrapper types), and the timed-task message data are stored in the queues. Timed tasks can be configured and created in the storage-management software according to service type; their configuration parameters are shown in Table 1 (database table records). The internal logic layer agrees that the task message bodies of all modules use uniform format fields, and uniform timed-task definitions and registration interfaces (e.g. external interfaces such as a CLI and a REST API) are provided externally.
TABLE 1 timed task Table data Structure
The detailed structure of the table, and of the functional service event table within it, is not specifically restricted and may be defined by each implementation; the timed-task cache configuration module is usually implemented with a high-performance K-V database, such as Redis, and is likewise left to each implementation.
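The three cache queues and the fixed-format message data described above can be sketched roughly as follows. This is a Python sketch under stated assumptions: the field names `event_id`, `retry_count`, and `enqueue_ts`, and the queue length, are illustrative choices, not the patent's actual Table 1 layout.

```python
from collections import deque
from dataclasses import dataclass, field
import time

@dataclass
class TaskMessage:
    event_id: str          # identifies the business event to be consumed
    payload: dict          # service-specific task details
    retry_count: int = 3   # remaining retries before the task is marked failed
    enqueue_ts: float = field(default_factory=time.time)

QUEUE_LEN = 1 << 10  # queue length is typically a power of two

tcq = deque(maxlen=QUEUE_LEN)  # task cache queue: pending timed tasks
pcq = deque(maxlen=QUEUE_LEN)  # pre-processing cache queue: in-flight tasks
dcq = deque(maxlen=QUEUE_LEN)  # timeout cache queue: permanently failed tasks
```

A `TaskMessage` enters the TCQ on registration, moves to the PCQ while being consumed, and lands in the DCQ only after its retries are exhausted.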
Fig. 2 is a flowchart of an embodiment of a method for optimizing a timing task provided by the present invention, and the embodiment of the present invention is described with reference to fig. 2.
A plurality of worker threads is started to monitor and consume timed-task messages, and the worker threads are bound to their corresponding message-queue index subscripts.
The timed-task scheduling-and-consumption module starts several worker threads (Workers) to monitor and consume timed-task messages. The timed messages are stored in the cache queues of the timed-task configuration module; the module maintains the relationship between each node's Worker threads and the message-queue Indexes, and one Worker thread can consume the message data of several Indexes. The Worker-to-Index relationship is redistributed when cluster nodes start or stop; once the relationship is determined, the subscribed timed-task messages of each Index are cyclically pulled from the cache queue. In the distributed system, the module selects one node as the executor of a timed task according to the load of each MON node and performs task-result aggregation and consumption. Its scheduling policy is not specifically constrained.
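The Worker-to-Index redistribution could be as simple as a round-robin assignment. This is a hypothetical sketch only, since the patent deliberately leaves the scheduling policy unconstrained:

```python
def assign_indexes(num_workers: int, num_queues: int) -> dict:
    # Round-robin the message-queue index subscripts over the worker
    # threads so that one Worker consumes several Indexes; re-running
    # this after a node starts or stops redistributes the relationship.
    mapping = {w: [] for w in range(num_workers)}
    for idx in range(num_queues):
        mapping[idx % num_workers].append(idx)
    return mapping
```

For example, 8 index subscripts over 3 workers gives each worker two or three Indexes to pull from.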
In some embodiments, binding the worker thread to the corresponding message-queue index subscript includes: hashing the message body, formed by combining the timed-task message's service data types, and taking the remainder to obtain the message-queue index subscript. The position of each timed-task message in its queue is the queue's index subscript; the message positions of the three queues are denoted TCQ_index, PCQ_index, and DCQ_index respectively. The position is obtained by hashing the message body combined according to service data type and taking the remainder.
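The hash-and-remainder step can be sketched minimally as follows. The patent does not name a hash function; SHA-1 here is an arbitrary assumption, as is the example queue count of 16.

```python
import hashlib

def queue_index(message_body: bytes, num_queues: int = 16) -> int:
    # Hash the combined message body and take the remainder modulo the
    # number of queues; the result is the message-queue index subscript
    # that decides which Worker consumes the message.
    digest = hashlib.sha1(message_body).digest()
    return int.from_bytes(digest[:8], "big") % num_queues
```

Because the function is deterministic, messages with the same combined service-data body always land on the same index subscript, and therefore on the same Worker.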
In some embodiments, the method further comprises: generating a registration-subscription relationship according to the configuration parameters to form a new timed task, and writing the new timed task into the task cache queue. When a timed task is newly added, a registration-subscription relationship is generated directly in the timed-task cache configuration module according to the configuration parameters, and the next module continues processing.
The timed-task message corresponding to the message-queue index subscript is pulled from the task cache queue, moved to the pre-processing cache queue, and sent to the service end according to its event identifier. As the task-message data of the cache queues are dequeued and enqueued, the timed task's scheduled time set in the message data is compared with the current timestamp; the corresponding event_id (event identifier) is popped from the cache queue, the specific timed-task details are obtained from the event_id and handed to the corresponding service module for consumption, and finally the state in the task database is updated.
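The due-time comparison above can be illustrated as follows; the field name `fire_ts` for the task's scheduled time is an assumption for this sketch.

```python
import time

def pop_due_tasks(tcq, now=None):
    # Compare each message's scheduled fire time with the current
    # timestamp and pop the event_ids of the tasks that are due;
    # the popped event_ids are then handed to the service modules.
    now = time.time() if now is None else now
    due = [m for m in tcq if m["fire_ts"] <= now]
    for m in due:
        tcq.remove(m)
    return [m["event_id"] for m in due]
```

Tasks whose fire time is still in the future stay in the TCQ for the next scan of the loop.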
In response to the service end receiving the timed-task message and consuming it successfully, the service end sends a transaction-commit instruction. In response to the transaction committing successfully, the timed-task message is deleted from the pre-processing cache queue.
The timed-task pre-processing module mainly implements the task-message data flow among the three cache queues of the timed-task cache configuration module. To ensure that each timed-task message is consumed at least once, when a user request or an internal system request initiates timed-task consumption, a message is not taken directly out of the database; instead, the message element is moved from the task cache queue TCQ to the pre-processing queue PCQ and returned to the consumer. The message is deleted from the PCQ after successful consumption, or moved from the PCQ back to the TCQ after failed consumption, which is two-phase task-message consumption logic modeled on the two-phase commit idea of database transactions. When a task message fails to be consumed, or the consuming end does not process it, the message retries the consumption flow up to a set number of retries; when the retries are exhausted and the task is still not completed, the message is placed in the DCQ, marked failed, and no longer re-queued, and the user can handle the failed task via the front-end presentation of the DCQ. These modules are distributed across the MON nodes of the cluster, and the internal modules exchange task-message data through a message channel.
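The two-phase consumption logic (move from TCQ to PCQ first, then commit or roll back depending on the consumption result) might look like this minimal sketch; `consume` stands in for the service module's handler and is an assumed interface:

```python
from collections import deque

def consume_once(tcq, pcq, consume):
    # Phase one: move the message from TCQ to PCQ instead of deleting
    # it outright, so it can be recovered if consumption fails.
    if not tcq:
        return None
    msg = tcq.popleft()
    pcq.append(msg)
    # Phase two: the service end consumes the message; success commits
    # (delete from PCQ), failure rolls back (move PCQ back to TCQ).
    if consume(msg):
        pcq.remove(msg)   # transaction commit
        return True
    pcq.remove(msg)       # transaction rollback
    tcq.append(msg)
    return False
```

Because the message sits in the PCQ for the whole in-flight window, a crash between the two phases leaves it recoverable rather than lost, which is the at-least-once guarantee the module is after.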
In some embodiments, the method further comprises: in response to the service end not receiving the timed-task message, waiting until the corresponding timed task in the pre-processing cache queue reaches the predetermined time; performing a transaction rollback, moving the timed-task message from the pre-processing cache queue back to the task cache queue, and decrementing the retry count by one; and in response to the retry count not having reached zero, moving the timed-task message from the task cache queue to the pre-processing cache queue again and re-sending it to the service end. In some embodiments, the method further comprises: in response to the retry count reaching zero, moving the timed-task message to the timeout cache queue and notifying the front end that the timed task has failed. To avoid treating a single task failure as conclusive, a retry count can be set: consumption is attempted again after a failure, and an alarm is raised only when the failure is confirmed over several attempts.
In some embodiments, waiting until the corresponding timed task in the pre-processing cache queue reaches the predetermined time includes: calculating the difference between the current timestamp and the timed task's enqueue timestamp and comparing it with the predetermined time. A predetermined time is set for each timed task; if the task has not been consumed when the predetermined time is reached, it is moved back to the task cache queue. Whether the predetermined time has been reached can be determined from the current timestamp and the enqueue timestamp: for example, with a predetermined time of one minute, the timed task is moved back to the task cache queue once the difference between the two timestamps reaches one minute.
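The predetermined-time check reduces to a timestamp difference, for example:

```python
import time

def timed_out(msg, predetermined=60.0, now=None):
    # The task has waited long enough in PCQ when the difference between
    # the current timestamp and its enqueue timestamp reaches the
    # predetermined time (one minute in the example above).
    now = time.time() if now is None else now
    return now - msg["enqueue_ts"] >= predetermined
```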
In some embodiments, the method further comprises: in response to the service end not consuming the timed-task message successfully, the service end sending a transaction-rollback instruction; and in response to the transaction not rolling back successfully, waiting until the corresponding timed task in the pre-processing cache queue reaches the predetermined time.
The embodiment of the invention has the following beneficial effects:
(1) With the timed-task configuration module, pre-processing module, and scheduling-and-consumption configuration, the load pressure on the storage master node is greatly reduced; the timed-task cache logical framework avoids large-batch database and table-scanning operations, reducing system processing latency and improving user experience and processing performance without affecting functionality;
(2) The three modules complement each other to process timed tasks as messages; the strong subscription, distribution, and processing capabilities of an asynchronous message channel enhance the system's handling of task messages. The concurrent processing capability for timed tasks depends only on the cache queues and the number of worker threads, which are easy to configure and set dynamically, breaking through the concurrency limit and improving concurrency performance;
(3) Easy to implement in large-scale storage clusters. The three modules are independent of the cluster service modules and are associated with the service system through the Event_id; the logical framework is layered and distributed, service management is flattened, and the operation logic and management functions are more complete and rich, benefiting the completeness and usability of the overall development framework.
It should be particularly noted that the steps in the embodiments of the method for optimizing timed tasks may be interleaved, replaced, added, or deleted; methods for optimizing timed tasks obtained by such reasonable permutations and combinations therefore also fall within the scope of the present invention, and the scope of the present invention should not be limited to the embodiments.
In view of the above, a second aspect of the embodiments of the present invention provides a system for optimizing a timing task. As shown in fig. 3, the system 200 includes the following modules: the binding module is configured for starting a plurality of working threads to monitor and consume the timed task messages, and binding the working threads with the corresponding message queue index subscripts; the pull module is configured to pull the timing task message corresponding to the message queue index subscript from the task cache queue, move the timing task message to the preprocessing cache queue, and send the timing task message to the service end according to the event identifier; the submitting module is configured to respond to the fact that the service end receives the timed task message and the consumption is successful, and the service end sends a transaction submitting instruction; and the deleting module is configured to delete the timed task message from the preprocessing cache queue in response to the transaction submission success.
In some embodiments, the system further comprises a rollback module configured to: in response to the service end not receiving the timed task message, wait until the corresponding timed task in the preprocessing cache queue reaches the predetermined time; perform a transaction rollback, move the timed task message from the preprocessing cache queue back to the task cache queue, and decrement the retry count by one; and, in response to the retry count not having reached zero, move the timed task message from the task cache queue to the preprocessing cache queue again and resend it to the service end.
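A minimal sketch of this rollback-and-retry bookkeeping, assuming a hypothetical `retries` field on the message and plain lists standing in for the two cache queues:

```python
def roll_back(msg, preprocess_queue, task_queue):
    """Return the message to the task queue and consume one retry."""
    preprocess_queue.remove(msg)   # transaction rollback: leave the preprocessing queue
    msg["retries"] -= 1
    if msg["retries"] > 0:
        task_queue.append(msg)     # eligible to be pulled and resent
        return "requeued"
    return "exhausted"             # would be handed to the timeout queue instead

msg = {"event_id": "E1", "retries": 2}
preprocess_queue, task_queue = [msg], []
assert roll_back(msg, preprocess_queue, task_queue) == "requeued"
assert task_queue == [msg] and msg["retries"] == 1

preprocess_queue, task_queue = [msg], []
assert roll_back(msg, preprocess_queue, task_queue) == "exhausted"
assert task_queue == [] and msg["retries"] == 0
```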
In some embodiments, the system further comprises a move module configured to: in response to the retry count reaching zero, move the timed task message to a timeout cache queue and notify the front end that the timed task has failed.
In some embodiments, the rollback module is configured to: calculate the difference between the current timestamp and the enqueue timestamp of the timed task, and compare the difference with the predetermined time.
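The timestamp comparison might look like the following sketch; the 30-second default for the predetermined time is an assumption for illustration, not a value specified by the patent:

```python
def reached_predetermined_time(enqueue_ts: float, now_ts: float,
                               predetermined: float = 30.0) -> bool:
    """True once (current timestamp - enqueue timestamp) >= predetermined time."""
    return (now_ts - enqueue_ts) >= predetermined

assert reached_predetermined_time(100.0, 129.0) is False   # 29 s elapsed, keep waiting
assert reached_predetermined_time(100.0, 130.0) is True    # 30 s elapsed, roll back
```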
In some embodiments, the system further comprises a second rollback module configured to: in response to the service end not successfully consuming the timed task message, cause the service end to send a transaction rollback instruction; and, in response to the transaction rollback not succeeding, wait until the corresponding timed task in the preprocessing cache queue reaches the predetermined time.
In some embodiments, the binding module is configured to: perform a hash operation on the message body formed by combining the business data types of the timed task message, and take the remainder to obtain the message queue index subscript.
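One plausible reading of this hash-and-remainder binding is sketched below. A stable digest is used instead of Python's built-in `hash()`, which is salted per process and would break a persistent thread-to-queue binding; the separator and field layout of the message body are assumptions:

```python
import hashlib

def queue_index(message_body: str, num_queues: int) -> int:
    """Hash the combined business-data message body, then take the remainder."""
    digest = hashlib.md5(message_body.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_queues

idx = queue_index("snapshot|volume01|E1", 8)
assert 0 <= idx < 8
# The same message body always maps to the same queue index, so messages for
# the same business data are always handled by the same bound worker thread:
assert idx == queue_index("snapshot|volume01|E1", 8)
```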
In some embodiments, the system further comprises a registration module configured to: generate a registration subscription relationship according to the configuration parameters to form a new timed task, and write the new timed task into the task cache queue.
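A hedged sketch of the registration step; the configuration fields (`interval`, `retries`) and the default retry budget are hypothetical, since the patent does not enumerate the configuration parameters:

```python
def register_task(config: dict, task_queue: list) -> dict:
    """Turn configuration parameters into a new timed task and enqueue it."""
    task = {
        "event_id": config["event_id"],       # ties the task to the service system
        "interval": config.get("interval"),   # hypothetical schedule parameter
        "retries": config.get("retries", 3),  # hypothetical default retry budget
    }
    task_queue.append(task)                   # written into the task cache queue
    return task

task_queue = []
task = register_task({"event_id": "E2", "interval": 60}, task_queue)
assert task_queue == [task] and task["retries"] == 3
```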
In view of the above object, a third aspect of the embodiments of the present invention provides a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions, when executed by the processor, performing the following steps: S1, starting a plurality of worker threads to monitor and consume timed task messages, and binding each worker thread to a corresponding message queue index subscript; S2, pulling the timed task message corresponding to the message queue index subscript from the task cache queue, moving it to the preprocessing cache queue, and sending it to the service end according to the event identifier; S3, in response to the service end receiving the timed task message and consuming it successfully, the service end sending a transaction commit instruction; and S4, in response to the transaction commit succeeding, deleting the timed task message from the preprocessing cache queue.
In some embodiments, the steps further comprise: in response to the service end not receiving the timed task message, waiting until the corresponding timed task in the preprocessing cache queue reaches the predetermined time; performing a transaction rollback, moving the timed task message from the preprocessing cache queue back to the task cache queue, and decrementing the retry count by one; and, in response to the retry count not having reached zero, moving the timed task message from the task cache queue to the preprocessing cache queue again and resending it to the service end.
In some embodiments, the steps further comprise: in response to the retry count reaching zero, moving the timed task message to a timeout cache queue and notifying the front end that the timed task has failed.
In some embodiments, waiting for the corresponding timed task in the preprocessing cache queue to reach the predetermined time includes: calculating the difference between the current timestamp and the enqueue timestamp of the timed task, and comparing the difference with the predetermined time.
In some embodiments, the steps further comprise: in response to the service end not successfully consuming the timed task message, the service end sending a transaction rollback instruction; and, in response to the transaction rollback not succeeding, waiting until the corresponding timed task in the preprocessing cache queue reaches the predetermined time.
In some embodiments, binding the worker thread to the corresponding message queue index subscript includes: performing a hash operation on the message body formed by combining the business data types of the timed task message, and taking the remainder to obtain the message queue index subscript.
In some embodiments, the steps further comprise: generating a registration subscription relationship according to the configuration parameters to form a new timed task, and writing the new timed task into the task cache queue.
Fig. 4 is a schematic hardware structure diagram of an embodiment of the computer device for optimizing the timing task according to the present invention.
Taking the device shown in fig. 4 as an example, the device includes a processor 301 and a memory 302.
The processor 301 and the memory 302 may be connected by a bus or other means, such as the bus connection shown in fig. 4.
The memory 302, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for optimizing timed tasks in the embodiments of the present application. By running the non-volatile software programs, instructions, and modules stored in the memory 302, the processor 301 executes the various functional applications and data processing of the server, thereby implementing the method of optimizing timed tasks.
The memory 302 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required for at least one function, and the data storage area may store data created through use of the method of optimizing timed tasks, and the like. Further, the memory 302 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 302 optionally includes memory located remotely from the processor 301, which may be connected to the local module via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Any embodiment of a computer device for performing the method for optimizing timing tasks described above may achieve the same or similar effects as any of the preceding method embodiments corresponding thereto.
The present invention also provides a computer readable storage medium storing a computer program for performing a method of optimizing timing tasks when executed by a processor.
FIG. 5 is a schematic diagram of an embodiment of a computer storage medium for optimizing timing tasks according to the present invention. Taking the computer storage medium as shown in fig. 5 as an example, the computer readable storage medium 401 stores a computer program 402 which, when executed by a processor, performs the method as described above.
Finally, it should be noted that, as those of ordinary skill in the art will appreciate, all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the related hardware. The program of the method for optimizing timed tasks can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium of the program may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like. The embodiments of the computer program may achieve the same or similar effects as any of the above-described method embodiments.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbering of the embodiments disclosed herein is for description only and does not indicate the relative merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is exemplary only and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples. Within the spirit of the embodiments of the invention, technical features in the above embodiments or in different embodiments may also be combined, and many other variations of the different aspects of the embodiments exist as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.
Claims (10)
1. A method for optimizing a timed task, comprising the steps of:
starting a plurality of working threads to monitor and consume the timed task messages, and binding the working threads with the corresponding message queue index subscripts;
pulling a timed task message corresponding to the message queue index subscript from a task cache queue, moving the timed task message to a preprocessing cache queue, and sending the timed task message to a service end according to an event identifier;
in response to the service end receiving the timed task message and consuming it successfully, the service end sending a transaction commit instruction; and
in response to the transaction commit succeeding, deleting the timed task message from the preprocessing cache queue.
2. The method of claim 1, further comprising:
in response to the service end not receiving the timed task message, waiting for the corresponding timed task in the preprocessing cache queue to reach a predetermined time;
performing a transaction rollback, moving the timed task message from the preprocessing cache queue to the task cache queue, and decrementing the retry count by one; and
in response to the retry count not having reached zero, moving the timed task message from the task cache queue to the preprocessing cache queue again, and resending the timed task message to the service end.
3. The method of claim 2, further comprising:
in response to the retry count reaching zero, moving the timed task message to a timeout cache queue, and notifying the front end that the timed task has failed.
4. The method of claim 2, wherein waiting for the corresponding timed task in the preprocessing cache queue to reach the predetermined time comprises:
calculating a difference between the current timestamp and the enqueue timestamp of the timed task, and comparing the difference with the predetermined time.
5. The method of claim 2, further comprising:
in response to the service end not successfully consuming the timed task message, the service end sending a transaction rollback instruction; and
in response to the transaction rollback not succeeding, waiting for the corresponding timed task in the preprocessing cache queue to reach the predetermined time.
6. The method of claim 1, wherein binding the worker thread with the corresponding message queue index subscript comprises:
performing a hash operation on a message body formed by combining the business data types of the timed task message, and taking a remainder to obtain the message queue index subscript.
7. The method of claim 1, further comprising:
generating a registration subscription relationship according to configuration parameters to form a new timed task, and writing the new timed task into the task cache queue.
8. A system for optimizing timed tasks, comprising:
a binding module configured to start a plurality of worker threads to monitor and consume timed task messages, and to bind each worker thread to a corresponding message queue index subscript;
a pull module configured to pull the timed task message corresponding to the message queue index subscript from the task cache queue, move the timed task message to the preprocessing cache queue, and send the timed task message to the service end according to the event identifier;
a commit module configured to, in response to the service end receiving the timed task message and consuming it successfully, cause the service end to send a transaction commit instruction; and
a deletion module configured to delete the timed task message from the preprocessing cache queue in response to the transaction commit succeeding.
9. A computer device, comprising:
at least one processor; and
a memory storing computer instructions executable on the processor, the instructions when executed by the processor implementing the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210051519.7A CN114063936B (en) | 2022-01-18 | 2022-01-18 | Method, system, equipment and storage medium for optimizing timing task |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210051519.7A CN114063936B (en) | 2022-01-18 | 2022-01-18 | Method, system, equipment and storage medium for optimizing timing task |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114063936A CN114063936A (en) | 2022-02-18 |
CN114063936B true CN114063936B (en) | 2022-03-22 |
Family
ID=80231190
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210051519.7A Active CN114063936B (en) | 2022-01-18 | 2022-01-18 | Method, system, equipment and storage medium for optimizing timing task |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114063936B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114979249A (en) * | 2022-03-30 | 2022-08-30 | 阿里巴巴(中国)有限公司 | Message handle creating method, message pushing method, related device and system |
CN117215755B (en) * | 2023-11-07 | 2024-02-06 | 西安博达软件股份有限公司 | Appointment event task scheduling method and system based on time round algorithm |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104380814A (en) * | 2012-04-19 | 2015-02-25 | 瑞典爱立信有限公司 | Multireceiver timing advance provisioning |
CN110737526A (en) * | 2019-10-22 | 2020-01-31 | 上海思询信息科技有限公司 | method and device for managing timed tasks under Redis-based distributed cluster |
CN111176812A (en) * | 2019-12-27 | 2020-05-19 | 紫光云(南京)数字技术有限公司 | Clustered timing task scheduling system |
CN111338773A (en) * | 2020-02-21 | 2020-06-26 | 华云数据有限公司 | Distributed timed task scheduling method, scheduling system and server cluster |
CN111756811A (en) * | 2020-05-29 | 2020-10-09 | 苏州浪潮智能科技有限公司 | Method, system, device and medium for actively pushing distributed system |
CN112463144A (en) * | 2020-12-02 | 2021-03-09 | 苏州浪潮智能科技有限公司 | Distributed storage command line service method, system, terminal and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2995424B1 (en) * | 2012-09-12 | 2014-09-05 | Bull Sas | METHOD AND DEVICE FOR DEPTH TIME DEPTH FOR A PROCESSING UNIT IN AN INFORMATION PROCESSING SYSTEM |
US20150067028A1 (en) * | 2013-08-30 | 2015-03-05 | Indian Space Research Organisation | Message driven method and system for optimal management of dynamic production workflows in a distributed environment |
- 2022-01-18: CN application CN202210051519.7A filed; granted as patent CN114063936B (status: Active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104380814A (en) * | 2012-04-19 | 2015-02-25 | 瑞典爱立信有限公司 | Multireceiver timing advance provisioning |
CN110737526A (en) * | 2019-10-22 | 2020-01-31 | 上海思询信息科技有限公司 | method and device for managing timed tasks under Redis-based distributed cluster |
CN111176812A (en) * | 2019-12-27 | 2020-05-19 | 紫光云(南京)数字技术有限公司 | Clustered timing task scheduling system |
CN111338773A (en) * | 2020-02-21 | 2020-06-26 | 华云数据有限公司 | Distributed timed task scheduling method, scheduling system and server cluster |
CN111756811A (en) * | 2020-05-29 | 2020-10-09 | 苏州浪潮智能科技有限公司 | Method, system, device and medium for actively pushing distributed system |
CN112463144A (en) * | 2020-12-02 | 2021-03-09 | 苏州浪潮智能科技有限公司 | Distributed storage command line service method, system, terminal and storage medium |
Non-Patent Citations (1)
Title |
---|
Design of a Linux-based Distributed Internet Monitor System; Jiang Zhonghua et al.; Computer Applications (《计算机应用》); 2003-06-30; 303-305 *
Also Published As
Publication number | Publication date |
---|---|
CN114063936A (en) | 2022-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114063936B (en) | Method, system, equipment and storage medium for optimizing timing task | |
US10019297B2 (en) | Systems and methods for implementing bulk handling in asynchronous processing | |
CN105472042B (en) | The message-oriented middleware system and its data transferring method of WEB terminal control | |
US7720920B2 (en) | Client side based data synchronization and storage | |
US20140280988A1 (en) | System and method for parallel multiplexing between servers in a cluster | |
US9519547B2 (en) | Systems and methods for supporting transactional message handling | |
CN108702486B (en) | Low-delay audio and video transmission method and device and computer readable storage medium | |
US8954994B2 (en) | System and method for message service with unit-of-order | |
CN107623731B (en) | Task scheduling method, client, service cluster and system | |
CN112099977A (en) | Real-time data analysis engine of distributed tracking system | |
CN111277639A (en) | Method and device for maintaining data consistency | |
CN113422842A (en) | Distributed power utilization information data acquisition system considering network load | |
US20090055511A1 (en) | Non-programmatic access to data and to data transfer functions | |
WO2023045363A1 (en) | Conference message pushing method, conference server, and electronic device | |
CN110727507B (en) | Message processing method and device, computer equipment and storage medium | |
US8510426B2 (en) | Communication and coordination between web services in a cloud-based computing environment | |
CN108259605B (en) | Data calling system and method based on multiple data centers | |
US20210389998A1 (en) | Architecture for large payload handling in event pipeline | |
CN111475315A (en) | Server and subscription notification push control and execution method | |
CN111158930A (en) | Redis-based high-concurrency time-delay task system and processing method | |
CN116108036A (en) | Method and device for off-line exporting back-end system data | |
EP2963548B1 (en) | Method for enhancing the reliability of a telecommunications network, system, telecommunications network and program | |
CN114500416A (en) | Delivery method and delivery system for at most one message delivery | |
CN108718285B (en) | Flow control method and device of cloud computing cluster and server | |
CN112671636A (en) | Group message pushing method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||