CN111782365A - Timed task processing method, device, equipment and storage medium - Google Patents
- Publication number
- CN111782365A CN111782365A CN202010615233.8A CN202010615233A CN111782365A CN 111782365 A CN111782365 A CN 111782365A CN 202010615233 A CN202010615233 A CN 202010615233A CN 111782365 A CN111782365 A CN 111782365A
- Authority
- CN
- China
- Prior art keywords
- task
- queue
- execution
- timing
- execution parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
- G06F16/24552—Database cache management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/109—Time management, e.g. calendars, reminders, meetings or time accounting
- G06Q10/1093—Calendar-based scheduling for persons or groups
- G06Q10/1095—Meeting or appointment
Abstract
The application discloses a timed task processing method, device, equipment, and storage medium, relating to the fields of intelligent office-management platforms and deep learning. The specific implementation scheme is as follows: acquire a first execution parameter of a first timing task and a first task identification (ID) for identifying the first timing task, and write to the database, or modify or delete information in the database, according to the first execution parameter; send a first delay time period for triggering the first timing task, together with the first task ID, to a distributed cache cluster; acquire, from the distributed cache cluster, a second task ID for identifying a second timing task to be triggered, and acquire a second execution parameter of the second timing task from the database; and trigger the execution of the second timing task according to the second task ID and the second execution parameter. Because the execution of timing tasks is uniformly distributed by the distributed cache cluster, multi-instance scaling is supported, as are containerized deployment and migration.
Description
Technical Field
The embodiments of the application relate to intelligent office-management platforms and deep learning, and in particular to a timing task processing method, device, equipment, and storage medium.
Background
In an online collaborative office platform, many backend services rely on timed tasks: for example, a conference service must store weekly recurring meetings and periodically notify users of them, a project service must remind participants to finish on time as a deadline approaches, and some services must clean up data on a schedule. In existing schemes, timing task management is maintained in the memory of each instance; if the task-execution logic needs to be modified, or another task-execution channel added, the whole service must be updated, deployed, and brought online together, which has a large impact.
Disclosure of Invention
The application provides a timed task processing method, device, equipment, and storage medium.
According to a first aspect of the application, there is provided a timed task processing method, including:
acquiring a first execution parameter of a first timing task and a first task identification (ID) for identifying the first timing task, and writing to a database, or modifying or deleting information in the database, according to the first execution parameter;
sending a first delay time period for triggering the first timing task, together with the first task ID, to a distributed cache cluster;
acquiring, from the distributed cache cluster, a second task ID for identifying a second timing task to be triggered, and acquiring a second execution parameter of the second timing task from the database; and
triggering the execution of the second timing task according to the second task ID and the second execution parameter.
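The four steps of the first aspect can be illustrated with a minimal in-memory sketch. This is not the patent's implementation: `TimedTaskProcessor`, its `db` dict (standing in for the MySQL database), and its heap-based `delay_queue` (standing in for the distributed cache cluster's delay queue) are hypothetical names introduced here for illustration.

```python
import heapq
import time

class TimedTaskProcessor:
    """Minimal in-memory sketch of the four-step method. A real deployment
    would back `db` with MySQL and `delay_queue` with a distributed cache
    cluster shared by all instances."""

    def __init__(self):
        self.db = {}           # task ID -> execution parameters (step 1)
        self.delay_queue = []  # min-heap of (trigger_time, task ID) (step 2)

    def register(self, task_id, params, delay_seconds):
        # Step 1: persist the execution parameter in the database.
        self.db[task_id] = params
        # Step 2: enqueue the task ID with its delay period.
        heapq.heappush(self.delay_queue, (time.time() + delay_seconds, task_id))

    def poll(self, now=None):
        # Steps 3-4: pop every task whose delay has elapsed, look up its
        # execution parameter, and hand both back for triggering.
        now = time.time() if now is None else now
        triggered = []
        while self.delay_queue and self.delay_queue[0][0] <= now:
            _, task_id = heapq.heappop(self.delay_queue)
            triggered.append((task_id, self.db[task_id]))
        return triggered

proc = TimedTaskProcessor()
proc.register("task-42", {"action": "notify"}, delay_seconds=0)
print(proc.poll())  # -> [('task-42', {'action': 'notify'})]
```

In the sketch the registering instance and the polling instance share one object; in the patent's scheme they can be different timing management instances because the queue and the database are external shared services.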
According to a second aspect of the application, there is provided a timed task processing apparatus, including:
an obtaining module, configured to obtain a first execution parameter of a first timing task and a first task identification (ID) for identifying the first timing task, and to write to a database, or modify or delete information in the database, according to the first execution parameter;
a sending module, configured to send a first delay time period for triggering the first timing task, together with the first task ID, to a distributed cache cluster;
the obtaining module being further configured to obtain, from the distributed cache cluster, a second task ID for identifying a second timing task to be triggered, and to obtain a second execution parameter of the second timing task from the database; and
a triggering module, configured to trigger the execution of the second timing task according to the second task ID and the second execution parameter.
According to a third aspect of the application, there is provided an electronic device, including: at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of the first aspect.
One embodiment of the application has the following advantage or benefit: a first execution parameter of a first timing task and a first task identification (ID) for identifying the first timing task are acquired, and a database is written to, or information in the database is modified or deleted, according to the first execution parameter; a first delay time period for triggering the first timing task is sent, together with the first task ID, to a distributed cache cluster; a second task ID for identifying a second timing task to be triggered is acquired from the distributed cache cluster, and a second execution parameter of the second timing task is acquired from the database; and the execution of the second timing task is triggered according to the second task ID and the second execution parameter. Because the execution of timing tasks is uniformly distributed by the distributed cache cluster, multi-instance scaling is supported, as are containerized deployment and migration.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flowchart of a method for processing a timed task according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for processing a timed task according to another embodiment of the present application;
FIG. 3 is a process diagram of a method for processing a timed task according to the embodiment shown in FIG. 2;
fig. 4 is a schematic flowchart of a method for processing a timed task according to another embodiment of the present application;
FIG. 5 is a process diagram of a method for processing a timed task according to the embodiment shown in FIG. 4;
fig. 6 is a schematic flowchart of a method for processing a timed task according to another embodiment of the present application;
fig. 7 is a schematic flowchart of a method for processing a timed task according to another embodiment of the present application;
FIG. 8 is a process diagram of a method for processing a timed task according to the embodiment shown in FIG. 7;
fig. 9 is a schematic structural diagram of a timed task processing device according to an embodiment of the present application; and
fig. 10 is a block diagram of an electronic device for implementing the timed task processing method according to the embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the application, taken in conjunction with the accompanying drawings, includes various details that aid understanding and are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the application. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Traditional timing task management mainly follows one of two solutions: first, maintaining tasks in the memory of each instance, for example with the Quartz library; second, electing a master and having the master manage all timing tasks centrally. In the first scheme, each instance is responsible for the timing task requests it has received. If a host goes down due to an unexpected factor such as a power outage, additional detection and scheduling logic is required to rebalance the timing rules onto the surviving instances. That detection and scheduling logic must run continuously, and must ensure that once an instance becomes unavailable, its managed timing tasks are accurately migrated, recovered, and triggered in time. In the second scheme, the master-election system consumes extra resources monitoring the master's liveness, and once the volume of timing tasks to be managed grows large, the computing capacity of the single master machine becomes a bottleneck.
In addition, both schemes couple the triggering and the execution of timing tasks: if the task-execution logic needs to be modified, or another task-execution channel added, the whole service must be updated, deployed, and brought online together, which has a large impact.
Fig. 1 is a schematic flowchart of a timing task processing method according to an embodiment of the application. It should be noted that in the following embodiments, the execution subject is, by way of example, a timing task processing device. As shown in fig. 1, the timing task processing method provided in this embodiment may include:
Step S101, acquiring a first execution parameter of a first timing task and a first task identification (ID) for identifying the first timing task, and writing to a database, or modifying or deleting information in the database, according to the first execution parameter.
Specifically, there may be multiple first timing management instances in this embodiment, and each may acquire the execution parameters of a timing task (for example, its timing-related information). After the execution parameter and an identification (ID) for identifying the timing task are obtained, the database may be written to, or information in the database modified or deleted, according to the specific execution parameter. The database in this embodiment may be, for example, a MySQL database.
Step S102, sending a first delay time period for triggering the first timing task and the first task ID to a distributed cache cluster.
For example, the first timing management instance may determine a first delay time period for triggering the first timing task and send it, together with the first task ID, to the distributed cache cluster, so that the distributed cache cluster sets the delayed trigger time of the first task according to the first delay time period.
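A distributed cache such as Redis typically models such a delay queue as a sorted set whose score is the absolute trigger timestamp. The class below is a hypothetical in-memory stand-in for that structure; its `zadd`/`due` names echo Redis's ZADD and ZRANGEBYSCORE but are illustrative, not the patent's API.

```python
import time

class DelayQueue:
    """Hypothetical in-memory stand-in for a Redis sorted set used as a
    delay queue: the member is the task ID, the score is the absolute
    trigger timestamp."""

    def __init__(self):
        self._zset = {}  # task ID -> trigger timestamp (the "score")

    def zadd(self, task_id, trigger_time):
        # Mirrors ZADD: (re)sets the task's trigger time.
        self._zset[task_id] = trigger_time

    def due(self, now):
        # Mirrors ZRANGEBYSCORE -inf..now: every task whose delay elapsed.
        return sorted(tid for tid, t in self._zset.items() if t <= now)

q = DelayQueue()
q.zadd("meeting-weekly", time.time() - 1)    # delay already elapsed
q.zadd("cleanup-daily", time.time() + 3600)  # due in one hour
print(q.due(time.time()))  # -> ['meeting-weekly']
```

Because the sorted set lives in a shared cluster rather than in any one instance's memory, any timing management instance can register or collect due tasks, which is what enables the multi-instance scaling the application claims.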
Step S103, obtaining a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and obtaining a second execution parameter of the second timing task from the database.
For example, the second timing management instance may obtain a second task ID identifying a second timing task to be triggered from the distributed cache cluster, and obtain a second execution parameter of the second timing task from the database.
Step S104, triggering the execution of the second timing task according to the second task ID and the second execution parameter.
It should be noted that in this embodiment, steps S101 to S102 may be performed by a first timing management instance and steps S103 to S104 by a second timing management instance. The first timing management instance and the second timing management instance may be the same instance or different instances; they are distinguished here only to indicate that they may differ. Likewise, the first timing task and the second timing task may be the same or different, and are distinguished only to indicate that they may differ.
To sum up, in this embodiment a first execution parameter of a first timing task and a first task identification (ID) for identifying the first timing task are acquired, and a database is written to, or information in the database is modified or deleted, according to the first execution parameter; a first delay time period for triggering the first timing task is sent, together with the first task ID, to a distributed cache cluster; a second task ID for identifying a second timing task to be triggered is acquired from the distributed cache cluster, and a second execution parameter of the second timing task is acquired from the database; and the execution of the second timing task is triggered according to the second task ID and the second execution parameter. Because the execution of timing tasks is uniformly distributed by the distributed cache cluster, multi-instance scaling is supported, as are containerized deployment and migration.
In one embodiment, before S102, the method further includes:
determining the first delay period according to the first execution parameter.
In particular, the first delay period may be calculated from the first execution parameter and the current time.
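For instance, if the execution parameter carries the next scheduled run time, the delay period is simply that time minus the current time. A small sketch follows; the function name and the clamping to zero are assumptions, since the patent only says the delay is computed from the execution parameter and the current time.

```python
from datetime import datetime, timedelta

def first_delay_seconds(next_run: datetime, now: datetime) -> float:
    """Delay period = scheduled trigger time minus current time, clamped
    at zero so an already-due task triggers immediately (the clamping is
    an illustrative assumption)."""
    return max((next_run - now).total_seconds(), 0.0)

now = datetime(2020, 6, 30, 12, 0, 0)
next_run = now + timedelta(minutes=30)
print(first_delay_seconds(next_run, now))  # -> 1800.0
```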
In one embodiment, the distributed cache cluster comprises Redis or Memcache.
In one embodiment, the distributed message middleware cluster comprises Kafka, ActiveMQ, or RabbitMQ.
Fig. 2 is a schematic flow chart of a timing task processing method according to another embodiment of the present application, and this embodiment is a possible implementation manner based on the embodiment shown in fig. 1. As shown in fig. 2, the method for processing a timing task provided in this embodiment may include:
step S201, receiving the first execution parameter and the first task ID sent by the first request receiving instance, and writing the first execution parameter into the database or modifying or deleting the information in the database according to the first execution parameter.
For example, in this embodiment, the first timing management instance may directly receive the first execution parameter and the first task ID sent by the request receiving instance.
Step S202, sending a first delay time period for triggering the first timing task and the first task ID to a distributed cache cluster.
Step S203, obtaining a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and obtaining a second execution parameter of the second timing task from the database.
Steps S202 to S203 are the same as S102 to S103 in fig. 1, and are not described again here.
Step S204, sending the second task ID and the second execution parameter to a first task execution instance.
Step S205, executing, by the first task execution instance, the second timing task according to the second task ID and the second execution parameter.
Specifically, the second task ID and the second execution parameter may be sent to the first task execution instance, so that the first task execution instance executes the second timing task according to them.
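A minimal sketch of this hand-off; the `dispatch` helper and its hash-based choice of execution instance are illustrative assumptions, since the patent does not specify how an execution instance is selected.

```python
def dispatch(task_id, params, executors):
    """Hand a triggered task to a task execution instance. The hash-based
    pick below is an illustrative assumption, not the patent's policy."""
    executor = executors[hash(task_id) % len(executors)]
    return executor(task_id, params)

def run_task(task_id, params):
    # Stand-in for the first task execution instance.
    return f"executed {task_id} with {params}"

print(dispatch("task-7", {"channel": "email"}, [run_task]))
# -> executed task-7 with {'channel': 'email'}
```

The point of passing both the ID and the full execution parameter is that the execution instance needs no database access of its own, which keeps the execution side replaceable.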
The present embodiment will be described in detail with reference to fig. 3.
Fig. 3 is a process schematic diagram of the timing task processing method according to the embodiment shown in fig. 2. As shown in fig. 3, the following steps may be included:
a1, after receiving a timing task request, the first request receiving instance generates a task ID for identifying the request and sends the timing-related information in the request, together with the generated task ID, to the first timing management instance. The timing task request may include the execution parameters of the timing task. After receiving the execution parameters and the identification ID, the first timing management instance writes to the database, or modifies or deletes information in the database, according to the specific execution parameters. For example, when the request creates a timing task, the first timing management instance writes the information of the created task into the database; when the request modifies a timing task, it modifies the corresponding information in the database; when the request deletes a timing task, it deletes the corresponding information from the database.
b1, when the timing task request modifies or deletes a timing task, the first timing management instance reads the timing task information to be modified or cancelled from the database, performs an existence check, and feeds the check result back to the first request receiving instance.
c1, the first timing management instance sends the first delay time period and the first task ID for triggering the first timing task to the distributed cache cluster; the second timing management instance obtains, from the distributed cache cluster, the second task ID for identifying the second timing task to be triggered, and obtains the second execution parameter of the second timing task from the database.
d1, after acquiring the second task ID and the second execution parameter, the second timing management instance sends them to the first task execution instance, which executes the second timing task accordingly.
e1, when the first request receiving instance receives an instant (non-timed) task request, it sends the task execution information in the request directly to the first task execution instance for execution.
In this embodiment, the first execution parameter and the first task ID sent by a first request receiving instance are received, and the database is written to, or information in it modified or deleted, according to the first execution parameter; a first delay time period for triggering the first timing task is sent, together with the first task ID, to a distributed cache cluster; a second task ID for identifying a second timing task to be triggered is acquired from the distributed cache cluster, and a second execution parameter of the second timing task is acquired from the database; and the second task ID and the second execution parameter are sent to a first task execution instance, which executes the second timing task accordingly. Because the execution of timing tasks is uniformly distributed by the distributed cache cluster, multi-instance scaling is supported, as are containerized deployment and migration.
Fig. 4 is a schematic flowchart of a timing task processing method according to another embodiment of the present application, and on the basis of the embodiment shown in fig. 1, this embodiment is another possible implementation manner. As shown in fig. 4, the method for processing a timing task provided in this embodiment may include:
step S401, writing the first execution parameter and the first task ID into a distributed message middleware cluster through a second request receiving instance.
Step S402, reading the first execution parameter and the first task ID from the distributed message middleware cluster.
Specifically, in this embodiment, when the second request receiving instance receives a timing task request, it writes the first execution parameter and the first task ID into the distributed message middleware cluster, and the cluster returns an acknowledgement to the second request receiving instance. This avoids the situation in which the first timing management instance returns an acknowledgement directly to the second request receiving instance but then fails to process the corresponding timing management request in time.
Step S403, sending a first delay time period for triggering the first timing task and the first task ID to a distributed cache cluster.
Step S404, obtaining a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and obtaining a second execution parameter of the second timing task from the database.
Steps S403 to S404 are the same as S102 to S103 in fig. 1, and are not described again here.
Step S405, writing the instant task information corresponding to the second task ID and the second execution parameter into the distributed message middleware cluster.
Step S406, reading the instant task information from the distributed message middleware cluster through a second task execution instance, and executing the second timing task according to the instant task information.
Similarly, the distributed message middleware cluster also provides buffering between the second timing management instance and the second task execution instance, relieving the processing pressure on the second task execution instance.
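The buffering role of the middleware can be pictured with a plain in-process queue standing in for a middleware topic; the names (`topic`, the message fields) are hypothetical, and a real deployment would use Kafka, ActiveMQ, or RabbitMQ as the document states.

```python
import queue

# In-process stand-in for the middleware topic between the second timing
# management instance and the second task execution instance: the buffer
# absorbs bursts so the executor can drain at its own pace.
topic = queue.Queue()

# Timing management side: publish a burst of triggered tasks.
for i in range(3):
    topic.put({"task_id": f"task-{i}", "params": {"n": i}})

# Task execution side: drain whenever capacity allows.
drained = [topic.get()["task_id"] for _ in range(topic.qsize())]
print(drained)  # -> ['task-0', 'task-1', 'task-2']
```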
This embodiment will be described in detail with reference to fig. 5.
Fig. 5 is a process diagram of the method for processing the timed task according to the embodiment shown in fig. 4. As shown in fig. 5, the following steps may be included:
a2, after the second request receiving module receives a timing task request, if the request modifies or deletes a timing task, the module reads the relevant information of the timing task to be modified or cancelled from the database and performs an existence check.
b2, the second request receiving module writes the timing task execution parameter in the received request, together with the task ID it generated for the request, into the distributed message middleware cluster.
c2, the first timing management module reads the timing task execution parameter from the distributed message middleware cluster.
d2, the first timing management module adds the timing task execution parameter to the database, or modifies or deletes the corresponding timing task execution parameter in the database.
e2, the first timing management module adds the next trigger time and task ID to the delay queue of the distributed cache cluster; alternatively, the second timing management module takes the currently triggered timing task ID out of the delay queue and obtains the execution parameter of the currently triggered timing task from the database via d2.
f2, the second timing management module writes the instant task information corresponding to the currently triggered timing task into the distributed message middleware cluster.
g2, the second task execution instance reads the instant task information from the distributed message middleware cluster and performs the operation.
As can be seen from fig. 5, the request receiving instance, the timing management instance, and the task execution instance in this embodiment are completely decoupled. Therefore, when the task-execution logic needs to be modified, or another task-execution channel added, only the task execution module needs to be updated, achieving lightweight deployment.
In this embodiment, because the execution of timing tasks is uniformly distributed by the distributed cache cluster, multi-instance scaling and containerized deployment and migration are supported, and adding the distributed message middleware cluster buffers the reception of timing task requests, relieving the processing pressure on the first timing management instance. Moreover, because the request receiving instance, the timing management instance, and the task execution instance are completely decoupled, when the task-execution logic needs to be modified or another task-execution channel added, only the task execution module needs to be updated, achieving lightweight deployment.
Fig. 6 is a flowchart illustrating a method for processing a timed task according to another embodiment of the present application, where based on the embodiment shown in fig. 4, a distributed message middleware cluster includes a timed queue and an instant queue. As shown in fig. 6, the method for processing a timing task provided in this embodiment may include:
step S601, writing the first execution parameter and the first task ID into the timing queue through a second request receiving instance.
Step S602, reading the first execution parameter and the first task ID from the timing queue.
Step S603, sending a first delay time period for triggering the first timing task and the first task ID to the distributed cache cluster.
Step S604, obtaining a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and obtaining a second execution parameter of the second timing task from the database.
Steps S603-S604 are the same as S102-S103 in fig. 1, and are not described here again.
Step S605, writing the instant task information and the second execution parameter into the instant queue.
Step S606, reading the instant task information from the instant queue through a second task execution instance, and executing the second timing task according to the instant task information.
Specifically, the request receiving instance writes the received timing task information and instant task information into two different topics of the distributed message middleware cluster, for example topic A and topic B respectively, to be read by the timing management instance and the task execution instance, and then returns a response.
The timing management instance is responsible for consuming timing rule information from topic A, writing it to or modifying it in the database for storage, and writing the next timed task ID into the distributed delay queue of the distributed cache cluster. In addition, the timing management instance maintains a thread pool that acquires the IDs of tasks to be triggered from the delay queue, reads each task's details from the database, calculates the next trigger time and writes it into the delay queue of the distributed cache cluster, and writes the task details into topic B of the distributed message middleware cluster.
In this embodiment, dividing the distributed message middleware cluster into a timing queue and an instant queue enables classified management of the data and facilitates reading by the timing management instance and the task execution instance.
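The two-topic split can be illustrated roughly as follows. The topic names, record fields, and routing rule (`cron` present means a timed request) are assumptions for the sketch; a real deployment would use topics in one of the middleware systems the text names, such as Kafka, rather than in-process lists.

```python
from collections import defaultdict

# Stand-in for the distributed message middleware cluster: one list per topic.
broker = defaultdict(list)

def publish(topic, record):
    broker[topic].append(record)

def consume(topic):
    """Drain all records currently in a topic."""
    records, broker[topic][:] = list(broker[topic]), []
    return records

def receive_request(request):
    """The request receiving instance routes timed-task requests to the
    timing queue (topic A) and instant-task requests to the instant queue
    (topic B), then would return a response to the caller."""
    topic = "topic_a" if request.get("cron") else "topic_b"
    publish(topic, request)

receive_request({"id": "t1", "cron": "0 * * * *", "param": "x"})  # timed -> topic A
receive_request({"id": "t2", "param": "y"})                       # instant -> topic B

# The timing management instance reads only topic A; the task execution
# instance reads only topic B.
print([r["id"] for r in consume("topic_a")])  # prints ['t1']
print([r["id"] for r in consume("topic_b")])  # prints ['t2']
```

Keeping the two kinds of traffic in separate topics is what lets each consumer subscribe only to the records it cares about, which is the "classified management" benefit described above.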
Fig. 7 is a schematic flow chart of a timing task processing method according to another embodiment of the present application, where based on the embodiment shown in fig. 1, a distributed cache cluster includes a first queue, a second queue, and a third queue, where the first queue and the third queue are delay queues, and the second queue is a non-delay queue. As shown in fig. 7, the method for processing a timing task according to this embodiment may include:
step S701, obtaining a first execution parameter of a first timing task and a first task identifier ID for identifying the first timing task, and writing in a database or modifying or deleting information in the database according to the first execution parameter.
In this embodiment, S701 and S101 are similar, and are not described herein again.
Step S702, sending the first delay time period to the distributed cache cluster, and storing the first task ID in the first queue.
Step S703, after the first delay time period expires, transferring the first task ID from the first queue to the second queue through the distributed cache cluster, backing up the first task ID to the third queue, and setting a second delay time period.
Step S704, obtaining the first task ID from the second queue, and obtaining the first execution parameter from the database.
Step S705, determining that the first task ID and the first execution parameter are successfully packed and sent, and sending a successful packing and sending notification to the distributed cache cluster, so that the distributed cache cluster deletes the first task ID in the third queue; alternatively, the first task ID is transferred from the third queue to the second queue by the distributed cache cluster after the second delay period expires.
Step S706, triggering the execution of the first timing task according to the first task ID and the first execution parameter.
Specifically, since the distributed queues of the distributed cache cluster have no message acknowledgment mechanism, and in order to ensure that a timed-task trigger is not lost when an instance goes down before the information is written into the distributed message middleware cluster, this embodiment writes the task ID currently being processed into another delay queue and sets a timeout when the ID is acquired. If the instance does not acknowledge the ID within the timeout period, the task ID is re-processed by another instance, ensuring that no task is lost.
The present embodiment will be described in detail below with reference to fig. 8.
Fig. 8 is a process diagram of the method for processing the timed task according to the embodiment shown in fig. 7. As shown in fig. 8, the following steps may be included:
a3, the first timing management instance stores the first task ID in the first queue, and sets the delay to be a first delay period, for example, 1 hour.
b3, after the first delay period expires, the distributed cache cluster transferring the first task ID from the first queue to the second queue.
c3, the distributed cache cluster backs up the first task ID to the third queue and sets a second delay period, e.g., 30 s.
d3, the second timing management instance obtains the first task ID from the second queue, obtains the first execution parameter from the database, and packages and sends them.
e3.1, the second timing management instance determines that the packaging and sending of the first task ID and the first execution parameter are successful, and sends a packaging and sending success notification to the distributed cache cluster, so that the distributed cache cluster deletes the first task ID in the third queue.
e3.2, if a problem occurs during packaging and sending (e.g., an instance goes down) and the second delay period expires, the distributed cache cluster transfers the first task ID from the third queue back to the second queue.
It will be appreciated that only one of e3.1 and e3.2 above occurs, and that a3 and d3 may be performed by two different timing management instances.
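Steps a3-e3.2 can be sketched with three in-process queues and a logical clock. The function names (`schedule`, `promote_due`, `fetch`, `ack`) are illustrative, and in a real system the three queues would live in the distributed cache cluster (e.g., Redis sorted sets keyed by due time); this is a minimal sketch of the acknowledgment-and-redelivery pattern under those assumptions.

```python
import heapq

first_q, third_q = [], []   # delay queues: heaps of (due_time, task_id)
second_q = []               # non-delay queue (FIFO)

def schedule(task_id, delay, now):
    """a3: store the task ID in the first queue with the first delay period."""
    heapq.heappush(first_q, (now + delay, task_id))

def promote_due(now, second_delay=30):
    """b3/c3: move due IDs to the second queue and back them up to the third
    queue with a second delay period; the same sweep over the third queue
    also re-delivers IDs that were never acknowledged (e3.2)."""
    for q in (first_q, third_q):
        while q and q[0][0] <= now:
            task_id = heapq.heappop(q)[1]
            second_q.append(task_id)
            heapq.heappush(third_q, (now + second_delay, task_id))

def fetch():
    """d3: a timing management instance takes an ID from the second queue."""
    return second_q.pop(0) if second_q else None

def ack(task_id):
    """e3.1: packaging and sending succeeded; delete the backup in the third queue."""
    third_q[:] = [(t, i) for (t, i) in third_q if i != task_id]
    heapq.heapify(third_q)

schedule("t1", 3600, now=0)   # first delay period: 1 hour
promote_due(now=3600)
tid = fetch()                 # 't1' is being processed; backup due at t=3630
# Suppose the instance crashes before calling ack(tid): after the second
# delay period the backup becomes due and the ID is re-delivered.
promote_due(now=3630)
print(fetch())                # prints t1 — the task is not lost
```

The key design point is that popping from the second queue is not destructive in effect: the backup in the third queue survives until an explicit acknowledgment, so exactly one of deletion (e3.1) or redelivery (e3.2) happens for each trigger.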
In this embodiment, a first queue, a second queue, and a third queue are set up in the distributed cache cluster. After the first delay period expires, the distributed cache cluster transfers the first task ID from the first queue to the second queue, backs it up to the third queue, and sets a second delay period. A second timing management instance then either determines that the first task ID and the first execution parameter were packaged and sent successfully and sends a success notification to the distributed cache cluster, which deletes the first task ID from the third queue; or, when the second delay period expires without such a notification, the distributed cache cluster transfers the first task ID from the third queue back to the second queue. This ensures that no task is lost and that timed tasks are processed safely.
Fig. 9 is a schematic structural diagram of a timed task processing device according to an embodiment of the present application. As shown in fig. 9, the timed task processing device provided in this embodiment includes an obtaining module 91, a sending module 92, and a triggering module 93, wherein,
an obtaining module 91, configured to obtain a first execution parameter of a first timing task and a first task identifier ID used for identifying the first timing task, and write in the database or modify or delete information in the database according to the first execution parameter;
a sending module 92, configured to send a first delay time period for triggering the first timing task and the first task ID to a distributed cache cluster;
the obtaining module 91 is further configured to obtain a second task ID used for identifying a second timing task to be triggered from the distributed cache cluster, and obtain a second execution parameter of the second timing task from the database; and
a triggering module 93, configured to trigger execution of the second timing task according to the second task ID and the second execution parameter.
In an embodiment, the obtaining module 91 is specifically configured to receive the first execution parameter and the first task ID sent by the first request receiving instance.
In one embodiment, the sending module 92 is further configured to send the second task ID and the second execution parameter to the first task execution instance; and
the triggering module 93 is further configured to execute the second timing task according to the second task ID and the second execution parameter by using the first task execution instance.
In an embodiment, the obtaining module 91 is specifically configured to:
writing the first execution parameter and the first task ID to a distributed message middleware cluster by a second request receiving instance; and
reading the first execution parameter and the first task ID from the distributed message middleware cluster.
In one embodiment, the triggering module 93 is specifically configured to:
writing the instant task information and the second execution parameter corresponding to the second task ID into the distributed message middleware cluster; and
reading the instant task information from the distributed message middleware cluster through a second task execution instance, and executing the second timing task according to the instant task information.
In one embodiment, the distributed message middleware cluster includes a timing queue and an instant queue;
the obtaining module 91 is specifically configured to:
writing the first execution parameter and the first task ID to the timing queue via the second request receiving instance; and
reading the first execution parameter and the first task ID from the timing queue;
the triggering module 93 is specifically configured to:
writing the instant task information and the second execution parameter into the instant queue; and
reading the instant task information from the instant queue through the second task execution instance.
In one embodiment, the distributed cache cluster includes a first queue, a second queue and a third queue, the first queue and the third queue are delay queues, and the second queue is a non-delay queue;
the sending module 92 is further configured to:
sending the first delay time period to the distributed cache cluster, and storing the first task ID in the first queue; and
transferring, by the distributed cache cluster, the first task ID from the first queue to the second queue and backing up the first task ID to the third queue after the first latency period expires, and setting a second latency period;
the obtaining module 91 is further configured to obtain the first task ID from the second queue, and obtain the first execution parameter from the database; and
the triggering module 93 is further configured to:
determining that the packaging and sending of the first task ID and the first execution parameter is successful, and sending a packaging and sending success notification to the distributed cache cluster, so that the distributed cache cluster deletes the first task ID in the third queue; or, after the second delay period expires, transferring, by the distributed cache cluster, the first task ID from the third queue to the second queue; and
triggering the execution of the first timing task according to the first task ID and the first execution parameter.
In an embodiment, the obtaining module 91 is further configured to determine the first delay time period according to the first execution parameter before sending the first delay time period for triggering the first timing task and the first task ID to the distributed cache cluster.
In one embodiment, the distributed cache cluster comprises a redis or a memcache.
In one embodiment, the distributed message middleware cluster comprises kafka, activemq, or rabbitmq.
The timed task processing device provided in each embodiment of the present application can be used to execute the method shown in each corresponding embodiment, and the implementation manner and principle thereof are the same, and are not described again.
The timed task processing method and device provided by the embodiments of the application can be applied, for example, to an intelligent office management platform. A first execution parameter of a first timed task and a first task identifier ID for identifying the first timed task are obtained, and the database is written to, or information in it modified or deleted, according to the first execution parameter; a first delay period for triggering the first timed task and the first task ID are sent to a distributed cache cluster; a second task ID identifying a second timed task to be triggered is obtained from the distributed cache cluster, and a second execution parameter of the second timed task is obtained from the database; and execution of the second timed task is triggered according to the second task ID and the second execution parameter. Because execution of the timed tasks is uniformly distributed by the distributed cache cluster, multi-instance scaling as well as containerized deployment and migration are supported.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 10 is a block diagram of an electronic device according to the timed task processing method of the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 10, the electronic apparatus includes: one or more processors 1001, memory 1002, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 10 illustrates an example of one processor 1001.
The memory 1002 is a non-transitory computer readable storage medium provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the timed task processing method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the timed task processing method provided by the present application.
The memory 1002, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the timed task processing method in the embodiments of the present application (for example, the obtaining module 91, the sending module 92, and the triggering module 93 shown in fig. 9). The processor 1001 executes various functional applications of the server and data processing, i.e., implements the timed task processing method in the above-described method embodiments, by running the non-transitory software programs, instructions, and modules stored in the memory 1002.
The memory 1002 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device of the timed task processing method, and the like. Further, the memory 1002 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1002 may optionally include memory located remotely from the processor 1001, which may be connected to the electronic device of the timed task processing method via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the timed task processing method may further include: an input device 1003 and an output device 1004. The processor 1001, the memory 1002, the input device 1003, and the output device 1004 may be connected by a bus or other means, and the bus connection is exemplified in fig. 10.
The input device 1003 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the timed task processing method, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, or a joystick. The output device 1004 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, a first execution parameter of a first timing task and a first task Identification (ID) for identifying the first timing task are obtained, and writing is carried out on a database or information in the database is modified or deleted according to the first execution parameter; sending, to a distributed cache cluster, a first delay period for triggering the first timed task and the first task ID; acquiring a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and acquiring a second execution parameter of the second timing task from the database; and triggering the execution of the second timing task according to the second task ID and the second execution parameter, wherein the execution of the timing task is uniformly distributed by a distributed cache cluster, so that multi-instance extension can be supported, and containerization deployment and migration are supported. It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (22)
1. A timed task processing method comprises the following steps:
acquiring a first execution parameter of a first timing task and a first task Identification (ID) for identifying the first timing task, and writing in a database or modifying or deleting information in the database according to the first execution parameter;
sending a first delay time period for triggering the first timing task and the first task ID to a distributed cache cluster;
acquiring a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and acquiring a second execution parameter of the second timing task from the database; and
triggering the execution of the second timing task according to the second task ID and the second execution parameter.
2. The method of claim 1, wherein the obtaining a first execution parameter of a first timed task and a first task Identification (ID) for identifying the first timed task comprises:
receiving the first execution parameter and the first task ID sent by a first request receiving instance.
3. The method of claim 2, wherein said triggering execution of the second timed task according to the second task ID and the second execution parameter comprises:
sending the second task ID and the second execution parameter to a first task execution instance; and executing, by the first task execution instance, the second timed task according to the second task ID and the second execution parameter.
4. The method of claim 1, wherein the obtaining a first execution parameter of a first timed task and a first task Identification (ID) for identifying the first timed task comprises:
writing the first execution parameter and the first task ID to a distributed message middleware cluster by a second request receiving instance; and
reading the first execution parameter and the first task ID from the distributed message middleware cluster.
5. The method of claim 4, wherein said triggering execution of the second timed task according to the second task ID and the second execution parameter comprises:
writing the instant task information and the second execution parameter corresponding to the second task ID into the distributed message middleware cluster; and
reading the instant task information from the distributed message middleware cluster through a second task execution instance, and executing the second timing task according to the instant task information.
6. The method of claim 5, wherein the distributed message middleware cluster includes a timing queue and an instant queue;
the writing, by a second request-receiving instance, the first execution parameter and the first task ID to the distributed message-middleware cluster, comprising:
writing the first execution parameter and the first task ID to the timing queue via a second request receiving instance;
the reading the first execution parameter and the first task ID from the distributed message middleware cluster comprises:
reading the first execution parameter and the first task ID from the timing queue;
the writing the instant task information and the second execution parameter corresponding to the second task ID into the distributed message middleware cluster includes:
writing the instant task information and the second execution parameter into the instant queue; and
the reading the instant task information from the distributed message middleware cluster through a second task execution instance comprises:
reading the instant task information from the instant queue through the second task execution instance.
7. The method of any of claims 1-6, wherein the distributed cache cluster includes a first queue, a second queue, and a third queue, the first queue and the third queue being delayed queues, the second queue being a non-delayed queue;
the sending the first delay period for triggering the first timed task and the first task ID to a distributed cache cluster, comprising:
sending the first delay time period to the distributed cache cluster, and storing the first task ID in the first queue;
the method further comprises the following steps:
transferring, by the distributed cache cluster, the first task ID from the first queue to the second queue and backing up the first task ID to the third queue after the first latency period expires, and setting a second latency period;
the acquiring a second task ID for identifying a second timed task to be triggered from the distributed cache cluster, and acquiring a second execution parameter of the second timed task from the database, including:
obtaining the first task ID from the second queue and the first execution parameter from the database; and
the method further comprises the following steps:
determining that the packaging and sending of the first task ID and the first execution parameter is successful, and sending a packaging and sending success notification to the distributed cache cluster, so that the distributed cache cluster deletes the first task ID in the third queue; or, after the second delay period expires, transferring, by the distributed cache cluster, the first task ID from the third queue to the second queue;
the triggering the execution of the second timing task according to the second task ID and the second execution parameter includes:
triggering the execution of the first timing task according to the first task ID and the first execution parameter.
8. The method of any of claims 1-7, wherein prior to sending the first delay period for triggering the first timed task and the first task ID to a distributed cache cluster, further comprising:
determining the first delay period according to the first execution parameter.
9. The method of any of claims 1-8, wherein the distributed cache cluster comprises a redis or a memcache.
10. The method of any of claims 4-6, wherein the distributed message middleware cluster comprises kafka, activemq, or rabbitmq.
11. A timed task processing apparatus comprising:
an obtaining module, configured to obtain a first execution parameter of a first timing task and a first task identifier ID used to identify the first timing task, and write in the database or modify or delete information in the database according to the first execution parameter;
a sending module, configured to send a first delay time period and the first task ID, which are used to trigger the first timing task, to a distributed cache cluster;
the obtaining module is further configured to obtain a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and obtain a second execution parameter of the second timing task from the database; and
a triggering module, configured to trigger the execution of the second timing task according to the second task ID and the second execution parameter.
12. The apparatus according to claim 11, wherein the obtaining module is specifically configured to receive the first execution parameter and the first task ID sent by the first request receiving instance.
13. The apparatus of claim 12, wherein:
the sending module is further configured to send the second task ID and the second execution parameter to a first task execution instance; and
the triggering module is further configured to execute the second timing task according to the second task ID and the second execution parameter by the first task execution instance.
14. The apparatus according to claim 11, wherein the obtaining module is specifically configured to:
writing the first execution parameter and the first task ID to a distributed message middleware cluster by a second request receiving instance; and
reading the first execution parameter and the first task ID from the distributed message middleware cluster.
15. The apparatus according to claim 14, wherein the triggering module is specifically configured to:
writing the instant task information and the second execution parameter corresponding to the second task ID into the distributed message middleware cluster; and
reading the instant task information from the distributed message middleware cluster through a second task execution instance, and executing the second timing task according to the instant task information.
16. The apparatus of claim 15, wherein the distributed message middleware cluster includes a timing queue and an instant queue therein;
the acquisition module is specifically configured to:
writing the first execution parameter and the first task ID to the timing queue via the second request receiving instance; and
reading the first execution parameter and the first task ID from the timing queue;
the trigger module is specifically configured to:
writing the instant task information and the second execution parameter into the instant queue; and
reading the instant task information from the instant queue through the second task execution instance.
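The timing-queue/instant-queue split of claims 14–16 can be sketched with Python's standard `queue.Queue` standing in for two middleware topics (a real cluster would be Kafka, ActiveMQ, or RabbitMQ, per claim 20); the payload shapes and names below are illustrative assumptions:

```python
from queue import Queue

# Stand-ins for two queues/topics in a distributed message middleware cluster.
timing_queue = Queue()   # carries (execution_parameter, task_id) submissions
instant_queue = Queue()  # carries (instant_task_info, execution_parameter)

# Second request receiving instance: writes the submission to the timing queue.
timing_queue.put(({"delay": 10, "action": "sync"}, "task-42"))

# Scheduler side: reads the submission from the timing queue.
params, task_id = timing_queue.get()

# When the task becomes due, the trigger writes instant task info
# (a task that is ready to run now) plus its parameters to the instant queue.
instant_queue.put(({"task_id": task_id, "due": True}, params))

# Second task execution instance: reads the instant task info and executes.
info, params = instant_queue.get()
result = f"executed {info['task_id']} with action {params['action']}"
assert result == "executed task-42 with action sync"
```

Separating the two queues decouples submission traffic from execution traffic: request receiving instances only ever produce to the timing queue, and task execution instances only ever consume from the instant queue.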
17. The apparatus of any of claims 11-16, wherein the distributed cache cluster comprises a first queue, a second queue, and a third queue, the first queue and the third queue being delayed queues, the second queue being a non-delayed queue;
the sending module is further configured to:
sending the first delay time period to the distributed cache cluster, and storing the first task ID in the first queue; and
transferring, by the distributed cache cluster, the first task ID from the first queue to the second queue and backing up the first task ID to the third queue after the first delay time period expires, and setting a second delay time period;
the obtaining module is further configured to obtain the first task ID from the second queue, and obtain the first execution parameter from the database; and
the triggering module is further configured to:
determining that the packaging and sending of the first task ID and the first execution parameter is successful, and sending a packaging-and-sending success notification to the distributed cache cluster, so that the distributed cache cluster deletes the first task ID in the third queue; or, after the second delay time period expires, transferring, by the distributed cache cluster, the first task ID from the third queue to the second queue; and
triggering execution of the first timing task according to the first task ID and the first execution parameter.
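The three-queue scheme of claim 17 is effectively an at-least-once delivery mechanism: the first (delayed) queue holds pending IDs, the second (non-delayed) queue holds IDs ready to dispatch, and the third (delayed) queue holds backup copies that are re-delivered unless acknowledged. A minimal sketch, with all class and method names assumed for illustration and an explicit `now` clock in place of wall time (a production version would typically use Redis sorted sets):

```python
import heapq

class ThreeQueueCache:
    """In-memory sketch of the three-queue scheme in claim 17.

    'first' and 'third' are delay queues (keyed by due time);
    'second' is a plain ready list. Time is passed in explicitly
    so the retry logic is deterministic and testable.
    """

    def __init__(self, retry_delay):
        self.first = []    # min-heap of (due_time, task_id): pending tasks
        self.second = []   # task_id: ready to dispatch
        self.third = {}    # task_id -> retry due_time: backup copies
        self.retry_delay = retry_delay   # the "second delay time period"

    def schedule(self, task_id, delay, now):
        heapq.heappush(self.first, (now + delay, task_id))

    def tick(self, now):
        # first -> second once the first delay expires; back up to third
        while self.first and self.first[0][0] <= now:
            _, task_id = heapq.heappop(self.first)
            self.second.append(task_id)
            self.third[task_id] = now + self.retry_delay
        # third -> second once the retry delay expires (no ack received)
        for task_id, due in list(self.third.items()):
            if due <= now and task_id not in self.second:
                self.second.append(task_id)
                self.third[task_id] = now + self.retry_delay

    def pop_ready(self):
        return self.second.pop(0) if self.second else None

    def ack(self, task_id):
        # packaging-and-sending succeeded: drop the backup copy
        self.third.pop(task_id, None)

cache = ThreeQueueCache(retry_delay=30)
cache.schedule("task-7", delay=10, now=0)
cache.tick(now=10)
assert cache.pop_ready() == "task-7"   # delivered once the delay expires
cache.tick(now=40)                     # no ack arrived -> redelivered
assert cache.pop_ready() == "task-7"
cache.ack("task-7")                    # ack received: backup copy deleted
cache.tick(now=80)
assert cache.pop_ready() is None       # no further retries
```

The backup copy in the third queue is what makes a crash between dequeue and dispatch recoverable: if the packaging-and-sending success notification never arrives, the task ID flows back into the ready queue after the second delay time period.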
18. The apparatus of any of claims 11-17, wherein the obtaining module is further configured to determine the first delay time period based on the first execution parameter before sending the first delay time period used to trigger the first timing task and the first task ID to the distributed cache cluster.
19. The apparatus of any of claims 11-18, wherein the distributed cache cluster comprises Redis or Memcache.
20. The apparatus of any of claims 14-16, wherein the distributed message middleware cluster comprises Kafka, ActiveMQ, or RabbitMQ.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010615233.8A CN111782365B (en) | 2020-06-30 | 2020-06-30 | Timed task processing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111782365A true CN111782365A (en) | 2020-10-16 |
CN111782365B CN111782365B (en) | 2024-03-08 |
Family
ID=72759995
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010615233.8A Active CN111782365B (en) | 2020-06-30 | 2020-06-30 | Timed task processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111782365B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108287751A (en) * | 2017-01-09 | 2018-07-17 | 阿里巴巴集团控股有限公司 | Task executing method and device, distributed system |
CN109582466A (en) * | 2017-09-29 | 2019-04-05 | 北京金山软件有限公司 | A kind of timed task executes method, distributed server cluster and electronic equipment |
US20190163545A1 (en) * | 2017-11-30 | 2019-05-30 | Oracle International Corporation | Messages with delayed delivery in an in-database sharded queue |
WO2019237531A1 (en) * | 2018-06-14 | 2019-12-19 | 平安科技(深圳)有限公司 | Network node monitoring method and system |
CN110347492A (en) * | 2019-07-15 | 2019-10-18 | 深圳前海乘势科技有限公司 | Distributed task dispatching method and apparatus based on time parameter method |
Non-Patent Citations (1)
Title |
---|
CAO Haitao; HU Mu; JIANG Houming: "Research on Session Synchronization Technology Based on Instant Copy Between Cluster Nodes", Computer Systems & Applications, no. 03 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112579269A (en) * | 2020-12-04 | 2021-03-30 | 深圳前海微众银行股份有限公司 | Timed task processing method and device |
CN112579269B (en) * | 2020-12-04 | 2024-07-02 | 深圳前海微众银行股份有限公司 | Timing task processing method and device |
CN113448699A (en) * | 2020-12-30 | 2021-09-28 | 北京新氧科技有限公司 | Distributed timed task processing system, method and related device |
CN113792097A (en) * | 2021-01-26 | 2021-12-14 | 北京沃东天骏信息技术有限公司 | Delay trigger estimation method and device for display information, medium and electronic equipment |
CN112966005A (en) * | 2021-03-08 | 2021-06-15 | 平安科技(深圳)有限公司 | Timing message sending method and device, computer equipment and storage medium |
CN112966005B (en) * | 2021-03-08 | 2023-07-25 | 平安科技(深圳)有限公司 | Timing message sending method, device, computer equipment and storage medium |
CN113159590A (en) * | 2021-04-27 | 2021-07-23 | 海信集团控股股份有限公司 | Medication management method, server and mobile terminal |
CN113742044A (en) * | 2021-09-09 | 2021-12-03 | 平安科技(深圳)有限公司 | Timed task management method, device, equipment and storage medium |
CN113934731A (en) * | 2021-11-05 | 2022-01-14 | 盐城金堤科技有限公司 | Task execution method and device, storage medium and electronic equipment |
CN116431318A (en) * | 2023-06-13 | 2023-07-14 | 云账户技术(天津)有限公司 | Timing task processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111782365B (en) | 2024-03-08 |
Similar Documents
Publication | Title |
---|---|
CN111782365B (en) | Timed task processing method, device, equipment and storage medium | |
CN110806923B (en) | Parallel processing method and device for block chain tasks, electronic equipment and medium | |
CN111694857B (en) | Method, device, electronic equipment and computer readable medium for storing resource data | |
CN112527899A (en) | Data synchronization method, device, equipment and storage medium | |
CN111694646A (en) | Resource scheduling method and device, electronic equipment and computer readable storage medium | |
CN111782147B (en) | Method and device for cluster expansion and contraction capacity | |
CN110647570B (en) | Data processing method and device and electronic equipment | |
KR20210036874A (en) | Method and apparatus for processing development machine operation task, device and storage medium | |
CN111930487A (en) | Job flow scheduling method and device, electronic equipment and storage medium | |
CN111510480B (en) | Request sending method and device and first server | |
KR20210092689A (en) | Method and apparatus for traversing graph database | |
CN111600790B (en) | Block chain based message processing method, device, equipment and storage medium | |
CN110995504A (en) | Micro-service node exception handling method, device and system | |
CN112540914A (en) | Execution method, execution device, server and storage medium for unit test | |
CN110750419B (en) | Offline task processing method and device, electronic equipment and storage medium | |
CN110688229B (en) | Task processing method and device | |
CN115576684A (en) | Task processing method and device, electronic equipment and storage medium | |
CN112565356A (en) | Data storage method and device and electronic equipment | |
CN111782341A (en) | Method and apparatus for managing clusters | |
EP3859529A2 (en) | Backup management method and system, electronic device and medium | |
CN111782357B (en) | Label control method and device, electronic equipment and readable storage medium | |
CN111966471A (en) | Access method, device, electronic equipment and computer storage medium | |
CN111258954B (en) | Data migration method, device, equipment and storage medium | |
CN112099933B (en) | Task operation and query method and device, electronic equipment and storage medium | |
CN113961641A (en) | Database synchronization method, device, equipment and storage medium |
Legal Events
Date | Code | Title |
---|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |