CN111782365B - Timed task processing method, device, equipment and storage medium - Google Patents

Timed task processing method, device, equipment and storage medium

Info

Publication number
CN111782365B
CN111782365B CN202010615233.8A
Authority
CN
China
Prior art keywords
task
queue
timing
execution
execution parameter
Prior art date
Legal status
Active
Application number
CN202010615233.8A
Other languages
Chinese (zh)
Other versions
CN111782365A (en
Inventor
彭程 (Peng Cheng)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010615233.8A priority Critical patent/CN111782365B/en
Publication of CN111782365A publication Critical patent/CN111782365A/en
Application granted granted Critical
Publication of CN111782365B publication Critical patent/CN111782365B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485Task life-cycle, e.g. stopping, restarting, resuming execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/109Time management, e.g. calendars, reminders, meetings or time accounting
    • G06Q10/1093Calendar-based scheduling for persons or groups
    • G06Q10/1095Meeting or appointment


Abstract

The application discloses a timing task processing method, device, equipment, and storage medium, relating to the fields of intelligent office-management platforms and deep learning. The specific implementation scheme is as follows: acquiring a first execution parameter of a first timing task and a first task identifier (ID) for identifying the first timing task, and writing to a database, or modifying or deleting information in the database, according to the first execution parameter; sending a first delay period for triggering the first timing task, together with the first task ID, to a distributed cache cluster; acquiring from the distributed cache cluster a second task ID for identifying a second timing task to be triggered, and acquiring a second execution parameter of the second timing task from the database; and triggering execution of the second timing task according to the second task ID and the second execution parameter. Because execution of the timing tasks is uniformly distributed by the distributed cache cluster, multi-instance scaling can be supported, as can containerized deployment and migration.

Description

Timed task processing method, device, equipment and storage medium
Technical Field
Embodiments of the application relate to intelligent office-management platforms and deep learning, and in particular to a timing task processing method, a timing task processing device, timing task processing equipment, and a storage medium.
Background
In an online collaborative office platform, many back-end services rely on timing tasks: a meeting service needs to store weekly regular meetings and periodically notify users, a project service needs to remind participants to finish their work before a deadline, and some services need to clean up data on a schedule. In existing schemes, timing task management is maintained in the memory of each instance, so if the task execution logic needs to be modified, or another task execution channel needs to be added, the whole service must be updated, deployed, and brought online together, which has a large impact.
Disclosure of Invention
The application provides a timing task processing method, a timing task processing device, timing task processing equipment and a storage medium.
According to a first aspect of the present application, there is provided a timed task processing method, including:
acquiring a first execution parameter of a first timing task and a first task identifier (ID) for identifying the first timing task, and writing to a database, or modifying or deleting information in the database, according to the first execution parameter;
sending a first delay period for triggering the first timing task and the first task ID to a distributed cache cluster;
acquiring, from the distributed cache cluster, a second task ID for identifying a second timing task to be triggered, and acquiring a second execution parameter of the second timing task from the database; and
triggering execution of the second timing task according to the second task ID and the second execution parameter.
According to a second aspect of the present application, there is provided a timed task processing device, comprising:
an acquisition module, configured to acquire a first execution parameter of a first timing task and a first task identifier (ID) for identifying the first timing task, and to write to a database, or modify or delete information in the database, according to the first execution parameter;
a sending module, configured to send a first delay period for triggering the first timing task and the first task ID to a distributed cache cluster;
the acquisition module being further configured to acquire, from the distributed cache cluster, a second task ID for identifying a second timing task to be triggered, and to acquire a second execution parameter of the second timing task from the database; and
a triggering module, configured to trigger execution of the second timing task according to the second task ID and the second execution parameter.
According to a third aspect of the present application, there is provided an electronic device, including: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect described above.
According to a fourth aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect.
According to a fifth aspect of the present application, there is provided a computer program product, comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program; the at least one processor executes the computer program, causing the electronic device to perform the method of the first aspect.
One embodiment of the present application has the following advantages or benefits: a first execution parameter of a first timing task and a first task identifier (ID) for identifying the first timing task are acquired, and a database is written to, or information in the database is modified or deleted, according to the first execution parameter; a first delay period for triggering the first timing task and the first task ID are sent to a distributed cache cluster; a second task ID for identifying a second timing task to be triggered is acquired from the distributed cache cluster, and a second execution parameter of the second timing task is acquired from the database; and execution of the second timing task is triggered according to the second task ID and the second execution parameter. Because execution of the timing tasks is uniformly distributed by the distributed cache cluster, multi-instance scaling can be supported, as can containerized deployment and migration.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is a flow chart of a method for processing a timing task according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for processing a timing task according to another embodiment of the present disclosure;
FIG. 3 is a process diagram of the timing task processing method provided in the embodiment shown in FIG. 2;
FIG. 4 is a flowchart of a method for processing a timing task according to another embodiment of the present disclosure;
FIG. 5 is a process diagram of the timing task processing method provided by the embodiment shown in FIG. 4;
FIG. 6 is a flowchart of a timing task processing method according to another embodiment of the present disclosure;
FIG. 7 is a flowchart of a method for processing a timed task according to another embodiment of the present disclosure;
FIG. 8 is a process diagram of the timing task processing method provided by the embodiment shown in FIG. 7;
FIG. 9 is a schematic diagram of a timing task processing device according to an embodiment of the present disclosure; and
Fig. 10 is a block diagram of an electronic device for implementing the timed task processing method of an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Traditional timing task management mainly follows one of two solutions: in the first, timing tasks are maintained in the memory of each instance, for example using the Quartz library; in the second, a master is chosen through a leader-election mechanism and subsequently manages all timing tasks uniformly. In the first scheme, each instance is responsible only for the timed task requests it receives. If an instance goes down due to an uncontrollable factor such as a power outage, additional detection and scheduling logic is required to load-balance its timing rules onto the surviving instances. Such detection and scheduling logic must run continuously to ensure that, once an instance becomes unavailable, the timing tasks it managed can be accurately migrated, restored, and triggered in time. In the second scheme, master liveness monitoring also relies on the leader-election mechanism, and once the number of timing tasks to be managed grows large, the computing capacity of the single master can become a bottleneck.
In addition, both schemes couple the triggering and execution of timing tasks: if the task execution logic needs to be modified, or another task execution channel needs to be added, the whole service must be updated, deployed, and brought online together, which has a large impact.
Fig. 1 is a flow chart of a timed task processing method according to an embodiment of the present application. It should be noted that, in the following embodiments, the execution body is taken to be a timed task processing device by way of example. As shown in fig. 1, the timing task processing method provided in this embodiment may include:
step S101, a first execution parameter of a first timing task and a first task identifier ID for identifying the first timing task are obtained, and a database is written in or information in the database is modified or deleted according to the first execution parameter.
Specifically, there may be multiple first timing management instances in this embodiment, each of which can acquire the execution parameters of a timing task. The execution parameters may include the specific task, the task start time, the task execution interval duration, the unit of the interval, the task end time, and other parameters. After acquiring the execution parameters and an identifier (ID) for identifying the timing task, the instance writes to the database, or modifies or deletes information in the database, according to the specific execution parameters. The database in this embodiment may be, for example, a MySQL database.
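The execution-parameter record and the three database operations described above can be sketched as follows. This is an illustrative Python sketch, not taken from the patent: the field names and the dict-based stand-in for the MySQL database are assumptions.

```python
from dataclasses import dataclass, asdict

@dataclass
class ExecutionParams:
    """Hypothetical execution parameters of a timing task (field names assumed)."""
    task: str           # the specific task to run
    start_time: float   # task start time (epoch seconds)
    interval: int       # task execution interval duration
    interval_unit: str  # unit of the interval, e.g. "seconds", "days"
    end_time: float     # task end time (epoch seconds)

# A plain dict keyed by task ID stands in for the MySQL database.
database: dict[str, dict] = {}

def write_task(task_id: str, params: ExecutionParams) -> None:
    """Create: write the execution parameters into the database."""
    database[task_id] = asdict(params)

def modify_task(task_id: str, **changes) -> None:
    """Modify: update stored execution parameters in place."""
    database[task_id].update(changes)

def delete_task(task_id: str) -> None:
    """Delete: remove the timing task's record."""
    database.pop(task_id, None)
```

In a real deployment these would be SQL statements against the database; the dict merely illustrates the create/modify/delete branching that the timing management instance performs.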
Step S102, a first delay period for triggering the first timing task and the first task ID are sent to a distributed cache cluster.
For example, the first timing management instance may determine a first delay period for triggering the first timing task, send it to the distributed cache cluster, and cause the distributed cache cluster to set a delayed trigger for the first timing task according to the first delay period.
Step S103, acquiring a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and acquiring a second execution parameter of the second timing task from the database.
For example, a second timing management instance may obtain a second task ID from the distributed cache cluster identifying a second timing task to be triggered and obtain a second execution parameter of the second timing task from the database.
Step S104, execution of the second timing task is triggered according to the second task ID and the second execution parameter.
It should be noted that steps S101 to S102 in this embodiment may be performed by a first timing management instance, and steps S103 to S104 by a second timing management instance. The first and second timing management instances may be the same instance or different instances; they are distinguished here only to indicate that they may differ. Similarly, the first timing task and the second timing task may be the same or different, and are likewise distinguished only to indicate that they may differ.
In summary, in this embodiment, the first execution parameter of the first timing task and the first task identifier (ID) for identifying it are acquired, and the database is written to, or information in the database is modified or deleted, according to the first execution parameter; the first delay period for triggering the first timing task and the first task ID are sent to a distributed cache cluster; a second task ID for identifying a second timing task to be triggered is acquired from the distributed cache cluster, and the second execution parameter of the second timing task is acquired from the database; and execution of the second timing task is triggered according to the second task ID and the second execution parameter. Because execution of the timing tasks is uniformly distributed by the distributed cache cluster, multi-instance scaling can be supported, as can containerized deployment and migration.
In one embodiment, before S102, further comprising:
and determining the first delay time period according to the first execution parameter.
Specifically, the first delay period may be calculated from the first execution parameter and the current time.
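This calculation can be sketched as the difference between the next trigger time (derived from the execution parameters) and the current time. A minimal sketch under the assumption of a fixed interval in seconds; the function and parameter names are illustrative, not from the patent:

```python
import time

def compute_delay(start_time: float, interval_s: float, now=None) -> float:
    """Seconds until the next trigger at or after `now`, for a task that
    first fires at `start_time` and then every `interval_s` seconds."""
    now = time.time() if now is None else now
    if now <= start_time:
        return start_time - now
    # Count the whole intervals already elapsed since the start time,
    # then step forward to the next boundary.
    elapsed = now - start_time
    periods = int(elapsed // interval_s) + 1
    return start_time + periods * interval_s - now
```

The resulting delay period, rather than an absolute timestamp, is what gets sent to the distributed cache cluster along with the task ID.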
In one embodiment, the distributed cache cluster includes Redis or Memcached.
In one embodiment, the distributed message middleware cluster includes Kafka, ActiveMQ, or RabbitMQ.
Fig. 2 is a schematic flow chart of a timing task processing method according to another embodiment of the present application; this embodiment is one possible implementation based on the embodiment shown in fig. 1. As shown in fig. 2, the timing task processing method provided in this embodiment may include:
step S201, receiving the first execution parameter and the first task ID sent by the first request receiving instance, and writing the database or modifying or deleting the information in the database according to the first execution parameter.
For example, the first timing management instance in this embodiment may directly receive the first execution parameter and the first task ID sent by the request receiving instance.
Step S202, a first delay period for triggering the first timing task and the first task ID are sent to a distributed cache cluster.
Step S203, obtaining a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and obtaining a second execution parameter of the second timing task from the database.
Steps S202 to S203 are the same as steps S102 to S103 in fig. 1, and are not described here again.
Step S204, the second task ID and the second execution parameter are sent to a first task execution instance.
Step S205, the first task execution instance executes the second timing task according to the second task ID and the second execution parameter.
Specifically, the second task ID and the second execution parameter may be sent to the first task execution instance, such that the first task execution instance executes the second timed task according to the second task ID and the second execution parameter.
The present embodiment will be described in detail with reference to fig. 3.
Fig. 3 is a process schematic diagram of the timing task processing method provided in the embodiment shown in fig. 2. As shown in fig. 3, the steps may be included as follows:
a1. After receiving a timed task request, the first request receiving instance generates a task ID identifying the request and sends the timing-related information in the request, together with the generated task ID, to the first timing management instance. The timed task request may include the execution parameters of the timing task, such as the specific task, the task start time, the task execution interval duration, the unit of the interval, and the task end time. When the first timing management instance receives the execution parameters and the identifier (ID) of the timing task, it writes to the database, or modifies or deletes information in the database, according to the specific execution parameters: when the request is a create-timing-task request, the first timing management instance writes the creation-related information into the database; when the request is a modify-timing-task request, it modifies the corresponding information in the database; and when the request is a delete-timing-task request, it deletes the corresponding information from the database.
b1. When the timed task request is a modify- or delete-timing-task request, the first timing management instance reads the timing task information to be modified or cancelled from the database, performs an existence check, and feeds the result of the check back to the first request receiving instance.
c1. The first timing management instance sends the first delay period for triggering the first timing task, together with the first task ID, to the distributed cache cluster; the second timing management instance obtains from the distributed cache cluster a second task ID identifying a second timing task to be triggered, and obtains the second execution parameter of the second timing task from the database.
d1. After obtaining the second task ID and the second execution parameter, the second timing management instance sends them to a first task execution instance, which executes the second timing task according to the second task ID and the second execution parameter.
e1. When the first request receiving instance receives an instant task request, the task execution information in the request is sent directly to the first task execution instance for execution.
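The delay-queue behaviour of the distributed cache cluster in steps c1 and d1 can be sketched with an in-process priority queue. In a real deployment this role would be played by, e.g., a Redis sorted set keyed by trigger time; the heap-based stand-in below is an illustrative assumption:

```python
import heapq

class DelayQueue:
    """In-memory stand-in for the distributed delay queue: task IDs
    become eligible for pickup once their trigger time is reached."""

    def __init__(self):
        self._heap = []  # (trigger_time, task_id) pairs, min-heap order

    def push(self, task_id: str, trigger_time: float) -> None:
        """Register a task to be triggered at `trigger_time`."""
        heapq.heappush(self._heap, (trigger_time, task_id))

    def pop_due(self, now: float) -> list[str]:
        """Remove and return all task IDs whose trigger time is <= now,
        earliest first; the timing management instance polls this."""
        due = []
        while self._heap and self._heap[0][0] <= now:
            _, task_id = heapq.heappop(self._heap)
            due.append(task_id)
        return due
```

Only task IDs travel through the queue; the execution parameters themselves stay in the database and are looked up at trigger time, as in step c1.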
In this embodiment, the first execution parameter and the first task ID sent by the first request receiving instance are received, and the database is written to, or information in the database is modified or deleted, according to the first execution parameter; the first delay period for triggering the first timing task and the first task ID are sent to the distributed cache cluster; the second task ID identifying the second timing task to be triggered is acquired from the distributed cache cluster, and the second execution parameter of the second timing task is acquired from the database; and the second task ID and the second execution parameter are sent to the first task execution instance, which executes the second timing task accordingly. Because the timing tasks are uniformly distributed by the distributed cache cluster, multi-instance scaling, containerized deployment, and migration are supported.
Fig. 4 is a flow chart of a timing task processing method according to another embodiment of the present application, and on the basis of the embodiment shown in fig. 1, this embodiment is another possible implementation manner. As shown in fig. 4, the timing task processing method provided in this embodiment may include:
Step S401, writing, by a second request receiving instance, the first execution parameter and the first task ID into a distributed message middleware cluster.
Step S402, reading the first execution parameter and the first task ID from the distributed message middleware cluster.
Specifically, in this embodiment, when the second request receiving instance receives a timing task request, it writes the first execution parameter and the first task ID into the distributed message middleware cluster, and the middleware cluster returns confirmation feedback to the second request receiving instance. This avoids the situation in which the first timing management instance returns confirmation directly to the second request receiving instance but then fails to process the corresponding timing management request in time; the distributed message middleware cluster thus provides a buffer for incoming timing task requests and relieves the processing pressure on the first timing management instance.
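The buffering role just described can be illustrated with a bounded queue between the request receiver and the timing management instance. The `queue.Queue` stand-in (for, e.g., a Kafka or RabbitMQ topic) and the function names are assumptions for illustration only:

```python
import queue

# Bounded buffer standing in for a message-middleware topic: the
# request-receiving instance gets its acknowledgement as soon as the
# message is enqueued, decoupling it from the slower consumer.
middleware = queue.Queue(maxsize=1000)

def receive_request(task_id, params) -> bool:
    """Request-receiving instance: enqueue and acknowledge immediately."""
    try:
        middleware.put_nowait((task_id, params))
        return True   # confirmation feedback to the caller
    except queue.Full:
        return False  # back-pressure instead of overloading the consumer

def timing_management_poll():
    """Timing management instance: consume at its own pace."""
    try:
        return middleware.get_nowait()
    except queue.Empty:
        return None
```

The producer never waits on the consumer, which is exactly the pressure-relief property the middleware cluster provides here.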
Step S403, transmitting the first delay period for triggering the first timing task and the first task ID to a distributed cache cluster.
Step S404, acquiring a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and acquiring a second execution parameter of the second timing task from the database.
Steps S403 to S404 are the same as steps S102 to S103 in fig. 1, and are not described here again.
Step S405, the instant task information corresponding to the second task ID and the second execution parameter is written into the distributed message middleware cluster.
Step S406, the instant task information is read from the distributed message middleware cluster by a second task execution instance, which executes the second timing task according to the instant task information.
Similarly, the distributed message middleware cluster may also provide a buffering process between the second timing management instance and the second task execution instance to relieve processing pressure of the second task execution instance.
The present embodiment is described in detail below with reference to fig. 5.
Fig. 5 is a process schematic diagram of the timing task processing method provided in the embodiment shown in fig. 4. As shown in fig. 5, the steps may be included as follows:
a2. After receiving a timing task request, if it is a modify- or delete-timing-task request, the second request receiving module reads the timing task information to be modified or cancelled from the database and performs an existence check.
b2. The second request receiving module writes the timing task execution parameters in the received request, together with the task ID it generated for the request, into the distributed message middleware cluster.
c2. The first timing management module reads the timing task execution parameters from the distributed message middleware cluster.
d2. The first timing management module adds new timing task execution parameters to the database, or modifies or deletes the corresponding timing task execution parameters in the database.
e2. The first timing management module adds the next trigger time and the ID of the timing task to a delay queue of the distributed cache cluster; alternatively, the second timing management module takes the ID of the currently triggered timing task out of the delay queue and, as in d2, acquires the execution parameters of the currently triggered timing task from the database.
f2. The second timing management module writes the instant task information corresponding to the currently triggered timing task into the distributed message middleware cluster.
g2. The second task execution instance reads the instant task information from the distributed message middleware cluster and runs it.
As can be seen from fig. 5, the request receiving instance, timing management instance, and task execution instance in this embodiment are completely decoupled. Therefore, when the task execution logic needs to be modified or another task execution channel needs to be added, only the task implementation module needs to be updated, enabling lightweight deployment.
In this embodiment, because execution of the timing tasks is uniformly distributed by the distributed cache cluster, multi-instance scaling, containerized deployment, and migration are supported, and the addition of the distributed message middleware cluster provides a buffer for incoming timing task requests, relieving the processing pressure on the first timing management instance. Moreover, because the request receiving instance, timing management instance, and task execution instance are completely decoupled, when the task execution logic needs to be modified or another task execution channel needs to be added, only the task implementation module needs to be updated, enabling lightweight deployment.
Fig. 6 is a flow chart of a method for processing a timing task according to another embodiment of the present application, where, based on the embodiment shown in fig. 4, a distributed message middleware cluster includes a timing queue and an instant queue. As shown in fig. 6, the timing task processing method provided in this embodiment may include:
step S601, writing the first execution parameter and the first task ID into the timing queue through a second request receiving instance.
Step S602, reading the first execution parameter and the first task ID from the timing queue.
Step S603, transmitting a first delay period for triggering the first timing task and the first task ID to a distributed cache cluster.
Step S604, obtaining a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and obtaining a second execution parameter of the second timing task from the database.
Steps S603-S604 are the same as steps S102-S103 in fig. 1, and are not described here again.
Step S605, writing the instant task information and the second execution parameter into the instant queue.
Step S606, reading the instant task information from the instant queue through a second task execution instance, and executing the second timing task according to the instant task information.
Specifically, the request receiving instance writes the received timing task information and instant task information into two different topics of the distributed message middleware cluster, such as topic A and topic B, to be read by the timing management instance and the task execution instance respectively, and then returns a response.
The timing management instance is responsible for consuming timing rule information from topic A, writing the timing rule information into a database (or modifying existing records) for storage, and writing the ID of the next timing task into a distributed delay queue of the distributed cache cluster. In addition, the timing management instance maintains a thread pool that obtains the task ID to be triggered from the delay queue, reads the specific task information from the database, calculates the next trigger time, writes the next trigger time back into the delay queue of the distributed cache cluster, and writes the specific task information into topic B of the distributed message middleware cluster.
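As an illustrative sketch only (not part of the claimed embodiment), the scheduling loop of the timing management instance can be simulated with standard-library structures: a heap of (trigger time, task ID) pairs stands in for the sorted-set delay queue of the distributed cache cluster (ZADD/ZRANGEBYSCORE/ZREM in a real Redis deployment), a dict stands in for the database, and a plain list stands in for topic B of the message middleware. All names and the interval-based trigger rule are hypothetical.

```python
import heapq

# Hypothetical stand-ins: a heap simulates the delay queue of the cache
# cluster, a dict simulates the database, a list simulates topic B.
delay_queue = []            # entries: (trigger_time, task_id)
database = {
    "task-1": {"interval_s": 3600, "payload": "sync-report"},
}
topic_b = []                # instant task info, consumed by task execution instances

def schedule(task_id, delay_s, now):
    """Write a task's next trigger time into the delay queue."""
    heapq.heappush(delay_queue, (now + delay_s, task_id))

def timing_management_tick(now):
    """One pass of the timing management thread pool: pop due task IDs,
    read the specific task information from the database, compute the next
    trigger time, re-schedule, and publish instant task info to topic B."""
    while delay_queue and delay_queue[0][0] <= now:
        _, task_id = heapq.heappop(delay_queue)        # take the due ID
        params = database[task_id]                     # read details from the DB
        schedule(task_id, params["interval_s"], now)   # write back next trigger time
        topic_b.append({"task_id": task_id, "params": params})

# Example: task-1 is due immediately; one tick triggers it and re-schedules it.
schedule("task-1", 0, now=0)
timing_management_tick(now=0)
print(topic_b)        # one instant-task message for task-1
print(delay_queue)    # task-1 re-scheduled for one interval later
```

In a real deployment the pop-and-reschedule step would have to be atomic (for example a Lua script against Redis) because several timing management instances share the delay queue.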
In this embodiment, the timing queue and the instant queue are provided in the distributed message middleware cluster, so that classified management of the data can be realized, which facilitates reading by the timing management instance and the task execution instance, respectively.
Fig. 7 is a flow chart of a timing task processing method according to another embodiment of the present application, where, based on the embodiment shown in fig. 1, a distributed cache cluster includes a first queue, a second queue, and a third queue, where the first queue and the third queue are delay queues, and the second queue is a non-delay queue. As shown in fig. 7, the timing task processing method provided in this embodiment may include:
step S701, acquiring a first execution parameter of a first timing task and a first task identifier ID for identifying the first timing task, and writing a database or modifying or deleting information in the database according to the first execution parameter.
In this embodiment, S701 and S101 are similar, and are not described here again.
Step S702, transmitting the first delay period to the distributed cache cluster, and storing the first task ID in the first queue.
Step S703, after the expiration of the first delay period, transferring, by the distributed cache cluster, the first task ID from the first queue to the second queue, backing up the first task ID to the third queue, and setting a second delay period.
Step S704, obtaining the first task ID from the second queue, and obtaining the first execution parameter from the database.
Step S705, determining that the package transmission of the first task ID and the first execution parameter is successful, and sending a package transmission success notification to the distributed cache cluster, so that the distributed cache cluster deletes the first task ID in the third queue; or after the second delay period expires, transferring the first task ID from the third queue into the second queue through the distributed cache cluster.
Step S706, triggering the execution of the first timing task according to the first task ID and the first execution parameter.
Specifically, since the distributed queue of the distributed cache cluster has no message acknowledgment mechanism, in order to avoid losing a timing task trigger when an instance goes down before the task information has been written into the distributed message middleware cluster, in this embodiment the task ID currently being processed is simultaneously written into another delay queue when the ID is obtained, and a timeout period is set. If the instance does not acknowledge the ID within the timeout period, the task ID is re-delivered and executed by another instance, ensuring that no task is lost.
The present embodiment is described in detail below with reference to fig. 8.
Fig. 8 is a process schematic diagram of the timing task processing method provided in the embodiment shown in fig. 7. As shown in fig. 8, the steps may be included as follows:
a3, the first timing management instance stores the first task ID into the first queue, and sets the delay time to be a first delay period, for example, 1 hour.
b3, after the first delay period expires, the distributed cache cluster transfers the first task ID from the first queue to the second queue.
c3, backing up the first task ID to the third queue by the distributed cache cluster, and setting a second delay period, such as 30s.
d3, the second timing management instance obtains the first task ID from the second queue, obtains the first execution parameter from the database, and packages and sends the first execution parameter.
And e3.1, the second timing management instance determines that the package transmission of the first task ID and the first execution parameter is successful, and sends a package transmission success notification to the distributed cache cluster, so that the distributed cache cluster deletes the first task ID in the third queue.
e3.2, a problem (e.g., downtime) occurs with the packet transmission, and the second delay period expires, the distributed cache cluster transferring the first task ID from the third queue into the second queue.
It will be appreciated that only one of the above e3.1 and e3.2 occurs and that a3 and d3 may be performed by two different timing management instances.
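The flow of steps a3 through e3.2 can be sketched as a small simulation, offered only as an illustration under stated assumptions: dicts mapping task ID to expiry time stand in for the two delay queues (the first queue and the third, backup queue), a list stands in for the non-delay second queue, and the 1-hour and 30-second periods match the examples above. All function names are hypothetical.

```python
# Hypothetical stand-ins for the three queues in the distributed cache cluster:
# q1 (delay queue) and q3 (backup delay queue) map task_id -> expiry time;
# q2 (non-delay ready queue) is a plain FIFO list.
q1, q2, q3 = {}, [], {}

def cache_cluster_tick(now):
    """Steps b3 and e3.2: move expired IDs from q1, and timed-out
    (unacknowledged) backups from q3, into the ready queue q2."""
    for q in (q1, q3):
        for task_id in [t for t, expiry in q.items() if expiry <= now]:
            del q[task_id]
            q2.append(task_id)

def take_task(now, backup_delay_s=30):
    """Steps d3 and c3: a timing management instance takes an ID from q2,
    and the ID is backed up into q3 with the second delay period."""
    task_id = q2.pop(0)
    q3[task_id] = now + backup_delay_s
    return task_id

def ack(task_id):
    """Step e3.1: packaging/transmission succeeded, delete the backup."""
    del q3[task_id]

# a3: store the first task ID with a first delay period of 1 hour.
q1["task-1"] = 3600
cache_cluster_tick(now=3600)     # b3: delay expired, task-1 moves to q2
tid = take_task(now=3600)        # d3 + c3: taken from q2, backed up in q3
# Failure path (e3.2): no ack arrives; after 30 s the backup is re-delivered.
cache_cluster_tick(now=3630)
assert tid not in q3 and q2 == ["task-1"]
# Success path (e3.1) would instead call ack(tid) to delete the backup.
```

Only one of the two final branches occurs for a given delivery: either `ack` removes the backup, or the timeout tick moves it back into q2 for another instance to retry.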
In this embodiment, a first queue, a second queue and a third queue are set in a distributed cache cluster, after the first delay period expires, the distributed cache cluster transfers the first task ID from the first queue to the second queue, backs up the first task ID to the third queue, sets a second delay period, and then a second timing management instance determines that the package transmission of the first task ID and the first execution parameter is successful, and sends a package transmission success notification to the distributed cache cluster, so that the distributed cache cluster deletes the first task ID in the third queue; or when the second delay period expires, the distributed cache cluster transfers the first task ID from the third queue to the second queue, so that the task is ensured not to be lost, and the safe processing of the timed task is ensured.
Fig. 9 is a schematic structural diagram of a timed task processing device according to an embodiment of the present application. As shown in fig. 9, the timed task processing device provided in this embodiment includes an acquisition module 91, a sending module 92, and a triggering module 93, where,
An obtaining module 91, configured to obtain a first execution parameter of a first timing task and a first task identifier ID for identifying the first timing task, and write the database or modify or delete information in the database according to the first execution parameter;
a sending module 92, configured to send the first delay period for triggering the first timing task and the first task ID to a distributed cache cluster;
the obtaining module 91 is further configured to obtain a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and obtain a second execution parameter of the second timing task from the database; and
and a triggering module 93, configured to trigger execution of the second timing task according to the second task ID and the second execution parameter.
In one embodiment, the obtaining module 91 is specifically configured to receive the first execution parameter and the first task ID sent by the first request receiving instance.
In one embodiment, the sending module 92 is further configured to send the second task ID and the second execution parameter to a first task execution instance; and
The triggering module 93 is further configured to execute, by using a first task execution instance, the second timing task according to the second task ID and the second execution parameter.
In one embodiment, the obtaining module 91 is specifically configured to:
writing the first execution parameters and the first task ID into a distributed message middleware cluster through a second request receiving instance; and
and reading the first execution parameter and the first task ID from the distributed message middleware cluster.
In one embodiment, the triggering module 93 is specifically configured to:
writing the instant task information corresponding to the second task ID and the second execution parameter into the distributed message middleware cluster; and
and reading the instant task information from the distributed message middleware cluster through a second task execution instance, and executing the second timing task according to the instant task information.
In one embodiment, the distributed message middleware cluster comprises a timing queue and an instant queue;
the obtaining module 91 is specifically configured to:
writing the first execution parameter and the first task ID to the timing queue through the second request receiving instance; and
Reading the first execution parameter and the first task ID from the timing queue;
the triggering module 93 is specifically configured to:
writing the instant task information and the second execution parameters into the instant queue; and
and reading the instant task information from the instant queue through the second task execution instance.
In one embodiment, the distributed cache cluster includes a first queue, a second queue, and a third queue, where the first queue and the third queue are delay queues, and the second queue is a non-delay queue;
the sending module 92 is further configured to:
transmitting the first delay period to the distributed cache cluster and storing the first task ID in the first queue; and
after the first delay period expires, transferring the first task ID from the first queue to the second queue through the distributed cache cluster, backing up the first task ID to the third queue, and setting a second delay period;
the obtaining module 91 is further configured to obtain the first task ID from the second queue, and obtain the first execution parameter from the database; and
The triggering module 93 is further configured to:
determining that the packaging transmission of the first task ID and the first execution parameter is successful, and sending a packaging transmission success notification to the distributed cache cluster, so that the distributed cache cluster deletes the first task ID in the third queue; or, after the second delay period expires, transferring, by the distributed cache cluster, the first task ID from the third queue into the second queue; and
and triggering the execution of the first timing task according to the first task ID and the first execution parameter.
In one embodiment, the obtaining module 91 is further configured to determine the first delay period according to the first execution parameter before sending the first delay period for triggering the first timing task and the first task ID to a distributed cache cluster.
In one embodiment, the distributed cache cluster includes redis or memcache.
In one embodiment, the distributed message middleware cluster includes kafka, activemq or rabbitmq.
The timing task processing device provided in each embodiment of the present application may be used to execute the method shown in each corresponding embodiment, and its implementation manner and principle are the same and will not be repeated.
The timed task processing method and device described above may be applied, for example, to an intelligent management office platform. A first execution parameter of a first timing task and a first task identification ID for identifying the first timing task are obtained, and a database is written or information in the database is modified or deleted according to the first execution parameter; a first delay period for triggering the first timing task and the first task ID are sent to a distributed cache cluster; a second task ID for identifying a second timing task to be triggered is obtained from the distributed cache cluster, and a second execution parameter of the second timing task is obtained from the database; and execution of the second timing task is triggered according to the second task ID and the second execution parameter. Because execution of the timing tasks is uniformly distributed by the distributed cache cluster, multi-instance expansion can be supported, and containerized deployment and migration are supported.
According to an embodiment of the present application, there is also provided a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
Fig. 10 is a block diagram of an electronic device for the timed task processing method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 10, the electronic device includes: one or more processors 1001, memory 1002, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the electronic device, including instructions stored in or on memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 1001 is illustrated in fig. 10.
Memory 1002 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by the at least one processor, to cause the at least one processor to perform the timed task processing method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the timed task processing method provided herein.
The memory 1002 is used as a non-transitory computer readable storage medium, and may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules (e.g., the acquisition module 91, the transmission module 92, and the trigger module 93 shown in fig. 9) corresponding to the methods of the timed task processing method in the embodiments of the present application. The processor 1001 executes various functional applications of the server and data processing, that is, implements the timed task processing method in the above-described method embodiment, by running non-transitory software programs, instructions, and modules stored in the memory 1002.
Memory 1002 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the electronic device of the timed task processing method, and the like. In addition, the memory 1002 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 1002 may optionally include memory located remotely from the processor 1001, which may be connected to the electronic device of the timed task processing method via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the timed task processing method may further include: an input device 1003 and an output device 1004. The processor 1001, memory 1002, input device 1003, and output device 1004 may be connected by a bus or other means; connection by a bus is taken as an example in fig. 10.
The input device 1003 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the timed task process, such as a touch screen, keypad, mouse, trackpad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, etc. input devices. The output means 1004 may include a display device, auxiliary lighting means (e.g., LEDs), tactile feedback means (e.g., vibration motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, a first execution parameter of a first timing task and a first task identification ID for identifying the first timing task are obtained, and a database is written or information in the database is modified or deleted according to the first execution parameter; the first delay time period for triggering the first timing task and the first task ID are sent to a distributed cache cluster; acquiring a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and acquiring a second execution parameter of the second timing task from the database; and triggering the execution of the second timing task according to the second task ID and the second execution parameter, wherein the execution of the timing task is uniformly distributed by the distributed cache cluster, so that multi-instance expansion can be supported, and containerized deployment and migration are supported. It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (18)

1. A timed task processing method comprising:
acquiring a first execution parameter of a first timing task and a first task identification ID for identifying the first timing task, and writing a database or modifying or deleting information in the database according to the first execution parameter;
transmitting a first delay period for triggering the first timing task and the first task ID to a distributed cache cluster;
acquiring a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and acquiring a second execution parameter of the second timing task from the database; and
triggering the execution of the second timing task according to the second task ID and the second execution parameter;
The obtaining the first execution parameter of the first timing task and the first task identification ID for identifying the first timing task includes:
writing the first execution parameters and the first task ID into a distributed message middleware cluster through a second request receiving instance; and reading the first execution parameter and the first task ID from the distributed message middleware cluster;
the triggering the execution of the second timing task according to the second task ID and the second execution parameter includes:
writing the instant task information corresponding to the second task ID and the second execution parameter into the distributed message middleware cluster; and reading the instant task information from the distributed message middleware cluster through a second task execution instance, and executing the second timing task according to the instant task information.
2. The method of claim 1, wherein the obtaining the first execution parameter of the first timing task and the first task identification ID for identifying the first timing task comprises:
and receiving the first execution parameter and the first task ID sent by a first request receiving instance.
3. The method of claim 2, wherein the triggering execution of the second timed task according to the second task ID and the second execution parameter comprises:
Transmitting the second task ID and the second execution parameter to a first task execution instance; and executing, by the first task execution instance, the second timed task according to the second task ID and the second execution parameter.
4. The method of claim 1, wherein the distributed message middleware cluster comprises a timing queue and an instant queue;
the writing, by the second request receiving instance, the first execution parameter and the first task ID to the distributed message middleware cluster includes:
writing the first execution parameter and the first task ID into the timing queue through a second request receiving instance;
reading the first execution parameter and the first task ID from a distributed message middleware cluster, including:
reading the first execution parameter and the first task ID from the timing queue;
the writing the instant task information corresponding to the second task ID and the second execution parameter into the distributed message middleware cluster includes:
writing the instant task information and the second execution parameters into the instant queue; and
the reading the instant task information from the distributed message middleware cluster through a second task execution instance comprises the following steps:
And reading the instant task information from the instant queue through the second task execution instance.
5. The method of any of claims 1-4, wherein the distributed cache cluster includes a first queue, a second queue, and a third queue therein, the first queue and the third queue being delay queues, and the second queue being a non-delay queue;
the sending the first delay period for triggering the first timing task and the first task ID to a distributed cache cluster includes:
transmitting the first delay period to the distributed cache cluster and storing the first task ID in the first queue;
the method further comprises the steps of:
after the first delay period expires, transferring the first task ID from the first queue to the second queue through the distributed cache cluster, backing up the first task ID to the third queue, and setting a second delay period;
obtaining a second task ID for identifying a second timed task to be triggered from the distributed cache cluster, and obtaining a second execution parameter of the second timed task from the database, including:
Acquiring the first task ID from the second queue and acquiring the first execution parameter from the database; and
the method further comprises the steps of:
determining that the packaging transmission of the first task ID and the first execution parameter is successful, and sending a packaging transmission success notification to the distributed cache cluster, so that the distributed cache cluster deletes the first task ID in the third queue; or after the second delay period expires, transferring the first task ID from the third queue into the second queue through the distributed cache cluster;
the triggering the execution of the second timing task according to the second task ID and the second execution parameter includes:
and triggering the execution of the first timing task according to the first task ID and the first execution parameter.
6. The method of claim 5, wherein before the sending the first delay period for triggering the first timed task and the first task ID to a distributed cache cluster, further comprises:
and determining the first delay time period according to the first execution parameter.
7. The method of any of claims 1-4, 6, wherein the distributed cache cluster comprises redis or memcache.
8. The method of any of claims 1 or 4, wherein the distributed message middleware cluster comprises kafka, activemq or rabbitmq.
9. A timed task processing device comprising:
the acquisition module is used for acquiring a first execution parameter of a first timing task and a first task identification ID for identifying the first timing task, and writing a database or modifying or deleting information in the database according to the first execution parameter;
a sending module, configured to send a first delay period for triggering the first timing task and the first task ID to a distributed cache cluster;
the acquisition module is further configured to acquire a second task ID for identifying a second timing task to be triggered from the distributed cache cluster, and acquire a second execution parameter of the second timing task from the database; and
the triggering module is used for triggering the execution of the second timing task according to the second task ID and the second execution parameter;
the acquisition module is specifically configured to:
writing the first execution parameters and the first task ID into a distributed message middleware cluster through a second request receiving instance; and reading the first execution parameter and the first task ID from the distributed message middleware cluster;
The triggering module is specifically configured to:
writing the instant task information corresponding to the second task ID and the second execution parameter into the distributed message middleware cluster; and reading the instant task information from the distributed message middleware cluster through a second task execution instance, and executing the second timing task according to the instant task information.
10. The apparatus of claim 9, wherein the obtaining module is specifically configured to receive the first execution parameter and the first task ID sent by the first request receiving instance.
11. The apparatus of claim 10, wherein:
the sending module is further configured to send the second task ID and the second execution parameter to a first task execution instance; and
the triggering module is further configured to execute, by using the first task execution instance, the second timing task according to the second task ID and the second execution parameter.
12. The apparatus of claim 9, wherein the distributed message middleware cluster comprises a timing queue and an instant queue;
the acquisition module is specifically configured to:
writing the first execution parameter and the first task ID to the timing queue through the second request receiving instance; and
Reading the first execution parameter and the first task ID from the timing queue;
the triggering module is specifically configured to:
writing the instant task information and the second execution parameters into the instant queue; and
and reading the instant task information from the instant queue through the second task execution instance.
13. The apparatus of any of claims 9-12, wherein the distributed cache cluster includes a first queue, a second queue, and a third queue therein, the first queue and the third queue being delay queues, and the second queue being a non-delay queue;
the sending module is further configured to:
transmitting the first delay period to the distributed cache cluster and storing the first task ID in the first queue; and
after the first delay period expires, transferring the first task ID from the first queue to the second queue through the distributed cache cluster, backing up the first task ID to the third queue, and setting a second delay period;
the obtaining module is further configured to acquire the first task ID from the second queue, and acquire the first execution parameter from the database; and
The trigger module is further configured to:
determining that the packaging transmission of the first task ID and the first execution parameter is successful, and sending a packaging transmission success notification to the distributed cache cluster, so that the distributed cache cluster deletes the first task ID in the third queue; or, after the second delay period expires, transferring, by the distributed cache cluster, the first task ID from the third queue into the second queue; and
triggering the execution of the first timing task according to the first task ID and the first execution parameter.
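The three-queue hand-off in claim 13 can be sketched as follows. This is an in-memory simulation under stated assumptions — Python heaps stand in for the two deferred queues and a deque for the non-deferred one, with time passed in explicitly; a real system would implement these inside the distributed cache cluster (e.g. redis). The class and method names are illustrative, not from the patent:

```python
import heapq
from collections import deque

class ThreeQueueScheduler:
    """Sketch of the claim-13 flow: a deferred first queue, a non-deferred
    second (ready) queue, and a deferred third queue that backs up each
    in-flight task ID until delivery is acknowledged."""

    def __init__(self, second_delay):
        self.first = []        # min-heap of (due_time, task_id): first queue
        self.second = deque()  # ready task IDs: second queue
        self.third = []        # min-heap of (due_time, task_id): backup queue
        self.second_delay = second_delay

    def submit(self, task_id, first_delay, now):
        """Store the task ID in the first queue with its first delay period."""
        heapq.heappush(self.first, (now + first_delay, task_id))

    def tick(self, now):
        """Move expired IDs first->second, backing each up in the third
        queue with a second delay; redeliver unacknowledged backups."""
        while self.first and self.first[0][0] <= now:
            _, task_id = heapq.heappop(self.first)
            self.second.append(task_id)
            heapq.heappush(self.third, (now + self.second_delay, task_id))
        while self.third and self.third[0][0] <= now:
            _, task_id = heapq.heappop(self.third)
            self.second.append(task_id)

    def poll(self):
        """Obtaining side: take one ready task ID from the second queue."""
        return self.second.popleft() if self.second else None

    def ack(self, task_id):
        """Packaging transmission succeeded: delete the backup copy."""
        self.third = [(t, tid) for t, tid in self.third if tid != task_id]
        heapq.heapify(self.third)
```

The backup queue is what makes the scheme crash-tolerant: if the consumer dies between `poll` and `ack`, the task ID reappears in the second queue once the second delay period expires, so the timed task is retried rather than silently lost.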
14. The apparatus of claim 13, wherein the obtaining module is further configured to determine the first delay period based on the first execution parameter before sending the first delay period for triggering the first timing task and the first task ID to a distributed cache cluster.
15. The apparatus of any of claims 9-12, 14, wherein the distributed cache cluster comprises a redis or memcache.
16. The apparatus of claim 9 or 12, wherein the distributed message middleware cluster comprises kafka, activemq or rabbitmq.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-8.
CN202010615233.8A 2020-06-30 2020-06-30 Timed task processing method, device, equipment and storage medium Active CN111782365B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010615233.8A CN111782365B (en) 2020-06-30 2020-06-30 Timed task processing method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111782365A (en) 2020-10-16
CN111782365B true CN111782365B (en) 2024-03-08

Family

ID=72759995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010615233.8A Active CN111782365B (en) 2020-06-30 2020-06-30 Timed task processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111782365B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112579269B (en) * 2020-12-04 2024-07-02 深圳前海微众银行股份有限公司 Timing task processing method and device
CN113448699A (en) * 2020-12-30 2021-09-28 北京新氧科技有限公司 Distributed timed task processing system, method and related device
CN113792097A (en) * 2021-01-26 2021-12-14 北京沃东天骏信息技术有限公司 Delay trigger estimation method and device for display information, medium and electronic equipment
CN112966005B (en) * 2021-03-08 2023-07-25 平安科技(深圳)有限公司 Timing message sending method, device, computer equipment and storage medium
CN113159590A (en) * 2021-04-27 2021-07-23 海信集团控股股份有限公司 Medication management method, server and mobile terminal
CN113742044A (en) * 2021-09-09 2021-12-03 平安科技(深圳)有限公司 Timed task management method, device, equipment and storage medium
CN113934731A (en) * 2021-11-05 2022-01-14 盐城金堤科技有限公司 Task execution method and device, storage medium and electronic equipment
CN116431318A (en) * 2023-06-13 2023-07-14 云账户技术(天津)有限公司 Timing task processing method and device, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108287751A (en) * 2017-01-09 2018-07-17 阿里巴巴集团控股有限公司 Task executing method and device, distributed system
CN109582466A (en) * 2017-09-29 2019-04-05 北京金山软件有限公司 A kind of timed task executes method, distributed server cluster and electronic equipment
CN110347492A (en) * 2019-07-15 2019-10-18 深圳前海乘势科技有限公司 Distributed task dispatching method and apparatus based on time parameter method
WO2019237531A1 (en) * 2018-06-14 2019-12-19 平安科技(深圳)有限公司 Network node monitoring method and system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10678614B2 (en) * 2017-11-30 2020-06-09 Oracle International Corporation Messages with delayed delivery in an in-database sharded queue


Non-Patent Citations (1)

Title
Research on session synchronization technology based on instant copy between cluster nodes; Cao Haitao; Hu Mu; Jiang Houming; Computer Systems &amp; Applications, No. 3; full text *

Also Published As

Publication number Publication date
CN111782365A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN111782365B (en) Timed task processing method, device, equipment and storage medium
CN110806923B (en) Parallel processing method and device for block chain tasks, electronic equipment and medium
CN111694857B (en) Method, device, electronic equipment and computer readable medium for storing resource data
CN106657314A (en) Cross-data center data synchronization system and method
CN111259205B (en) Graph database traversal method, device, equipment and storage medium
CN110515774A (en) Generation method, device, electronic equipment and the storage medium of memory image
CN111782147B (en) Method and device for cluster expansion and contraction capacity
CN113364877B (en) Data processing method, device, electronic equipment and medium
CN111770176B (en) Traffic scheduling method and device
CN112565356A (en) Data storage method and device and electronic equipment
CN110750419B (en) Offline task processing method and device, electronic equipment and storage medium
CN115576684A (en) Task processing method and device, electronic equipment and storage medium
CN111339187A (en) Data processing method, device, equipment and storage medium based on intelligent contract
CN111600790A (en) Block chain based message processing method, device, equipment and storage medium
CN111782341A (en) Method and apparatus for managing clusters
EP3859529A2 (en) Backup management method and system, electronic device and medium
CN111782357B (en) Label control method and device, electronic equipment and readable storage medium
CN113010498A (en) Data synchronization method and device, computer equipment and storage medium
US9965538B2 (en) Early thread return with secondary event writes
CN112486644A (en) Method, apparatus, device and storage medium for generating information
CN111767149A (en) Scheduling method, device, equipment and storage equipment
CN111966877A (en) Front-end service method, device, equipment and storage medium
CN111352944B (en) Data processing method, device, electronic equipment and storage medium
CN112799585B (en) Data processing method, device, electronic equipment and readable storage medium
WO2022068203A1 (en) Method and apparatus for determining reservation information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant