CN108733459B - Distributed timing method, server and system - Google Patents

Distributed timing method, server and system

Info

Publication number
CN108733459B
CN108733459B (Application CN201710240842.8A)
Authority
CN
China
Prior art keywords
task
lock
task execution
target
execution node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710240842.8A
Other languages
Chinese (zh)
Other versions
CN108733459A (en)
Inventor
朱鑫鑫
刘明
吴春颖
霍丙言
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201710240842.8A
Publication of CN108733459A
Application granted
Publication of CN108733459B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/526 Mutual exclusion algorithms
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/62 Establishing a time schedule for servicing the requests
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiment of the invention provides a distributed timing method, a server, and a system, wherein the method comprises the following steps: determining a first target task execution node; determining a second target task execution node when a preset condition is met, the preset condition being that a target time length is greater than or equal to a preset time length and that task lock release request information sent by the first target task execution node has not been received within the target time length; and sending a task lock to the second target task execution node, the task lock being used for indicating the second target task execution node to execute the current task. The task distributed lock service node can thereby determine that the first target task execution node is down and migrate the task on the first target task execution node to the second target task execution node, so that the global execution instances of the current task remain controllable and the management efficiency of the task execution nodes is improved.

Description

Distributed timing method, server and system
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a distributed timing method, a server, and a system.
Background
A multi-instance timed task is a timed task of which multiple instances may run at the same time. Such tasks are widely used in business systems for asynchronous background processing, for example periodically computing reports, periodically notifying related systems to process transactions, or periodically consuming messages from a queue and processing them accordingly. In practice, managing multi-instance timed tasks raises several difficult questions: how many instances of a task may execute at the same time, how to handle a task whose execution time exceeds its execution period, how to handle abnormal exit of a task, and how to manage and control task execution across multiple machines.
In the prior art, management of multi-instance timed tasks can be implemented with a single-machine system. The single-machine system contains an agent node that resides in the background and cyclically checks whether tasks are due; if a task is due, the agent node executes it. In a specific application, the single-machine system can be deployed on multiple machines to achieve multi-instance execution.
However, if one machine in such a single-machine system goes down, the tasks on that machine cannot be migrated automatically; although the system can be deployed on multiple machines, task migration depends heavily on manual intervention. This reduces the efficiency of both task execution and task migration, leaves the set of execution instances of a single task uncontrollable, and makes management difficult.
Disclosure of Invention
The embodiment of the invention provides a distributed timing method, a server, and a system capable of migrating tasks automatically.
A first aspect of an embodiment of the present invention provides a method for distributed timing, including:
determining a first target task execution node, wherein the first target task execution node is a task execution node which has received a task lock, and the task lock is used for indicating the first target task execution node to execute a current task;
determining a second target task execution node when a preset condition is met, wherein the first target task execution node and the second target task execution node are different task execution nodes, the preset condition is that a target time length is greater than or equal to a preset time length and task lock release request information sent by the first target task execution node has not been received within the target time length, the target time length is the difference between the current time and a target starting time point, the target starting time point is the time point at which the task lock was sent, and the task lock release request information is used for indicating that the first target task execution node has finished executing the current task;
and sending the task lock to the second target task execution node, wherein the task lock is used for indicating the second target task execution node to execute the current task.
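The three steps of this first aspect can be sketched as follows. This is a minimal in-memory illustration: the class name `TaskLockService`, its method names, and the bookkeeping fields are assumptions made for the sketch, not anything the patent prescribes.

```python
import time


class TaskLockService:
    """Minimal sketch of the task distributed lock service node (first aspect).

    All names and the in-memory bookkeeping are illustrative assumptions.
    """

    def __init__(self, preset_duration, candidates):
        self.preset_duration = preset_duration  # preset time length
        self.candidates = list(candidates)      # known task execution nodes
        self.holder = None                      # first target task execution node
        self.granted_at = None                  # target starting time point
        self.released = False

    def grant(self, node, now=None):
        """Send the task lock to a node, recording the grant time."""
        self.holder = node
        self.granted_at = time.time() if now is None else now
        self.released = False
        return node

    def on_release_request(self, node):
        """Task lock release request: the holder finished the current task."""
        if node == self.holder:
            self.released = True

    def check(self, now=None):
        """If the preset condition is met (target time length >= preset time
        length and no release request received), migrate the lock to a
        second target task execution node; otherwise keep the holder."""
        now = time.time() if now is None else now
        target_duration = now - self.granted_at  # current time - grant time
        if target_duration >= self.preset_duration and not self.released:
            second = next(n for n in self.candidates if n != self.holder)
            return self.grant(second, now)       # task migrated automatically
        return self.holder
```

With a preset time length of 10, a node that never releases the lock is treated as down once 10 time units have elapsed, and the lock (and thus the current task) moves to a different node.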
A second aspect of an embodiment of the present invention provides a method for distributed timing, including:
receiving current task execution time information sent by a task distribution center;
configuring a timer according to the current task execution time information, wherein the timer is used for timing the starting time for executing the current task;
if the time counted by the timer is up, sending task lock request information to a task distributed lock service node, wherein the task lock request information is used for requesting the task distributed lock service node to send a task lock, and the task lock is used for indicating to execute a current task;
and if the task lock is received, executing the current task according to the task lock.
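The four steps of this second aspect can be sketched as follows. This is a minimal illustration: the class name `TaskExecAgent`, the `lock_service` interface it calls, and the use of `threading.Timer` are assumptions made for the sketch, not prescribed by the patent.

```python
import threading


class TaskExecAgent:
    """Minimal sketch of a task execution node (second aspect).

    `lock_service` stands in for the task distributed lock service node, and
    the delay stands in for the current task execution time information sent
    by the task distribution center; both interfaces are illustrative.
    """

    def __init__(self, name, lock_service, task):
        self.name = name
        self.lock_service = lock_service
        self.task = task          # the current task to execute
        self.timer = None

    def on_execution_time_info(self, delay_seconds):
        """Configure a timer from the current task execution time info."""
        self.timer = threading.Timer(delay_seconds, self._on_timer)

    def start(self):
        self.timer.start()

    def _on_timer(self):
        # Timer fired: send task lock request information to the lock service.
        granted = self.lock_service.request_lock(self.name)
        if granted:
            # Task lock received: execute the current task, then release.
            self.task()
            self.lock_service.release_lock(self.name)
```

Only the node that actually receives the task lock executes the task, which is how the system bounds the number of concurrent instances.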
A third aspect of an embodiment of the present invention provides a server, including:
the first determining unit is used for determining a first target task executing node, wherein the first target task executing node is a task executing node which has received a task lock, and the task lock is used for indicating the first target task executing node to execute a current task;
the second determining unit is configured to determine a second target task execution node when a preset condition is met, where the first target task execution node and the second target task execution node are different task execution nodes, the preset condition is that a target time length is greater than or equal to a preset time length and task lock release request information sent by the first target task execution node has not been received within the target time length, the target time length is the difference between the current time and a target starting time point, the target starting time point is the time point at which the task lock was sent, and the task lock release request information is used to indicate that the first target task execution node has finished executing the current task;
the first sending unit is used for sending the task lock to the second target task execution node, and the task lock is used for indicating the second target task execution node to execute the current task.
A fourth aspect of an embodiment of the present invention provides a server, including:
the first receiving unit is used for receiving the current task execution time information sent by the task distribution center;
the configuration unit is used for configuring a timer according to the current task execution time information, and the timer is used for timing the starting time for executing the current task;
the first sending unit is used for sending task lock request information to the task distributed lock service node if the time counted by the timer is up, wherein the task lock request information is used for requesting the task distributed lock service node to send a task lock, and the task lock is used for indicating to execute a current task;
and the second receiving unit is used for executing the current task according to the task lock if the task lock is received.
A fifth aspect of an embodiment of the present invention provides a distributed timing system, including: the system comprises a configuration center, a monitoring center, a task distribution center, a task distributed lock service node and a plurality of task execution nodes;
the configuration center is used for sending configuration information to the task distribution center, the configuration information comprises current task execution time information and a task execution node list, and the task execution node list comprises a plurality of task execution nodes for executing the current task;
the task distribution center is used for sending the configuration information to the task distributed lock service node and the plurality of task execution nodes;
the monitoring center is used for monitoring the task distributed lock service node and the plurality of task execution nodes;
The task distributed lock service node is configured to perform the method shown in the first aspect of the embodiment of the present invention, and the task execution node is configured to perform the method shown in the second aspect of the embodiment of the present invention.
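The configuration flow among these five components can be sketched as follows. This is a minimal illustration: the class and method names and the shape of the configuration dictionary are assumptions made for the sketch, not anything the patent prescribes.

```python
class TaskDistributeServer:
    """Minimal sketch of the task distribution center (TDS).

    The configuration center sends it configuration information (current task
    execution time information plus a task execution node list); the TDS then
    forwards that configuration to the task distributed lock service node and
    to every task execution node in the list. All interfaces are illustrative.
    """

    def __init__(self, lock_service_node, task_exec_nodes):
        self.lock_service_node = lock_service_node  # TLS
        self.task_exec_nodes = task_exec_nodes      # name -> TEA

    def on_configuration(self, config):
        # Forward the configuration to the task distributed lock service node...
        self.lock_service_node.configure(config)
        # ...and to each task execution node named in the node list.
        for name in config["task_execution_nodes"]:
            self.task_exec_nodes[name].configure(config)
```

Every node thus receives the same execution-time information, so each task execution node can arm its own timer while the lock service node arbitrates which instance runs.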
A sixth aspect of an embodiment of the present invention provides a server, including:
one or more processors, a memory, a bus system, and one or more programs, the processors and the memory being connected by the bus system;
wherein the one or more programs are stored in the memory, the one or more programs comprising instructions, which when executed by the server, cause the server to perform the method of the first aspect of the embodiments of the invention.
A seventh aspect of an embodiment of the present invention provides a server, including:
one or more processors, a memory, a bus system, and one or more programs, the processors and the memory being connected by the bus system;
wherein the one or more programs are stored in the memory, the one or more programs comprising instructions, which when executed by the server, cause the server to perform the method of the second aspect of the embodiments of the invention.
The embodiment of the invention provides a distributed timing method, a server, and a system, wherein the method comprises the following steps: determining a first target task execution node; determining a second target task execution node when a preset condition is met, the preset condition being that a target time length is greater than or equal to a preset time length and that task lock release request information sent by the first target task execution node has not been received within the target time length; and sending the task lock to the second target task execution node, the task lock being used for indicating the second target task execution node to execute the current task. The task distributed lock service node can thereby determine that the first target task execution node is down and migrate the task on the first target task execution node to the second target task execution node, so that the global execution instances of the current task remain controllable and the management efficiency of the task execution nodes is improved.
Drawings
FIG. 1 is a schematic diagram of a distributed timing system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a server according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps of a method for distributed timing according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating steps of a method for distributed timing according to another embodiment of the present invention;
FIG. 5 is a flowchart illustrating steps of a method for distributed timing according to another embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of a task distributed lock service node according to the present invention;
fig. 7 is a schematic structural diagram of a task execution node according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a distributed timing method that improves task migration efficiency and simplifies management. To better understand the method shown in the embodiment of the invention, the specific structure of a distributed timing system that applies this method is described below:
as shown in fig. 1, the distributed timing system shown in the present embodiment includes:
the system comprises a configuration center 101, a monitoring center 102 and a Task distribution center 103 (English full name: task Distribute Server, english abbreviated: TDS), task distributed Lock service nodes 104 (English full name: task Lock Server, english abbreviated: TLS) and a plurality of Task execution nodes 105 (English full name: task Exec Agent, english abbreviated: TEA).
The distributed timing system shown in this embodiment is used to perform the distributed timing method shown in this embodiment. In this system, a plurality of applications can run at the same time, and each application is called a task.
A task is a logical concept: a piece of work performed by software, or a series of operations that together achieve a certain purpose.
The configuration center 101, the monitoring center 102, the task distribution center 103, the task distributed lock service node 104, and the task execution node 105 shown in this embodiment may be deployed on a server.
For example, if the configuration information corresponding to the configuration center 101 is configured on a server, the server may serve as the configuration center 101 in the distributed timing system.
For another example, the configuration information corresponding to the monitoring center 102 is configured on a server, and then the server can be used as the monitoring center 102 in the distributed timing system.
It should be noted that, in the embodiment, the configuration center 101 and the monitoring center 102 may be configured on different servers, or may be configured on the same server, which is not limited in the embodiment.
Similarly, the configuration information corresponding to the task distribution center 103 may be configured on a server, and then the server may serve as the task distribution center 103 in the distributed timing system.
For another example, the configuration information corresponding to the task distributed lock service node 104 is configured on a server, and then the server can be used as the task distributed lock service node 104 in the distributed timing system.
It should be noted that, in this embodiment, the task distribution center 103 and the task distributed lock service node 104 may be configured on different servers, or may be configured on the same server, which is not limited in this embodiment.
Similarly, the configuration information corresponding to the task execution node 105 may be configured on a server, and the server may then serve as the task execution node 105 in the distributed timing system.
In the present embodiment, different task execution nodes 105 are configured on different servers, so that the different servers acting as task execution nodes 105 execute different instances of the same task.
The specific structure of the server shown in this embodiment is described in detail below:
The specific structure of the server shown in this embodiment is described below with reference to fig. 2, where fig. 2 is a schematic structural diagram of an embodiment of the server provided by the present invention.
The server comprises an input unit 205, a processor 203, an output unit 201, a communication unit 207, a memory 204, a radio frequency circuit 208, etc.
The components communicate via one or more buses. Those skilled in the art will appreciate that the server configuration shown in fig. 2 does not limit the invention; it may be a bus-type or a star-type configuration, and may include more or fewer components than shown, combine certain components, or arrange the components differently.
The server includes:
an output unit 201 for outputting an image to be displayed.
Specifically, the output unit 201 includes, but is not limited to, an image output unit 2011 and a sound output unit 2012.
The image output unit 2011 is configured to output text, pictures, and/or video. The image output unit 2011 may include a display panel, for example a panel in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a field emission display (FED), or the like. Alternatively, the image output unit 2011 may include a reflective display, such as an electrophoretic display or a display using interferometric modulation of light.
The image output unit 2011 may include a single display or multiple displays with different sizes. In the embodiment of the invention, the touch screen may also be used as the display panel of the output unit 201.
For example, when the touch screen detects a touch or proximity gesture on it, the gesture is transmitted to the processor 203 to determine the type of touch event, and the processor 203 then provides corresponding visual output on the display panel according to the type of touch event. Although in fig. 2 the input unit 205 and the output unit 201 implement the input and output functions of the server as two separate components, in some embodiments the touch screen may be integrated with the display panel to implement both. For example, the image output unit 2011 may display various graphical user interfaces (GUIs) as virtual control components, including but not limited to windows, scroll bars, icons, and scrapbooks, for the user to operate by touch.
In an embodiment of the present invention, the image output unit 2011 includes a filter and an amplifier for filtering and amplifying the video output by the processor 203. The sound output unit 2012 includes a digital-to-analog converter for converting the audio signal output by the processor 203 from a digital format to an analog format.
And the processor 203 is used for running corresponding codes and processing the received information to generate and output a corresponding interface.
In particular, the processor 203 is the control center of the server; it connects the various parts of the entire server via various interfaces and lines, and performs the various functions of the server and/or processes data by running or executing software programs and/or modules stored in the memory and invoking data stored in the memory. The processor 203 may be formed by integrated circuits (ICs), for example by a single packaged IC, or by a plurality of packaged ICs with the same or different functions connected together.
For example, the processor 203 may include only a central processing unit (CPU), or may combine a CPU with a graphics processing unit (GPU), a digital signal processor (DSP), and a control chip (e.g., a baseband chip) in the communication unit. In the embodiment of the invention, the CPU may be a single computing core or may include multiple computing cores.
A memory 204 for storing code and data, the code for execution by the processor 203.
In particular, the memory 204 may be used to store software programs and modules, and the processor 203 executes the software programs and modules stored in the memory 204 to perform various functional applications of the server and to implement data processing. The memory 204 mainly includes a program storage area and a data storage area, wherein the program storage area can store an operating system, application programs required for at least one function, such as a sound playing program, an image playing program, and the like; the data storage area may store data (such as audio data, phonebooks, etc.) created according to the use of the server, etc.
In particular embodiments of the present invention, the memory 204 may include volatile memory, such as nonvolatile dynamic random access memory (NVRAM), phase-change random access memory (PRAM), or magnetoresistive random access memory (MRAM), and may also include nonvolatile memory, such as at least one magnetic disk storage device, electrically erasable programmable read-only memory (EEPROM), or a flash memory device such as NOR flash memory or NAND flash memory.
The nonvolatile memory stores an operating system and application programs executed by the processor 203. The processor 203 loads operating programs and data from the nonvolatile memory into memory and stores digital content in mass storage. The operating system includes various components and/or drivers for controlling and managing conventional system tasks, such as memory management, storage device control, power management, etc., as well as facilitating communication between the various software and hardware.
In the embodiment of the invention, the operating system may be an Android system of Google corporation, an iOS system developed by Apple corporation, a Windows operating system developed by Microsoft corporation, or an embedded operating system such as Vxworks.
The application programs include any application installed on a server including, but not limited to, browser, email, instant messaging service, word processing, keyboard virtualization, widgets (widgets), encryption, digital rights management, voice recognition, voice replication, positioning (e.g., functions provided by a global positioning system), music playing, and the like.
An input unit 205, configured to implement interaction between the user and the server and/or input information into the server.
For example, the input unit 205 may receive numeric or character information input by a user to generate signal inputs related to user settings or function controls. In the embodiment of the present invention, the input unit 205 may be a touch screen, or may be other man-machine interaction interfaces, such as physical input keys, a microphone, or other external information capturing devices, such as a camera.
The touch screen disclosed in the embodiment of the invention can collect touch or proximity operations performed on it, for example a user operating on or near the touch screen with a finger, a stylus, or any other suitable object or accessory, and drive the corresponding connection apparatus according to a preset program. Alternatively, the touch screen may comprise two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch operation, converts the detected touch operation into an electrical signal, and transmits the signal to the touch controller; the touch controller receives the electrical signal from the touch detection apparatus, converts it into touch-point coordinates, and sends those to the processor 203.
The touch controller may also receive commands from the processor 203 and execute the commands. In addition, the touch screen may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave.
In other embodiments of the present invention, the physical input keys employed by the input unit 205 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, etc. The input unit 205 in the form of a microphone may collect speech input by a user or the environment and convert it into commands in the form of electrical signals, which are executable by the processor 203.
In other embodiments of the present invention, the input unit 205 may be any of various sensing devices, such as Hall-effect devices, for detecting physical quantities of the server, such as force, moment, pressure, stress, position, displacement, speed, acceleration, angle, angular velocity, number of revolutions, rotation speed, and the time at which the working state changes, and converting them into electrical quantities for detection and control. Other sensing devices may include gravity sensors, tri-axial accelerometers, gyroscopes, electronic compasses, ambient-light sensors, proximity sensors, temperature sensors, humidity sensors, pressure sensors, heart-rate sensors, fingerprint readers, and the like.
A communication unit 207 for establishing a communication channel through which the server connects to a remote server and downloads media data from it. The communication unit 207 may include a wireless local area network (WLAN) module, a Bluetooth module, a baseband module, and the radio frequency (RF) circuit corresponding to each communication module, for performing WLAN communication, Bluetooth communication, infrared communication, and/or cellular communication, such as wideband code division multiple access (WCDMA) and/or high speed downlink packet access (HSDPA). The communication module is used for controlling the communication of each component in the server and can support direct memory access.
In various embodiments of the present invention, the communication modules in the communication unit 207 usually take the form of integrated circuit chips and can be selectively combined; the unit need not include all communication modules and their corresponding antenna groups. For example, the communication unit 207 may include only a baseband chip, a radio frequency chip, and a corresponding antenna to provide communication in one cellular communication system. The server may connect to a cellular network or the Internet via a wireless connection established by the communication unit 207, such as wireless local area network access or WCDMA access. In some alternative embodiments of the present invention, a communication module in the communication unit 207, such as the baseband module, may be integrated into the processor 203, a typical example being the APQ+MDM series platforms provided by Qualcomm.
The radio frequency circuit 208 is used for receiving and transmitting signals in the process of information receiving and transmitting or talking. For example, after receiving downlink information of the base station, the downlink information is processed by the processor 203; in addition, the data of the design uplink is sent to the base station. Typically, the radio frequency circuitry 208 includes well known circuitry for performing these functions, including but not limited to an antenna system, a radio frequency transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a Codec (Codec) chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. In addition, the radio frequency circuitry 208 may also communicate with networks and other devices via wireless communications.
The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), High Speed Uplink Packet Access (HSUPA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
A power supply 209 for powering the various components of the server to maintain its operation. As a general understanding, the power supply 209 may be a built-in battery, such as a common lithium ion battery, a nickel metal hydride battery, etc., and also includes an external power supply, such as an AC adapter, etc., that directly supplies power to the server. In some embodiments of the present invention, the power supply 209 may be defined more broadly, and may include, for example, a power management system, a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light emitting diode), and any other components associated with power generation, management, and distribution of the server.
Based on the distributed timing system shown in fig. 1 and the server shown in fig. 2, a detailed description will be given below of a specific execution flow of the distributed timing method shown in the embodiment of the present invention with reference to fig. 3:
step 301, the configuration center configures the current task to generate configuration information.
Specifically, a developer can implement configuration of the current task through the configuration center.
More specifically, the developer may input, to the configuration center, parameter information such as an execution parameter of the current task and execution time information of the current task, which are included in the configuration information.
Wherein the execution parameters of the current task are used to instruct the task execution node 105 how to execute the current task.
The current task execution time information is used to indicate the time of each task execution node 105 to execute the current task.
The configuration information shown in this embodiment further includes a task execution node list, where the task execution node list includes a plurality of task execution nodes for executing the current task.
That is, the task execution node list shown in this embodiment establishes a correspondence between a current task and a task execution node for executing the current task.
The following describes how to configure the task execution node list:
first determining a maximum number N1 of instances required to perform a current task;
the number N2 of task execution nodes included in the task execution node list is then determined. To ensure that, if any task execution node in the list that is executing the current task goes down, the instances of the current task running on that node can be migrated, the number N2 shown in this embodiment needs to be greater than the maximum number N1 of instances required to execute the current task. In this way, even if any task execution node executing the current task goes down, its instances can be quickly migrated to another task execution node in the list; the detailed task migration process is omitted in this step.
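The sizing rule above (N2 strictly greater than N1, so that any single downed node's instances can always be migrated) can be expressed as a one-line check. This is an illustrative sketch; the function name is hypothetical and not part of the patent:

```python
def node_list_is_migration_safe(n_nodes: int, max_instances: int) -> bool:
    """A task execution node list of size N2 can tolerate the loss of any one
    node (migrating its instances to the remaining nodes) only if N2 is
    strictly greater than the maximum number of instances N1."""
    return n_nodes > max_instances
```

For example, a list of 11 nodes is safe for a task with at most 10 instances, while a list of 10 nodes is not, since losing one node would leave fewer nodes than instances.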
The following describes in detail how task execution nodes are configured into the task execution node list:
And determining a target task execution node which is the task execution node to be configured in the task execution node list.
Optionally, the task-performing node may be determined by the task-distribution center;
specifically, the task distribution center may acquire the IP address of the server configured with the target task execution node, and send the acquired IP address of the server to the configuration center.
Alternatively, the task execution node may send task configuration request information to the task distribution center to request to become the target task execution node.
Specifically, the task configuration request information sent by the task execution node to the task distribution center may include the IP address of the task execution node, so that the task distribution center can send the acquired IP address to the configuration center.
After the target task execution node sends the IP address to the task distribution center, the target task execution node can register with the task distribution center, and the task distribution center sends the IP address of the target task execution node which is successfully registered to the configuration center, so that the configuration center can create the task execution node list.
It should be clear that, in this embodiment, the description of the creation manner of the task execution node list is an optional example, and is not limited, and in a specific application, the creation may also be performed in other manners, for example, the developer may directly input the IP address of the task execution node to the configuration center, etc.
It should be further noted that, in this embodiment, the description of the content included in the configuration information is an optional example, and is not limited, as long as each node of the distributed timing system shown in this embodiment can implement execution of the method for distributed timing according to the configuration information, for example, the configuration information may further include task script information and the like.
Step 302, the configuration center sends the configuration information to the task distribution center.
And step 303, the task distribution center sends the configuration information to the task distributed lock service node.
In this embodiment, after receiving the configuration information, the task distribution center may send the received configuration information to the task distributed lock service node.
Step 304, the task distributed lock service node checks whether the configuration information stored in the task distributed lock service node is synchronous with the configuration information stored in the task distribution center, if not, step 305 is executed, and if yes, step 306 is executed.
In this embodiment, in the process of executing the flow shown in this embodiment, the task distributed lock service node needs to determine whether the configuration information stored in the task distributed lock service node itself is synchronous with the configuration information stored in the task distribution center;
if the configuration information stored in the task distributed lock service node itself is already synchronized with the configuration information stored in the task distribution center, step 306 shown in this embodiment is directly executed, and if the configuration information stored in the task distributed lock service node itself is not synchronized with the configuration information stored in the task distribution center, step 305 is executed.
Step 305, the task distributed lock service node updates the configuration information.
In this embodiment, when the task distributed lock service node determines that the configuration information stored in the task distributed lock service node is not synchronized with the configuration information stored in the task distribution center, the task distributed lock service node acquires the latest configuration information from the task distribution center, and then synchronizes the task distributed lock service node with the task distribution center.
Step 306, the task distributed lock service node configures task lock resources.
Step 306 shown in this embodiment is performed in the case where the configuration information stored by the task distributed lock service node is synchronized with the configuration information stored by the task distribution center.
In this embodiment, as can be seen from the above, the configuration information shown in this embodiment includes a specified number, where the specified number is used to indicate the number of instances in which the current task is executed.
For example, if the designated number included in the configuration information shown in this embodiment is 10, this indicates that the number of instances for executing the current task is 10, that is, the current task is executed by 10 instances.
The task distributed lock service node shown in this embodiment may configure task lock resources according to the specified number.
Specifically, the task lock resource shown in this embodiment includes at least one task lock, and the number of task locks included in the task lock resource is equal to the number of instances.
The task lock shown in this embodiment ensures that, at any given time, one instance of the current task can be executed on only one task execution node.
For example, if the number of instances for executing the current task is 10, the number of task locks included in the task lock resource shown in this embodiment is 10.
It should be noted that the present embodiment is an optional example for explaining the number of instances for performing the current task, and is not limited thereto.
The task lock resource shown in this embodiment includes a task instance number atom, where the task instance number atom is used to count task locks included in the task lock resource.
In the initial state, that is, when the task distributed lock service node has not yet sent any task lock to a task execution node, the count of the task instance number atom is equal to the number of instances.
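The relationship described above between the task lock resource, its task locks, and the task instance number atom can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class and method names are assumptions:

```python
import threading

class TaskLockResource:
    """Hypothetical sketch of the task lock resource: the pool holds one
    task lock per instance of the current task, and the 'task instance
    number atom' is the count of task locks still available."""

    def __init__(self, instance_count: int):
        # Initial state: no lock handed out yet, so the atom's count
        # equals the configured number of instances.
        self.instance_count = instance_count
        self._atom = instance_count           # task instance number atom
        self._mutex = threading.Lock()

    def try_acquire(self) -> bool:
        """Grant a task lock if at least one remains; decrement the atom."""
        with self._mutex:
            if self._atom >= 1:
                self._atom -= 1
                return True
            return False

    def release(self) -> None:
        """A node finished its instance of the task: increment the atom."""
        with self._mutex:
            if self._atom < self.instance_count:
                self._atom += 1

    @property
    def remaining(self) -> int:
        return self._atom
```

With 10 instances configured, at most 10 `try_acquire` calls succeed before a release, matching the rule that the number of task locks equals the number of instances.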
Step 307, the task distribution center sends the configuration information to each task execution node.
The configuration information shown in the embodiment includes current task execution time information, and specific description of the current task execution time information is shown in the above steps, which is not described in detail in this step.
Specifically, the task distribution center shown in this embodiment determines the task execution node list, and the task distribution center may send the configuration information to each task execution node on the task execution node list.
Step 308, the task execution node loads a timer.
Specifically, after the task execution nodes acquire the current task execution time information, the timer can be configured according to the current task execution time information.
The timer is used for timing the starting time of executing the current task.
Specifically, if the timer expires, it indicates that the current task executed by the task execution node expires, and the task execution node needs to execute the current task.
If the timer timing time is not up, the current task executed by the task executing node is not expired, and the task executing node does not need to execute the current task.
Step 309, the task execution node sends task lock request information to the task distributed lock service node.
In this embodiment, if the time counted by the timer has arrived, the task execution node may determine that the current task it executes has come due. When the task execution node determines that the current task needs to be executed, it does not execute the current task directly, but first executes step 309 shown in this embodiment.
The task lock request information is used for requesting the task distributed lock service node to send the task lock, and the task lock is used for indicating the task execution node to execute the current task.
It can be seen that, by adopting the method shown in this embodiment, when the task execution node determines that the timer expires, the task execution node needs to acquire a task lock from the task distributed lock service node, and if the task execution node does not acquire the task lock, the task execution node does not execute the current task, and only when the task execution node has received the task lock, the task execution node executes the current task.
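The node-side behaviour of steps 308 and 309 — load a timer from the execution time information, and on expiry request a task lock instead of executing directly — might look like the following sketch. The `request_task_lock` callback is an assumption standing in for the message to the task distributed lock service node:

```python
import threading

def load_timer(delay_seconds: float, request_task_lock):
    """Step 308/309 sketch: when the timer expires, the node does NOT run
    the current task directly; it first asks the lock service for a task
    lock and marks the task for execution only if the lock is granted."""
    result = {}

    def on_expiry():
        granted = request_task_lock()        # step 309: request a task lock
        result["executed"] = bool(granted)   # execute only when lock granted

    t = threading.Timer(delay_seconds, on_expiry)
    t.start()
    t.join()   # threading.Timer is a Thread, so we can wait for it to fire
    return result
```

A node whose lock request is denied simply skips this firing of the task, which is what keeps the number of concurrently executing instances bounded.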
Step 310, the task lock service node receives the task lock request information.
Step 311, the task distributed lock service node sends a task lock to a first target task execution node.
Specifically, the task distributed lock service node shown in this embodiment determines, according to the task lock request information, whether the number of task locks remaining in the task lock resource is greater than or equal to 1.
As can be seen from the above steps, the task instance number atom shown in this embodiment is used to count task locks, and detailed description is shown in the above steps, which is not repeated in this step.
The number counted by the task instance number atom indicates the number of task execution nodes still required to execute the current task.
After receiving the task lock request information, the task distributed lock service node in this embodiment first determines whether the number counted by the task instance number atom is greater than or equal to 1, if the number counted by the task instance number atom is greater than or equal to 1, it indicates that at least one task execution node is needed to execute the current task, and if the number counted by the task instance number atom is less than 1, it indicates that no task execution node is needed to execute the current task.
In this embodiment, if the number counted by the task instance number atom is greater than or equal to 1, that is, the number of task locks remaining in the task lock resource is greater than or equal to 1, the task distributed lock service node determines a first target task execution node.
The following details the specific process of determining the first target task execution node by the task distributed lock service node:
optionally, if the task distributed lock service node receives task lock request information sent by a task execution node, the task distributed lock service node may determine that the task execution node sending the task lock request information is the first target task execution node if the number counted by the task instance number atom is greater than or equal to 1.
Optionally, if the task distributed lock service node receives task lock request information sent by a plurality of task execution nodes, the task distributed lock service node may create an ordered list.
Wherein the ordered list includes a plurality of the task execution nodes that have sent the task lock request information.
Specifically, the ordering list is a list for ordering the plurality of task execution nodes according to the sequence of the received task lock request information.
More specifically, the task distributed lock service node determines the task execution node with the highest priority ordered in the ordered list as the first target task execution node.
Optionally, the task distributed lock service node may also determine that a random task execution node in the ordered list is the first target task execution node.
In this embodiment, the description of the specific manner of determining the first target task execution node is an optional example, and is not limited as long as the first target task execution node is a task execution node that sends the task lock request information.
After determining the first target task execution node, the task distributed lock service node can send any task lock of the task lock resources to the first target task execution node.
It should be clear that, in this embodiment, the description of ordering the ordered list based on the sequence of receiving the task lock request information is an optional example, without limitation, and in a specific application, the ordered list may also order the task execution nodes based on other ordered sequences, for example, based on the remaining memory capacity of the execution nodes, etc.
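The ordered-list selection above — by default, first-come-first-served on received task lock request information, taking the highest-priority (earliest) node as the first target — can be sketched with a simple FIFO queue. The function name and node identifiers are hypothetical:

```python
from collections import deque

def choose_first_target(requesters):
    """Order the requesting task execution nodes by arrival and pick the
    highest-priority (earliest-arriving) one as the first target task
    execution node. Returns the target and the remaining ordered list."""
    ordered = deque(requesters)        # arrival order defines priority
    if not ordered:
        return None, ordered
    target = ordered.popleft()         # highest priority = earliest request
    return target, ordered
```

The remaining queue is kept so that, if the chosen node later goes down, the next entry can be promoted to second target, as described in step 313.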
Step 312, the task-distributed-lock service node controls the number of task locks included in the task-lock resource to be reduced by one.
Specifically, the task distributed lock service node shown in this embodiment controls the number of task instance number atoms to be reduced by one.
It can be seen that, as long as the task distributed lock service node in this embodiment sends a task lock to the first target task execution node, the task distributed lock service node may control the number counted by the task instance number atom to be reduced by one.
After the task lock is sent by the task distributed lock service node to the first target task execution node, there are two possible execution flows: one is shown in steps 313 to 315, and the other in steps 316 to 317.
Step 313, the task distributed lock service node sends the task lock to the second target task execution node.
Specifically, after the task distributed lock service node sends the task lock to the first target task execution node, it can time the target duration.
More specifically, the task distributed lock service node may be configured with a timer for counting the target duration.
The target duration is the difference between the current time and a target starting time point, and the target starting time point is the time point when the task distributed lock service node sends the task lock.
The task distributed lock service node judges whether task lock release request information sent by the first target task execution node is received within the target duration. The first target task execution node sends the task lock release request information to the task distributed lock service node after determining that it has completed execution of the corresponding current task, so that the task distributed lock service node can determine, according to the task lock release request information, that the first target task execution node has completed execution of the corresponding current task.
In this step, if the task distributed lock service node determines that the preset condition is met, the second target task execution node is determined.
The preset condition is that the target duration timed by the timer is greater than or equal to a preset duration and no task lock release request information sent by the first target task execution node has been received within that duration.
Optionally, the developer shown in this embodiment may directly input the preset duration to the task distributed lock service node, or the developer may send the preset duration to the configuration center 101 or the task distribution center 103, which is not limited in this embodiment, so long as the task distributed lock service node can obtain the preset duration.
When the preset condition is met, the first target task execution node has failed to complete the current task for a long time and is considered to be down, that is, the first target task execution node cannot continue to execute the current task.
In the case that the task distributed lock service node in this embodiment determines that the preset condition is met, for normal execution of the current task, a second target task execution node is determined.
As can be seen from the foregoing, the task distributed lock service node and the task distribution center in this embodiment are synchronously configured with the configuration information, that is, the task distributed lock service node and the task distribution center are both configured with the task execution node list included in the configuration information.
When the task distributed lock service node determines that the first target task execution node in the task execution node list is down, the task distributed lock service node can determine a second target task execution node in the task execution node list.
In this embodiment, no limitation is made on how to determine the second target task execution node in the task execution node list, as long as the first target task execution node and the second target task execution node are different task execution nodes.
Optionally, the task distributed lock service node in this embodiment may determine the second target task node according to the ordered list, and specific description of the ordered list is shown in the above steps, which is not repeated in this step.
In the process of determining the first target task execution node, when the task distributed lock service node determines that the task execution node with the highest priority in the ordered list is the first target task execution node, the task distributed lock service node can delete the first target task execution node, and in this step, the task distributed lock service node can determine that the task execution node with the highest priority in the ordered list is the second target task execution node.
Specifically, the task distributed lock service node in this embodiment controls the number of task locks included in the task lock resource to be increased by one when determining that the preset condition is satisfied.
More specifically, the task distributed lock service node shown in this embodiment sends any one of the task locks included in the task lock resource to the second target task execution node.
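The migration decision of step 313 — time a target duration from the moment the lock was granted, and move the task to a second target node when no release arrives within the preset duration — can be sketched as below. All names are hypothetical; `now` is injectable only to make the sketch testable:

```python
import time

def monitor_and_migrate(lock_granted_at: float, preset_duration: float,
                        release_received: bool, ordered_nodes, now=None):
    """Sketch of the preset condition: if the target duration (current time
    minus the grant time) reaches the preset duration without a task lock
    release request, treat the first target node as down and return the
    next node in the ordered list as the second target. Otherwise None."""
    now = time.monotonic() if now is None else now
    target_duration = now - lock_granted_at
    if release_received or target_duration < preset_duration:
        return None                          # preset condition not met
    # Preset condition met: first target presumed down, migrate the task.
    return ordered_nodes[0] if ordered_nodes else None
```

Returning `None` covers both normal completion (release received) and the case where the timeout has not yet elapsed; only the down-node case yields a second target.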
Step 314, the task distributed lock service node controls the number of task locks included in the task lock resource to be reduced by one.
In this embodiment, after the task distributed lock service node sends the task lock to the second target task execution node, the count of the task instance number atom may be controlled to be reduced by one.
It can be seen that, as long as the task distributed lock service node in this embodiment sends a task lock to the second target task execution node, the task distributed lock service node may control the number counted by the task instance number atom to be reduced by one.
Step 315, the second target task node executes the current task.
In this embodiment, when the second target task execution node receives the task lock, it may execute the current task according to the received configuration information.
Step 316, the first target task execution node sends task lock release request information to the task distributed lock service node.
In this embodiment, the first target task execution node may determine whether the current task is executed and completed when receiving the task lock.
Optionally, the first target task execution node may periodically determine whether the current task is executed and completed when the task lock is received.
And if the first target task execution node judges that the current task execution is completed, the first target task execution node can generate the task lock release request information.
The first target task execution node can send the task lock release request information to the task distributed lock service node, so that the task distributed lock service node determines that the first target task execution node executes the current task according to the task lock release request information.
The first target task execution node shown in this embodiment may send status information to the task distribution center and the monitoring center, where the status information indicates the status of the first target task execution node in executing the current task, so that the monitoring center can raise corresponding alarms and observe task execution according to the status information reported by the first target task execution node.
Step 317, the task distributed lock service node controls the number of task locks included in the task lock resource to be increased by one.
Specifically, when the task distributed lock service node in this embodiment receives the task lock release request information, it may determine that the first target task execution node has completed executing the current task, and then controls the count of the task instance number atom to be increased by one.
In this embodiment, when all task execution nodes that executed the current task have sent the task lock release request information to the task distributed lock service node, the count of the task instance number atom of the task distributed lock service node is again equal to the number of instances; the specific description of the number of instances is given in the above steps and is not repeated here.
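The bookkeeping across steps 312 and 317 — each granted lock decrements the task instance number atom, each release increments it, and once every holder has released, the atom again equals the instance count — can be replayed with a minimal event sketch (function and event names are illustrative assumptions):

```python
def simulate_atom(instance_count: int, events):
    """Replay grant/release events against the task instance number atom:
    'grant' decrements the atom (steps 312/314), 'release' increments it
    (step 317). Returns the final count of the atom."""
    atom = instance_count                 # initial state: atom == instances
    for ev in events:
        if ev == "grant":
            atom -= 1
        elif ev == "release":
            atom += 1
    return atom
```

With 10 instances, 10 grants followed by 10 releases brings the atom back to 10, which is exactly the invariant the embodiment describes.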
The application scenario of the method shown in this embodiment is not limited. For example, the method is suited to complex service systems, such as tasks that require a large amount of batch processing in the background and cannot be completed on a single server, tasks with mutual dependencies, or timing tasks in which multiple tasks simultaneously consume messages from a message queue. A service party can handle such large-scale multi-instance tasks by using the method shown in this embodiment.
In the application scenario of the method in this embodiment, hundreds of stable task types can be carried, including business timing tasks related to the data behind enterprise-to-site business and BOSS business. Task migration and monitoring of multiple instances are handled by the system, so the developer only needs to pay attention to the implementation of business logic and the configuration of the number of instances, without caring about monitoring and dynamic migration.
The method has the beneficial effects that:
the task execution nodes for executing the current task are not isolated from each other: the task distributed lock service node can manage each task execution node. The task distributed lock service node sends the task lock to a first target task execution node that needs to execute the current task, and the first target task execution node executes the current task only when it has received the task lock. If, beyond the preset duration, the task distributed lock service node has not received the task lock release request information sent by the first target task execution node, it can directly determine that the first target task execution node is down, and can migrate the task on the first target task execution node to a second target task execution node. In this way, the overall number of executing instances of the current task can be controlled, the efficiency of managing the task execution nodes is improved, and the number of instances can be adjusted flexibly through the configuration of the current task.
The following describes the specific flow of the method for executing the distributed timing by the task distributed lock service node in detail with reference to fig. 4:
step 401, the task distributed lock service node receives configuration information sent by a task distribution center.
The configuration information includes parameter information such as execution parameters of a current task and execution time information of the current task, and the configuration information further includes a task execution node list, where the task execution node list includes a plurality of task execution nodes for executing the current task, and the task execution node list shown in this embodiment establishes a correspondence between the current task and task execution nodes for executing the current task.
Step 402, the task distributed lock service node checks whether the configuration information stored in the task distributed lock service node is synchronous with the configuration information stored in the task distribution center, if not, step 403 is executed, and if yes, step 404 is executed.
In this embodiment, in the process of executing the flow shown in this embodiment, the task distributed lock service node needs to determine whether the configuration information stored in the task distributed lock service node itself is synchronous with the configuration information stored in the task distribution center;
If the configuration information stored in the task distributed lock service node itself is already synchronized with the configuration information stored in the task distribution center, step 404 shown in this embodiment is directly executed, and if the configuration information stored in the task distributed lock service node itself is not synchronized with the configuration information stored in the task distribution center, step 403 is executed.
Step 403, the task distributed lock service node updates the configuration information.
In this embodiment, when the task distributed lock service node determines that the configuration information stored in the task distributed lock service node is not synchronized with the configuration information stored in the task distribution center, the task distributed lock service node acquires the latest configuration information from the task distribution center, and then synchronizes the task distributed lock service node with the task distribution center.
Step 404, the task distributed lock service node configures task lock resources.
Step 404 shown in this embodiment is performed in the case where the configuration information stored by the task distributed lock service node is synchronized with the configuration information stored by the task distribution center.
In this embodiment, as can be seen from the above, the configuration information shown in this embodiment includes a specified number, where the specified number is used to indicate the number of instances in which the current task is executed.
Specifically, the task lock resource shown in this embodiment includes at least one task lock, and the number of task locks included in the task lock resource is equal to the number of instances.
The task lock shown in this embodiment ensures that, at any given time, one instance of the current task can be executed on only one task execution node.
The task lock resource shown in this embodiment includes a task instance number atom, where the task instance number atom is used to count task locks included in the task lock resource.
In the initial state, that is, before the task distributed lock service node has sent any task lock to a task execution node, the count of the task instance number atom equals the number of instances.
Step 405, the task distributed lock service node receives task lock request information.
Specifically, after the task execution nodes acquire the current task execution time information, the timer can be configured according to the current task execution time information.
The timer is used for timing the starting time of executing the current task.
Specifically, if the timer expires, the current task has fallen due and the task execution node needs to execute it. If the timer has not expired, the current task is not yet due and the task execution node does not need to execute it.
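A minimal sketch of such a timer, assuming `threading.Timer` as a stand-in for the node's timing mechanism; on expiry the callback would send the task lock request information rather than run the task directly (the helper name is illustrative):

```python
import threading

def schedule_lock_request(delay_seconds, send_lock_request):
    """Start a timer from the current task execution time information.
    When it fires, the node sends task lock request information first;
    it does not execute the current task directly."""
    timer = threading.Timer(delay_seconds, send_lock_request)
    timer.start()
    return timer
```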
In this embodiment, if the time counted by the timer has arrived, the task execution node determines that the current task is due. The task execution node does not then execute the current task directly; instead, it first sends task lock request information to the task distributed lock service node.
The task lock request information is used for requesting the task distributed lock service node to send the task lock, and the task lock is used for indicating the task execution node to execute the current task.
It can be seen that, with the method shown in this embodiment, when the task execution node determines that the timer has expired, it must acquire a task lock from the task distributed lock service node. If the task execution node does not acquire the task lock, it does not execute the current task; it executes the current task only after receiving the task lock.
Step 406, the task distributed lock service node determines whether the number of task locks remaining in the task lock resource is greater than or equal to 1; if so, step 407 is executed, and if not, step 410 is executed.
Specifically, the task instance number atom shown in the embodiment is used for counting task locks, and specific description is shown in the above steps, which is not repeated in the present step.
The number counted by the task instance number atom indicates how many task execution nodes are still needed to execute the current task.
After receiving the task lock request information, the task distributed lock service node first determines whether the number counted by the task instance number atom is greater than or equal to 1. If it is, at least one more task execution node is needed to execute the current task; if it is less than 1, no further task execution node is needed.
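The grant-and-count protocol of steps 406 to 410 and 417 can be sketched as follows; this is a minimal illustration in which a thread-safe counter stands in for the task instance number atom (all names are illustrative, not prescribed by the patent):

```python
import threading

class TaskLockResource:
    """Sketch of the task lock resource: the 'task instance number atom'
    is modeled as a counter initialized to the configured instance count."""

    def __init__(self, instance_count):
        self._count = instance_count          # one task lock per instance
        self._mutex = threading.Lock()        # makes updates atomic

    def acquire(self):
        """Grant a task lock if any remain; True on success."""
        with self._mutex:
            if self._count >= 1:
                self._count -= 1              # step 409: decrement on grant
                return True
            return False                      # step 410: allocation fails

    def release(self):
        """Handle a task lock release; step 417: increment the counter."""
        with self._mutex:
            self._count += 1

    @property
    def remaining(self):
        return self._count
```

With an instance count of 2, the third concurrent request is refused until one of the granted locks is released.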
Step 407, the task distributed lock service node determines a first target task execution node.
The following details the specific process of determining the first target task execution node by the task distributed lock service node:
Optionally, if the task distributed lock service node receives task lock request information sent by a single task execution node and the number counted by the task instance number atom is greater than or equal to 1, it may determine that the task execution node that sent the task lock request information is the first target task execution node.
Optionally, if the task distributed lock service node receives task lock request information sent by a plurality of task execution nodes, the task distributed lock service node may create an ordered list.
Wherein the ordered list includes a plurality of the task execution nodes that have sent the task lock request information.
Specifically, the ordered list orders the plurality of task execution nodes according to the sequence in which their task lock request information was received.
More specifically, the task distributed lock service node determines the task execution node with the highest priority ordered in the ordered list as the first target task execution node.
Optionally, the task distributed lock service node may also determine that a random task execution node in the ordered list is the first target task execution node.
The specific manner of determining the first target task execution node described in this embodiment is an optional example and is not limiting, as long as the first target task execution node is a task execution node that sent the task lock request information.
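A hedged sketch of the ordered list described above: requesters are queued in arrival order, and the head of the queue becomes the first target task execution node (node identifiers are illustrative; a random choice from the queue would also satisfy the embodiment):

```python
from collections import deque

class LockWaitQueue:
    """Sketch of the ordered list of step 407: task execution nodes are
    queued in the order their task lock request information arrives."""

    def __init__(self):
        self._queue = deque()

    def on_request(self, node_id):
        self._queue.append(node_id)        # ordered by arrival time

    def pick_target(self):
        # The highest-priority requester is the earliest arrival; it is
        # also removed, so a later failover (step 412) picks the next node.
        return self._queue.popleft() if self._queue else None
```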
Step 408, the task distributed lock service node sends a task lock to the first target task execution node.
The task distributed lock service node in this embodiment indicates, through the task lock, that the first target task execution node may execute the current task.
Step 409, the task distributed lock service node controls the number of task locks included in the task lock resource to be reduced by one.
Specifically, the task distributed lock service node shown in this embodiment controls the number counted by the task instance number atom to be reduced by one.
It can be seen that, as long as the task distributed lock service node in this embodiment sends a task lock to the first target task execution node, the task distributed lock service node may control the number counted by the task instance number atom to be reduced by one.
Step 410, the task distributed lock service node determines that task lock allocation fails.
Specifically, when the task distributed lock service node determines that the number of task locks remaining in the task lock resource is less than 1, all task execution nodes needed to execute the current task have already been allocated, and no new task execution node needs to be called. The task distributed lock service node therefore determines that task lock allocation fails.
Specifically, the task distributed lock service node may report the task lock allocation failure to the monitoring center.
Step 411, the task distributed lock service node determines whether a preset condition is satisfied, and if yes, step 412 is executed.
After the task distributed lock service node sends the task lock to the first target task execution node, it may start timing the target duration.
More specifically, the task distributed lock service node may be configured with a timer for counting the target duration.
The target duration is the difference between the current time and a target starting time point, and the target starting time point is the time point when the task distributed lock service node sends the task lock.
The task distributed lock service node judges whether task lock release request information sent by the first target task execution node is received within the target duration. The first target task execution node sends the task lock release request information to the task distributed lock service node after it has actually completed executing the current task, so that the task distributed lock service node can determine from the task lock release request information that the first target task execution node has completed the current task.
The preset condition is that the target duration is greater than or equal to a preset duration and no task lock release request information sent by the first target task execution node has been received within the target duration.
Optionally, a developer may input the preset duration directly to the task distributed lock service node, or send it to the configuration center 101 or the task distribution center 103; this embodiment imposes no limitation, so long as the task distributed lock service node can obtain the preset duration.
When the preset condition is met, the first target task execution node has failed to complete the current task within the expected time and is considered to be down, that is, it cannot continue to execute the current task.
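The preset condition of step 411 can be expressed as a small predicate; a sketch assuming timestamps from a monotonic clock (the function and parameter names are illustrative):

```python
import time

def preset_condition_met(lock_sent_at, release_received,
                         preset_duration, now=None):
    """True when the target duration (now minus the time the task lock was
    sent) reaches the preset duration and no task lock release request
    information arrived within it -- i.e. the node is considered down."""
    now = time.monotonic() if now is None else now
    target_duration = now - lock_sent_at
    return target_duration >= preset_duration and not release_received
```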
Step 412, the task distributed lock service node determines the second target task execution node.
In the case that the task distributed lock service node determines that the preset condition is met, a second target task execution node is determined so that the current task can continue to execute normally.
As can be seen from the foregoing, the task distributed lock service node and the task distribution center in this embodiment are synchronously configured with the configuration information, that is, the task distributed lock service node and the task distribution center are both configured with the task execution node list included in the configuration information.
When the task distributed lock service node determines that the first target task execution node in the task execution node list is down, the task distributed lock service node can determine a second target task execution node in the task execution node list.
In this embodiment, no limitation is made on how to determine the second target task execution node in the task execution node list, as long as the first target task execution node and the second target task execution node are different task execution nodes.
Optionally, the task distributed lock service node in this embodiment may determine the second target task node according to the ordered list, and specific description of the ordered list is shown in the above steps, which is not repeated in this step.
In the process of determining the first target task execution node, when the task distributed lock service node determines that the task execution node with the highest priority in the ordered list is the first target task execution node, it may delete the first target task execution node from the list. In this step, the task distributed lock service node may then determine that the task execution node now ranked with the highest priority in the ordered list is the second target task execution node.
Step 413, the task distributed lock service node controls the number of the task locks included in the task lock resource to be increased by one.
Specifically, the task distributed lock service node in this embodiment controls the number of task locks included in the task lock resource to be increased by one when determining that the preset condition is satisfied.
Step 414, the task distributed lock service node sends a task lock to the second target task execution node.
The task distributed lock service node in this embodiment sends any one of the task locks included in the task lock resource to the second target task execution node.
Step 415, the task distributed lock service node controls the number of task locks included in the task lock resource to be reduced by one.
In this embodiment, after the task distributed lock service node sends the task lock to the second target task execution node, it may control the number counted by the task instance number atom to be reduced by one.
It can be seen that, as long as the task distributed lock service node in this embodiment sends a task lock to the second target task execution node, the task distributed lock service node may control the number counted by the task instance number atom to be reduced by one.
Step 416, the task lock service node receives task lock release request information sent by the first target task execution node.
In this embodiment, the first target task execution node may determine whether the current task is executed and completed when receiving the task lock.
Optionally, the first target task execution node may periodically determine whether the current task is executed and completed when the task lock is received.
And if the first target task execution node judges that the current task execution is completed, the first target task execution node can generate the task lock release request information.
The first target task execution node may send the task lock release request information to the task distributed lock service node, so that the task distributed lock service node determines from the task lock release request information that the first target task execution node has finished executing the current task.
The first target task execution node shown in this embodiment may send status information to the task distribution center and the monitoring center, where the status information indicates the status of the first target task execution node in executing the current task, so that the monitoring center can raise corresponding alarms and observe the execution of the task according to the reported status information.
Step 417, the task distributed lock service node controls the number of task locks included in the task lock resource to be increased by one.
Specifically, when the task distributed lock service node in this embodiment receives the task lock release request information, it may be determined that the first target task execution node has executed to complete the current task, and then the task distributed lock service node in this embodiment controls the number counted by the task instance number atom to be increased by one.
In this embodiment, when all task execution nodes that executed the current task have sent task lock release request information to the task distributed lock service node, the number counted by the task instance number atom again equals the number of instances; the specific description of the number of instances is shown in the above steps and is not repeated here.
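The invariant described here — grants decrement the task instance number atom, releases increment it, and the count returns to the number of instances once every executing node has released its lock — can be checked with a minimal self-contained sketch (event names are illustrative):

```python
def run_lifecycle(instance_count, events):
    """Replay grant/release events against the task instance number atom
    and return the final count. Grants below zero would violate step 406,
    so they are rejected with an assertion."""
    count = instance_count
    for ev in events:
        if ev == "grant":
            assert count >= 1, "cannot grant: no task locks remain"
            count -= 1                       # steps 409/415
        elif ev == "release":
            count += 1                       # step 417
    return count
```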
The method has the beneficial effects that:
the task execution nodes that execute the current task are not isolated from one another: the task distributed lock service node manages each task execution node. The task distributed lock service node sends the task lock to a first target task execution node that needs to execute the current task, and the first target task execution node executes the current task only after receiving the task lock. If the task distributed lock service node has not received the task lock release request information sent by the first target task execution node after the preset duration, it determines that the first target task execution node is down and migrates the task on the first target task execution node to a second target task execution node. The global execution instances of the current task can thus be controlled, the efficiency of managing the task execution nodes is improved, and the number of instances can be adjusted through the configuration of the current task so that the task continues to execute normally.
The following describes in detail a specific flow of the method for executing the distributed timing by the task execution node in conjunction with fig. 5:
step 501, the task execution node receives configuration information sent by the task distribution center.
The configuration information includes parameter information such as the execution parameters of the current task and the current task execution time information.
Wherein the execution parameters of the current task are used to instruct the task execution node 105 how to execute the current task.
The current task execution time information is used to indicate the time of each task execution node 105 to execute the current task.
Step 502, a task execution node configures a timer.
Specifically, the task execution node shown in this embodiment configures a timer according to the current task execution time information, where the timer is used to time the start time of executing the current task.
Specifically, if the timer expires, the current task has fallen due and the task execution node needs to execute it; if the timer has not expired, the current task is not yet due and the task execution node does not need to execute it.
Step 503, the task execution node sends task lock request information to a task distributed lock service node.
If the time counted by the timer has arrived, the task execution node sends task lock request information to the task distributed lock service node; the task lock request information requests the task distributed lock service node to send a task lock, and the task lock instructs the node to execute the current task.
Step 504, the task execution node determines whether the task lock request is successful; if so, step 505 is executed, and if not, the flow returns to step 502.
In this embodiment, if the task execution node receives the task lock, the task lock request succeeded; if it does not receive the task lock, the task lock request failed.
Step 505, the task execution node executes the current task.
If the task execution node receives the task lock, it executes the current task according to the task lock.
The specific process of executing the current task by the task execution node is described in detail below.
Specifically, the task execution node shown in this embodiment may fork a child process to execute the current task.
More specifically, the child process may be created with the fork() function.
The fork () function is used to create a new process from an existing one, called a child process, and the original process called a parent process. The sub-process is used to execute the current task shown in this embodiment.
The child process obtained with the fork() function is a replica of the parent process and inherits a copy of the entire process address space from the parent, including the process context, code segment, process stack, memory information, open file descriptors, signal control settings, process priority, process group number, current working directory, root directory, resource limits, and controlling terminal; only its process ID, resource usage, and timers are unique to the child process.
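A minimal POSIX-only sketch of forking a child process to run the current task, as described above; the helper name is illustrative and error handling is reduced to an exit code:

```python
import os

def run_task_in_child(task):
    """Fork a child process (step 505) that executes the current task;
    the parent waits for it and reports success or failure."""
    pid = os.fork()
    if pid == 0:                 # child process: execute the current task
        try:
            task()
            os._exit(0)          # exit child without parent cleanup
        except Exception:
            os._exit(1)
    # parent process: reap the child and check its exit status
    _, status = os.waitpid(pid, 0)
    return os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
```

Because the child is a copy-on-write replica of the parent, a crash while executing the task is isolated to the child and surfaces only as a non-zero exit status.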
Step 506, the task execution node determines whether the current task is executed, and if yes, step 507 is executed.
And step 507, the task execution node sends task lock release request information to the task distributed lock service node.
Specifically, if the task execution node determines that the current task has been executed, it sends task lock release request information to the task distributed lock service node; the task lock release request information indicates that the current task has been completed, so that the task distributed lock service node controls the number of task locks to be increased by one according to the task lock release request information.
Step 508, the task execution node reports the status information.
Specifically, the task execution node shown in this embodiment reports the status information to the task distribution center and the monitoring center.
The status information shown in this embodiment may be used to indicate that the task execution node has successfully executed the current task.
The task execution nodes shown in this embodiment do not execute the current task upon receiving the configuration information; they first send task lock request information to the task distributed lock service node, and execute the current task only upon receiving the task lock sent by the task distributed lock service node. The task distributed lock service node can thus control the global execution instances of the current task, improving the efficiency of managing the task execution nodes.
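The fig. 5 flow (timer, lock request, conditional execution, release) can be condensed into one hedged sketch; the callables stand in for the RPCs to the task distributed lock service node and are illustrative:

```python
def node_cycle(now, fire_at, request_lock, execute, release_lock):
    """One pass of the task execution node's loop (steps 502-507)."""
    if now < fire_at:
        return "waiting"                 # step 502: timer not yet expired
    if not request_lock():               # steps 503-504: request the lock
        return "lock-denied"             # no lock -> do not run the task
    execute()                            # step 505: execute current task
    release_lock()                       # step 507: send release request
    return "done"
```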
Based on the illustration in fig. 6, the embodiment of the present invention further provides a server; specifically, fig. 6 shows the structure of the task distributed lock service node of the server in this embodiment.
the method for executing the distributed timing by the task distributed lock service node of the server shown in the embodiment is shown in the above embodiment, and details of the method for executing the distributed timing by the task distributed lock service node of the server shown in the embodiment are not described in detail in the embodiment.
The server includes:
a third receiving unit 601, configured to receive a task execution node list sent by the task distribution center, where the task execution node list includes a plurality of task execution nodes for executing the current task, the number of task execution nodes included in the task execution node list is greater than the number of task locks included in the task lock resource, and the first target task execution node is located in the task execution node list.
A first receiving unit 602, configured to receive a specified number sent by a task distribution center, where the specified number is used to indicate a number of instances that perform the current task;
A first configuration unit 603, configured to configure a task lock resource according to the specified number, where the task lock resource includes at least one task lock, and the number of task locks included in the task lock resource is equal to the number of instances.
A first determining unit 604, configured to determine a first target task execution node, where the first target task execution node is a task execution node that has received a task lock, and the task lock is configured to instruct the first target task execution node to execute a current task;
optionally, the first determining unit 604 includes:
a first receiving module 6041, configured to receive task lock request information sent by a task execution node;
a first determining module 6042, configured to determine, according to the task lock request information, whether the number of task locks remaining in the task lock resource is greater than or equal to 1;
a second determining module 6043, configured to determine the first target task execution node if the first determining module determines that the number of task locks remaining in the task lock resource is greater than or equal to 1, where the first target task execution node is a task execution node that sends the task lock request information;
A sending module 6044, configured to send any task lock of the task lock resources to the first target task execution node;
optionally, the first determining unit 604 includes:
a second receiving module 6045, configured to receive task lock request information sent by each of the plurality of task execution nodes;
a creating module 6046, configured to create an ordered list, where the ordered list includes a plurality of task execution nodes, and the ordered list is a list that orders the plurality of task execution nodes according to a sequence of the received task lock request information;
a third determining module 6047 is configured to determine a task execution node with a highest priority ranked in the ranked list as the first target task execution node.
A second configuration unit 605 is configured to control the number of task locks included in the task lock resource to be reduced by one.
A second receiving unit 606, configured to receive the task lock release request information sent by the first target task execution node;
and a third configuration unit 607, configured to control, according to the task lock release request information, the number of task locks included in the task lock resource to be increased by one.
A second determining unit 608, configured to determine, when a preset condition is met, a second target task execution node, where the first target task execution node and the second target task execution node are different task execution nodes, the preset condition is that a target time length is greater than or equal to a preset time length, task lock release request information sent by the first target task execution node is not received within the target time length, the target time length is a difference between a current time and a target start time point, the target start time point is a time point when the task lock is sent, and the task lock release request information is used to indicate that the first target task execution node has executed to complete the current task;
The second determining unit 608 is further configured to determine, in the task execution node list, the second target task execution node, where the first target task execution node and the second target task execution node are located in the task execution node list;
a fourth configuration unit 609, configured to control, if the preset condition is met, the number of task locks included in the task lock resource to be increased by one;
a first sending unit 610, configured to send the task lock to the second target task execution node, where the task lock is used to instruct the second target task execution node to execute the current task.
A fifth configuration unit 611 is configured to control the number of task locks included in the task lock resource to be reduced by one.
The description of the beneficial effects of the method for executing the distributed timing by the task distributed lock service node of the server shown in the embodiment is shown in the above embodiment, and is not repeated in the embodiment.
Based on the illustration in fig. 7, the embodiment of the present invention further provides a server, specifically, a task execution node of the server, where the task execution node of the server is configured to execute a method for distributed timing, and a detailed description of the method for executing the distributed timing by the task execution node of the server shown in the embodiment is shown in the foregoing embodiment, and is not repeated in the embodiment.
The server includes:
a first receiving unit 701, configured to receive current task execution time information sent by a task distribution center;
a configuration unit 702, configured to configure a timer according to the current task execution time information, where the timer is configured to time a start time of executing the current task;
a first sending unit 703, configured to send task lock request information to a task distributed lock service node if the time counted by the timer has arrived, where the task lock request information is used to request the task distributed lock service node to send a task lock, and the task lock is used to instruct to execute a current task;
and the second receiving unit 704 is configured to execute the current task according to the task lock if the task lock is received.
A judging unit 705, configured to judge whether the current task is executed;
and the second sending unit 706 is configured to send task lock release request information to the task distributed lock service node if the judging unit judges that the current task is completed, where the task lock release request information is used to indicate that the current task is completed, so that the task distributed lock service node controls the number of task locks to be increased by one according to the task lock release request information.
The description of the beneficial effects of the method for executing the distributed timing by the task execution node in this embodiment is shown in the above embodiment, and is not repeated in this embodiment.
Based on the server shown in fig. 2, one or more programs are stored in the memory 204, where the one or more programs include instructions that, when executed by the server, cause the server to perform the method for distributed timing as shown in the foregoing embodiment, and the detailed implementation procedure is shown in the foregoing embodiment, and is not repeated in this embodiment.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (11)

1. A method of distributed timing, comprising:
receiving a designated number sent by a task distribution center, wherein the designated number is used for indicating the number of instances for executing a current task;
configuring task lock resources according to the designated number, wherein the task lock resources comprise at least one task lock, and the number of the task locks included in the task lock resources is equal to the number of the instances;
receiving a task execution node list sent by the task distribution center, wherein the task execution node list comprises a plurality of task execution nodes for executing the current task, the number of the task execution nodes included in the task execution node list is larger than the number of the task locks included in the task lock resources, and the number of the task execution nodes included in the task execution node list is larger than the maximum number of instances required for executing the current task;
determining a first target task execution node, wherein the first target task execution node is a task execution node that has received a task lock, the task lock is used for instructing the first target task execution node to execute the current task, and the first target task execution node is included in the task execution node list;
determining, when a preset condition is met, a second target task execution node in the task execution node list, wherein the first target task execution node and the second target task execution node are different task execution nodes; the preset condition is that a target time length is greater than or equal to a preset time length and no task lock release request information sent by the first target task execution node is received within the target time length, the target time length is the difference between the current time and a target starting time point, the target starting time point is the time point at which the task lock was sent, and the task lock release request information is used for indicating that the first target task execution node has completed executing the current task; satisfaction of the preset condition indicates that the first target task execution node in the task execution node list is down;
and sending the task lock to the second target task execution node, wherein the task lock is used for instructing the second target task execution node to execute the current task.
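The lock-service behavior described in claim 1 can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class, method, and variable names are all assumptions introduced for the sketch, and real deployments would need persistence and concurrency control.

```python
import time

class TaskLockService:
    """Illustrative sketch of the task distributed lock service in claim 1:
    one task lock per instance, timeout-based detection of a down holder,
    and reassignment of the lock to another node in the list."""

    def __init__(self, instance_count, preset_duration):
        self.remaining_locks = instance_count   # task lock resource: lock count == instance count
        self.preset_duration = preset_duration  # preset time length before a holder is presumed down
        self.holders = {}                       # node -> target starting time point (when lock was sent)

    def grant(self, node):
        """Send a task lock to a requesting node if any lock remains."""
        if self.remaining_locks >= 1:
            self.remaining_locks -= 1
            self.holders[node] = time.time()
            return True
        return False

    def release(self, node):
        """Handle task lock release request information: the node finished the task."""
        if node in self.holders:
            del self.holders[node]
            self.remaining_locks += 1

    def reassign_expired(self, node_list):
        """If a holder exceeded the preset duration without releasing its lock,
        treat it as down and send its lock to a second target node."""
        now = time.time()
        for node, sent_at in list(self.holders.items()):
            if now - sent_at >= self.preset_duration:
                del self.holders[node]          # first target node presumed down
                self.remaining_locks += 1       # lock count increases by one (claim 5)
                for candidate in node_list:
                    if candidate != node and candidate not in self.holders:
                        self.grant(candidate)   # lock count decreases by one again
                        break
```

A timed-out lock is thus recycled through the same grant path that served the original request, which keeps the invariant that granted locks plus remaining locks always equal the configured instance count.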
2. The method of claim 1, wherein the determining a first target task execution node comprises:
receiving task lock request information sent by a task execution node;
determining, according to the task lock request information, whether the number of task locks remaining in the task lock resources is greater than or equal to 1;
if the number of task locks remaining in the task lock resources is greater than or equal to 1, determining the first target task execution node, wherein the first target task execution node is the task execution node that sent the task lock request information; and
sending any task lock in the task lock resources to the first target task execution node;
wherein the method further comprises:
controlling the number of the task locks included in the task lock resources to decrease by one.
3. The method of claim 2, wherein the determining a first target task execution node comprises:
receiving task lock request information sent by each of a plurality of task execution nodes;
creating an ordered list, wherein the ordered list comprises the plurality of task execution nodes ordered according to the sequence in which their task lock request information was received; and
determining the task execution node with the highest priority in the ordered list as the first target task execution node.
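Claims 2 and 3 together describe first-come-first-served granting from a finite lock pool. A minimal sketch, under the assumption that arrival order equals priority (the function and variable names are illustrative, not from the patent):

```python
from collections import deque

def grant_in_request_order(requests, lock_count):
    """Order requesting nodes by arrival of their task lock request
    information and grant the remaining locks head-first."""
    ordered = deque(requests)        # the ordered list of claim 3: arrival order = priority
    granted = []
    remaining = lock_count
    while ordered and remaining >= 1:
        node = ordered.popleft()     # highest-priority (earliest) requester
        granted.append(node)         # send any one lock from the task lock resources
        remaining -= 1               # lock count decreases by one (claim 2)
    return granted, remaining
```

Nodes left in the queue after the locks run out simply receive no lock and therefore never execute the task, which is how the scheme caps the number of concurrently executing instances.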
4. The method according to claim 1, wherein the method further comprises:
receiving the task lock release request information sent by the first target task execution node; and
controlling, according to the task lock release request information, the number of the task locks included in the task lock resources to increase by one.
5. The method according to claim 1, wherein the method further comprises:
if the preset condition is met, controlling the number of the task locks included in the task lock resources to increase by one;
wherein the determining a second target task execution node comprises:
determining the second target task execution node in the task execution node list, wherein the first target task execution node and the second target task execution node are both included in the task execution node list; and
wherein, after the task lock is sent to the second target task execution node, the method further comprises:
controlling the number of the task locks included in the task lock resources to decrease by one.
6. A method of distributed timing, comprising:
receiving current task execution time information sent by a task distribution center;
configuring a timer according to the current task execution time information, wherein the timer is used for timing the starting time for executing the current task;
if the time counted by the timer expires, sending task lock request information to a task distributed lock service node, wherein the task lock request information is used for requesting the task distributed lock service node to send a task lock, and the task lock is used for indicating that the current task is to be executed;
if the task lock is received, executing the current task according to the task lock;
determining whether the current task has been completed; and
if it is determined that the current task has been completed, sending task lock release request information to the task distributed lock service node, wherein the task lock release request information is used for indicating that the current task has been completed, so that the task distributed lock service node controls the number of task locks to increase by one according to the task lock release request information;
wherein the task distributed lock service node and the task distribution center are both configured with a task execution node list, the task execution node list comprises a plurality of task execution nodes for executing the current task, the number of the task execution nodes included in the task execution node list is greater than the number of the task locks included in task lock resources, and the number of the task execution nodes included in the task execution node list is greater than the maximum number of instances required for executing the current task; and the task lock resources include at least one task lock, and the number of task locks included in the task lock resources is equal to the number of instances.
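The execution-node side of claim 6 (timer fires, request a lock, run only if granted, then release) can be sketched as below. The `lock_service` object with `grant`/`release` methods is an assumed interface standing in for the task distributed lock service node; all names are illustrative.

```python
import threading

def run_task_node(lock_service, node_id, delay, do_task):
    """Illustrative sketch of a task execution node per claim 6: a timer
    configured from the task execution time information fires after `delay`
    seconds, the node requests a task lock, executes the current task only
    if the lock was granted, and finally sends a release request."""
    def on_timer():
        if lock_service.grant(node_id):        # task lock request information
            try:
                do_task()                      # execute the current task
            finally:
                lock_service.release(node_id)  # task lock release request information

    t = threading.Timer(delay, on_timer)       # timer for the configured start time
    t.start()
    return t
```

Because every node fires its own timer but only lock holders proceed, the number of nodes that actually run the task never exceeds the configured lock count, even though all nodes in the list wake up.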
7. A server, comprising:
a first receiving unit configured to receive a specified number sent by a task distribution center, the specified number being used to indicate a number of instances in which a current task is executed;
a first configuration unit, configured to configure a task lock resource according to the specified number, where the task lock resource includes at least one task lock, and the number of task locks included in the task lock resource is equal to the number of instances;
a third receiving unit, configured to receive the task execution node list sent by the task distribution center, wherein the task execution node list comprises a plurality of task execution nodes for executing the current task, the number of the task execution nodes included in the task execution node list is larger than the number of the task locks included in the task lock resources, and the number of the task execution nodes included in the task execution node list is larger than the maximum number of instances required for executing the current task;
a first determining unit, configured to determine a first target task execution node, wherein the first target task execution node is a task execution node that has received a task lock, the task lock is used for instructing the first target task execution node to execute the current task, and the first target task execution node is included in the task execution node list;
a second determining unit, configured to determine a second target task execution node in the task execution node list if a preset condition is satisfied, wherein the first target task execution node and the second target task execution node are different task execution nodes; the preset condition is that a target time length is greater than or equal to a preset time length and no task lock release request information sent by the first target task execution node is received within the target time length, the target time length is the difference between the current time and a target starting time point, the target starting time point is the time point at which the task lock was sent, and the task lock release request information is used for indicating that the first target task execution node has completed executing the current task; satisfaction of the preset condition indicates that the first target task execution node in the task execution node list is down; and
a first sending unit, configured to send the task lock to the second target task execution node, wherein the task lock is used for instructing the second target task execution node to execute the current task.
8. A server, comprising:
a first receiving unit, configured to receive current task execution time information sent by a task distribution center;
a configuration unit, configured to configure a timer according to the current task execution time information, wherein the timer is used for timing the starting time for executing the current task;
a first sending unit, configured to send task lock request information to a task distributed lock service node if the time counted by the timer expires, wherein the task lock request information is used for requesting the task distributed lock service node to send a task lock, and the task lock is used for indicating that the current task is to be executed;
a second receiving unit, configured to execute the current task according to the task lock if the task lock is received, wherein the task distributed lock service node and the task distribution center are both configured with a task execution node list, the task execution node list comprises a plurality of task execution nodes for executing the current task, the number of the task execution nodes included in the task execution node list is greater than the number of the task locks included in task lock resources, and the number of the task execution nodes included in the task execution node list is greater than the maximum number of instances required for executing the current task; and the task lock resources include at least one task lock, and the number of the task locks included in the task lock resources is equal to the number of instances;
a determining unit, configured to determine whether the current task has been completed; and
a second sending unit, configured to send task lock release request information to the task distributed lock service node if the determining unit determines that the current task has been completed, wherein the task lock release request information is used for indicating that the current task has been completed, so that the task distributed lock service node controls the number of the task locks to increase by one according to the task lock release request information.
9. A distributed timing system, comprising: the system comprises a configuration center, a monitoring center, a task distribution center, a task distributed lock service node and a plurality of task execution nodes;
the configuration center is used for sending configuration information to the task distribution center, the configuration information comprises current task execution time information and a task execution node list, and the task execution node list comprises a plurality of task execution nodes for executing the current task;
the task distribution center is used for sending the configuration information to the task distributed lock service node and the plurality of task execution nodes;
the monitoring center is used for monitoring the task distributed lock service node and the plurality of task execution nodes;
the task distributed lock service node is configured to perform the distributed timing method according to any one of claims 1 to 5, and each task execution node is configured to perform the distributed timing method according to claim 6.
10. A server, comprising:
one or more processors, a memory, a bus system, and one or more programs, the processors and the memory being connected by the bus system;
wherein the one or more programs are stored in the memory, the one or more programs comprising instructions, which when executed by the server, cause the server to perform the method of any of claims 1-5.
11. A server, comprising:
one or more processors, a memory, a bus system, and one or more programs, the processors and the memory being connected by the bus system;
wherein the one or more programs are stored in the memory, the one or more programs comprising instructions, which when executed by the server, cause the server to perform the method of claim 6.
CN201710240842.8A 2017-04-13 2017-04-13 Distributed timing method, server and system Active CN108733459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710240842.8A CN108733459B (en) 2017-04-13 2017-04-13 Distributed timing method, server and system


Publications (2)

Publication Number Publication Date
CN108733459A CN108733459A (en) 2018-11-02
CN108733459B true CN108733459B (en) 2023-07-14

Family

ID=63925057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710240842.8A Active CN108733459B (en) 2017-04-13 2017-04-13 Distributed timing method, server and system

Country Status (1)

Country Link
CN (1) CN108733459B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558234B (en) * 2018-11-30 2021-06-04 中国联合网络通信集团有限公司 Timed task scheduling method and device
CN110134505A (en) * 2019-05-15 2019-08-16 湖南麒麟信安科技有限公司 A kind of distributed computing method of group system, system and medium
CN111027196B (en) * 2019-12-03 2023-04-28 南方电网科学研究院有限责任公司 Simulation analysis task processing method and device for power equipment and storage medium
CN112148445A (en) * 2020-09-09 2020-12-29 倍智智能数据运营有限公司 Distributed task scheduling method based on big data technology
CN112414473A (en) * 2020-12-04 2021-02-26 合肥科博软件技术有限公司 Method and system for performing point inspection on equipment
CN112598529B (en) * 2020-12-15 2023-08-29 泰康保险集团股份有限公司 Data processing method and device, computer readable storage medium and electronic equipment
CN112596915A (en) * 2020-12-26 2021-04-02 中国农业银行股份有限公司 Distributed lock scheduling method, device, equipment and medium
CN114257591B (en) * 2021-12-16 2024-08-20 富盛科技股份有限公司 Weak-centralised distributed system networking method and system
CN114221863B (en) * 2022-02-22 2022-05-24 湖南云畅网络科技有限公司 Intelligent node election method for distributed cluster

Citations (2)

Publication number Priority date Publication date Assignee Title
CN105677468A (en) * 2016-01-06 2016-06-15 北京京东尚科信息技术有限公司 Cache and designing method thereof and scheduling method and scheduling device using cache
CN106126332A (en) * 2016-06-27 2016-11-16 北京京东尚科信息技术有限公司 Distributed timing task scheduling system and method

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US7050940B2 (en) * 2004-03-17 2006-05-23 International Business Machines Corporation Method and system for maintaining and examining timers for network connections
CN102521044B (en) * 2011-12-30 2013-12-25 北京拓明科技有限公司 Distributed task scheduling method and system based on messaging middleware
CN103761148B (en) * 2014-01-26 2017-04-05 北京京东尚科信息技术有限公司 The control method of cluster timer-triggered scheduler task
CN103744724A (en) * 2014-02-19 2014-04-23 互联网域名系统北京市工程研究中心有限公司 Timed task clustering method and device thereof
CN105100259B (en) * 2015-08-18 2018-02-16 北京京东尚科信息技术有限公司 A kind of distributed timing task executing method and system
CN105700937A (en) * 2016-01-04 2016-06-22 北京百度网讯科技有限公司 Multi-thread task processing method and device


Non-Patent Citations (1)

Title
Design and Implementation of a Data Resource Access System for Partner Websites; Chen Shiyu; China Masters' Theses Full-text Database, Information Science and Technology; I138-474 *

Also Published As

Publication number Publication date
CN108733459A (en) 2018-11-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TG01 Patent term adjustment