CN117539642B - Credit card distributed scheduling platform and scheduling method - Google Patents

Credit card distributed scheduling platform and scheduling method

Info

Publication number
CN117539642B
Authority
CN
China
Prior art keywords
task
target
executor
scheduling
executed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410027683.3A
Other languages
Chinese (zh)
Other versions
CN117539642A
Inventor
王鹏
周成鹏
赵鑫
李辉辉
毛晓峰
崔广超
高振南
韦双双
赵怡彬
张俊阳
陈玉杰
王翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Chenqin Information Technology Service Co ltd
Original Assignee
Shanghai Chenqin Information Technology Service Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Chenqin Information Technology Service Co ltd
Priority to CN202410027683.3A
Publication of CN117539642A
Application granted
Publication of CN117539642B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02Banking, e.g. interest calculation or account maintenance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a credit card distributed scheduling platform and a scheduling method, relating to the technical field of credit card management. A master scheduling node determines a primary task to be executed and sends the primary task to a target slave scheduling node for execution. The target slave scheduling node determines a current secondary task from the at least one secondary task to be executed, determines the target execution domain corresponding to the current secondary task, determines the target master executor corresponding to that execution domain, and sends the current secondary task to the target master executor. The target master executor determines a first target executor and sends the current secondary task to the first target executor for execution. The highly available, decentralized design of the scheduling center further ensures the stability and security of banking business.

Description

Credit card distributed scheduling platform and scheduling method
Technical Field
The application relates to the technical field of credit card management, in particular to a credit card distributed scheduling platform and a scheduling method.
Background
In enterprise production, massive volumes of business data need to be processed. For example, banking businesses often need to perform batch operations, such as the bank's day-end batch run and year-end settlement. For such massive business processing, redundancy design provides a degree of support for system stability and business continuity.
Existing scheduling systems are typically implemented either as centralized scheduling or as scheduling based on a ZooKeeper cluster.
In a centralized scheduling system, one large central system performs all scheduling tasks while its terminals act only as clients used for input and output. However, in the course of implementing the present invention, the inventors found two drawbacks of centralized scheduling: on the one hand, the scheduling information of all data processing tasks is collected on one management node (the central system), causing congestion of the information flow; on the other hand, a failure of that management node affects the normal operation of the entire scheduling system.
For scheduling based on a ZooKeeper cluster, a distributed lock generally has to be set for each scheduling task; the scheduling nodes in the ZooKeeper cluster compete for the distributed lock, and the scheduling center that wins the lock is responsible for executing the corresponding scheduling task. This makes the scheduling nodes highly available, but the processing performance of scheduling tasks is limited by the distributed lock, so task processing efficiency is low.
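For illustration only (not part of the prior-art systems themselves), the following minimal Python sketch uses a local threading.Lock as a stand-in for a ZooKeeper distributed lock; the node and task names are hypothetical and this is not the ZooKeeper API. It shows why per-task lock competition serializes dispatching: only the node that wins the lock does useful work for that task.

```python
import threading
import time

# Local stand-in for a distributed lock guarding one scheduling task;
# a real deployment would use a ZooKeeper lock, this only mimics the contention.
task_lock = threading.Lock()
dispatched_by = []

def try_dispatch(node_name: str, task_id: str) -> None:
    # Every scheduling node races for the same lock; only the winner
    # executes the scheduling task, the losers do no useful work for it.
    if task_lock.acquire(blocking=False):
        try:
            time.sleep(0.05)              # simulate the scheduling work
            dispatched_by.append((task_id, node_name))
        finally:
            task_lock.release()

threads = [threading.Thread(target=try_dispatch, args=(f"node-{i}", "task-42"))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(dispatched_by)                      # typically a single winner per task
```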
Disclosure of Invention
An object of embodiments of the present application is to provide a credit card distributed scheduling platform and a scheduling method, which are used to alleviate the problems of low availability and low efficiency in the prior art.
In a first aspect, the present invention provides a scheduling method based on a credit card distributed scheduling platform, the credit card distributed scheduling platform comprising at least one scheduling center and at least one execution domain; each scheduling center comprises a plurality of scheduling nodes, including a master scheduling node and slave scheduling nodes, and each execution domain comprises a plurality of executors, including a master executor and slave executors; the method comprises the following steps:
determining, by the master scheduling node, a primary task to be executed, wherein the primary task comprises at least one secondary task and is used to control the secondary tasks to execute in sequence, the at least one secondary task may comprise a normal secondary task and a special secondary task, and the special secondary task comprises a plurality of secondary sub-tasks (a sketch of this task hierarchy follows these steps);
making, by the master scheduling node, a decision based on information such as the task load conditions of the current slave scheduling nodes, and determining a target slave scheduling node; sending the primary task to be executed to the target slave scheduling node for execution;
receiving, by the target slave scheduling node, the primary task to be executed, and determining a current secondary task from the at least one secondary task to be executed; the scheduling node maintains a first correspondence between secondary tasks and execution domains and a second correspondence between execution domains and master executors;
determining, by the target slave scheduling node, a target execution domain corresponding to the current secondary task based on the first correspondence, and determining a target master executor corresponding to the target execution domain based on the second correspondence; sending the current secondary task to the target master executor;
receiving, by the target master executor, the current secondary task, and making a decision based on information such as the task load conditions of the executors in the current execution domain to determine a first target executor; sending the current secondary task to the first target executor for execution;
when the current secondary task is a special secondary task, determining, by the first target executor, a target secondary sub-task among the plurality of secondary sub-tasks corresponding to the current secondary task, and determining a second target executor corresponding to the target secondary sub-task based on the first correspondence;
and executing the target secondary sub-task by the second target executor, and updating the execution status of the task in an execution domain database.
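As referenced above, the following is a minimal, illustrative Python sketch of the task hierarchy (primary task, normal and special secondary tasks, secondary sub-tasks); the class and field names (PrimaryTask, SecondaryTask, sub_tasks) are assumptions made for illustration and do not appear in the claims.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SecondaryTask:
    """A secondary task; a 'special' secondary task carries secondary sub-tasks."""
    name: str
    sub_tasks: List[str] = field(default_factory=list)

    @property
    def is_special(self) -> bool:
        # A special secondary task is one split into secondary sub-tasks.
        return len(self.sub_tasks) > 0

@dataclass
class PrimaryTask:
    """A primary task controls its secondary tasks to execute in sequence."""
    name: str
    secondary_tasks: List[SecondaryTask] = field(default_factory=list)

# Example: a day-end batch job with one normal and one special secondary task.
day_end = PrimaryTask(
    name="daily-settlement",
    secondary_tasks=[
        SecondaryTask(name="interest-accrual"),                      # normal
        SecondaryTask(name="statement-generation",
                      sub_tasks=["shard-0", "shard-1", "shard-2"]),  # special
    ],
)
for t in day_end.secondary_tasks:          # executed strictly in order
    print(t.name, "special" if t.is_special else "normal")
```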
In an alternative embodiment, the method further comprises:
when the current secondary task is a normal secondary task, executing the current secondary task directly by the first target executor, and updating the execution status of the task in the execution domain database.
In an alternative embodiment, the method further comprises:
and distributing, by the target slave scheduling node, the secondary tasks based on the primary task to be executed and maintaining the task execution state until the primary task to be executed has completed.
In an alternative embodiment, the method further comprises:
and periodically reading, by the target master executor, the task execution status from the execution domain database, and sending the task execution status to the corresponding slave scheduling node.
In an alternative embodiment, the method further comprises:
the scheduling center executes a Raft algorithm and elects one master scheduling node and one or more slave scheduling nodes from the plurality of scheduling nodes; the execution domain executes a Raft algorithm and elects one master executor and one or more slave executors from the plurality of executors; the master executor registers the domain information of the corresponding execution domain with the scheduling center, wherein the domain information comprises information about the master executor.
In an alternative embodiment, each scheduling center includes an odd number of scheduling nodes, and each execution domain includes an odd number of executors.
In an alternative embodiment, the scheduling center and the execution domain execute the Raft algorithm when the system starts; or the scheduling center or the execution domain executes the Raft algorithm when a preset trigger time is reached; the trigger time comprises periodic triggering, random triggering, or triggering by a node abnormality.
In a second aspect, the present invention provides a credit card distributed scheduling platform comprising at least one scheduling center and at least one execution domain; each scheduling center comprises a plurality of scheduling nodes, including a master scheduling node and slave scheduling nodes, and each execution domain comprises a plurality of executors, including a master executor and slave executors;
the master scheduling node is configured to determine a primary task to be executed, wherein the primary task comprises at least one secondary task and is used to control the secondary tasks to execute in sequence, the at least one secondary task may comprise a normal secondary task and a special secondary task, and the special secondary task comprises a plurality of secondary sub-tasks; to make a decision based on information such as the task load conditions of the current slave scheduling nodes and determine a target slave scheduling node; and to send the primary task to be executed to the target slave scheduling node for execution;
the target slave scheduling node is configured to receive the primary task to be executed and determine a current secondary task from the at least one secondary task to be executed; the scheduling node maintains a first correspondence between secondary tasks and execution domains and a second correspondence between execution domains and master executors; the target slave scheduling node determines a target execution domain corresponding to the current secondary task based on the first correspondence, determines a target master executor corresponding to the target execution domain based on the second correspondence, and sends the current secondary task to the target master executor;
the target master executor is configured to receive the current secondary task, make a decision based on information such as the task load conditions of the executors in the current execution domain, determine a first target executor, and send the current secondary task to the first target executor for execution;
the first target executor is configured to determine, when the current secondary task is a special secondary task, a target secondary sub-task among the plurality of secondary sub-tasks corresponding to the current secondary task, and to determine, based on the first correspondence, a second target executor corresponding to the target secondary sub-task;
the second target executor is configured to execute the target secondary sub-task and update the execution status of the task in an execution domain database.
In an alternative embodiment:
the first target executor is further configured to execute the current secondary task directly when the current secondary task is a normal secondary task, and to update the execution status of the task in the execution domain database.
In an alternative embodiment:
the target slave scheduling node is further configured to distribute the secondary tasks based on the primary task to be executed and to maintain the task execution state until the primary task to be executed has completed.
In an alternative embodiment:
the target master executor is further configured to periodically read the task execution status from the execution domain database and send the task execution status to the corresponding slave scheduling node.
In an alternative embodiment:
the scheduling center is configured to execute a Raft algorithm and elect one master scheduling node and one or more slave scheduling nodes from the plurality of scheduling nodes; the execution domain executes a Raft algorithm and elects one master executor and one or more slave executors from the plurality of executors; the master executor registers the domain information of the corresponding execution domain with the scheduling center, wherein the domain information comprises information about the master executor.
In an alternative embodiment, each scheduling center includes an odd number of scheduling nodes, and each execution domain includes an odd number of executors.
In an alternative embodiment, the scheduling center and the execution domain execute the Raft algorithm when the system starts; or the scheduling center or the execution domain executes the Raft algorithm when a preset trigger time is reached; the trigger time comprises periodic triggering, random triggering, or triggering by a node abnormality.
In a third aspect, the present invention provides a server comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor implements the method according to any of the preceding embodiments when executing the computer program.
In a fourth aspect, the present invention provides a computer readable storage medium storing computer executable instructions which, when invoked and executed by a processor, cause the processor to perform the method of any of the preceding embodiments.
The invention provides a credit card distributed scheduling platform and a scheduling method. A primary task to be executed is determined by the master scheduling node, wherein the primary task comprises at least one secondary task and is used to control the secondary tasks to execute in sequence; the at least one secondary task may comprise a normal secondary task and a special secondary task, and the special secondary task comprises a plurality of secondary sub-tasks. The master scheduling node makes a decision based on information such as the task load conditions of the current slave scheduling nodes, determines a target slave scheduling node, and sends the primary task to be executed to the target slave scheduling node for execution. The target slave scheduling node receives the primary task to be executed and determines a current secondary task from the at least one secondary task to be executed; the scheduling node maintains a first correspondence between secondary tasks and execution domains and a second correspondence between execution domains and master executors. The target slave scheduling node determines a target execution domain corresponding to the current secondary task based on the first correspondence, determines a target master executor corresponding to the target execution domain based on the second correspondence, and sends the current secondary task to the target master executor. The target master executor receives the current secondary task, makes a decision based on information such as the task load conditions of the executors in the current execution domain, determines a first target executor, and sends the current secondary task to the first target executor for execution. The highly available, decentralized design of the scheduling center further ensures the stability and security of banking business. At the same time, high availability of both the scheduling center and the executors is achieved, and the lock-free design greatly improves the throughput of scheduling task execution.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be regarded as limiting the scope; other related drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1 shows a schematic structural diagram of a credit card distributed scheduling platform according to an embodiment of the present application;
Fig. 2 is a schematic diagram of another credit card distributed scheduling platform according to an embodiment of the present application;
Fig. 3 is a schematic diagram of another credit card distributed scheduling platform according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a credit card distributed scheduling platform according to an embodiment of the present application;
Fig. 5 is a schematic flow chart of a scheduling method based on a credit card distributed scheduling platform according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
As shown in Fig. 1, a distributed scheduling platform applicable to the present application includes at least one scheduling center and at least one execution domain; each scheduling center includes a plurality of scheduling nodes and, optionally, the number of scheduling nodes in each scheduling center is odd; as shown in Fig. 3, each execution domain includes a plurality of executors and, optionally, each execution domain includes an odd number of executors.
When the system starts or a preset trigger time is reached, the scheduling nodes in the scheduling center can execute a Raft algorithm to elect a master scheduling node and one or more slave scheduling nodes; the executors in the execution domain can likewise execute a Raft algorithm to elect a master executor and one or more slave executors; the master executor registers the domain information of the corresponding execution domain with the scheduling center, wherein the domain information comprises information about the master executor;
the preset trigger time may be determined based on a preset period or determined randomly; alternatively, when a slave node finds that the master node is abnormal, the abnormal node is removed from the cluster, a new master node and one or more slave nodes are elected by re-executing the Raft algorithm on the remaining cluster, and the new master node performs scheduling optimization based on the tasks currently to be executed and the tasks in execution.
For example, as shown in Fig. 2, a scheduling center may include three scheduling nodes A, B and C. When the system starts or a preset trigger time is reached, scheduling nodes A, B and C execute the Raft algorithm; after one or more rounds of voting, scheduling nodes B and C vote for scheduling node A, scheduling node A obtains enough votes to satisfy the election condition and becomes the master scheduling node, and scheduling nodes B and C become slave scheduling nodes.
As another example, as shown in Fig. 3, an execution domain may include three executors A, B and C. When the system starts or a preset trigger time is reached, executors A, B and C execute the Raft algorithm; after one or more rounds of voting, executors B and C vote for executor A, executor A obtains enough votes to satisfy the election condition and becomes the master executor, and executors B and C become slave executors.
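The majority-vote outcome of the elections illustrated in Figs. 2 and 3 can be sketched as follows; this is not a full Raft implementation, and the function name elect_leader and the vote layout are illustrative assumptions. A strict majority is required, which is one reason each scheduling center and execution domain preferably contains an odd number of members.

```python
from collections import Counter
from typing import Dict, Optional

def elect_leader(votes: Dict[str, str]) -> Optional[str]:
    """Return the candidate that holds a strict majority of votes, if any.

    votes maps voter -> candidate. With an odd number of members a strict
    majority is always possible, so repeated rounds converge on one leader.
    """
    tally = Counter(votes.values())
    candidate, count = tally.most_common(1)[0]
    quorum = len(votes) // 2 + 1
    return candidate if count >= quorum else None  # None => another voting round

# One round of voting among scheduling nodes A, B and C (cf. Fig. 2):
round_1 = {"A": "A", "B": "A", "C": "A"}
print(elect_leader(round_1))   # -> "A" becomes the master scheduling node
```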
The overall structure may be as shown in Fig. 4. The credit card distributed scheduling platform may include at least one scheduling center and at least one execution domain; each scheduling center comprises a plurality of scheduling nodes, including a master scheduling node and slave scheduling nodes, and each execution domain comprises a plurality of executors, including a master executor and slave executors. The number of scheduling nodes in each scheduling center is odd, and each execution domain comprises an odd number of executors.
Specifically, the master scheduling node is configured to determine a primary task to be executed, where the primary task includes at least one secondary task and is used to control the secondary tasks to execute in sequence; the at least one secondary task may include a normal secondary task and a special secondary task, and the special secondary task includes a plurality of secondary sub-tasks. The master scheduling node makes a decision based on information such as the task load conditions of the current slave scheduling nodes, determines a target slave scheduling node, and sends the primary task to be executed to the target slave scheduling node for execution. When making this decision, the slave scheduling node with the lowest current load can be selected as the target slave scheduling node. The master scheduling node needs to maintain the load condition of each slave scheduling node in real time, which can be done through a database from which the master scheduling node acquires the current load condition of each slave scheduling node; each slave scheduling node may also periodically synchronize its current load condition to the database.
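A minimal sketch of this load-based selection follows, assuming a hypothetical in-memory load table that stands in for the load records read from the database; the node names and load figures are illustrative only.

```python
from typing import Dict

def pick_target_slave(load_by_node: Dict[str, int]) -> str:
    """Choose the slave scheduling node with the lowest current task load.

    load_by_node mirrors the load records that each slave scheduling node
    periodically synchronizes to the shared database.
    """
    return min(load_by_node, key=load_by_node.get)

# Example: slave-2 is the least loaded, so it becomes the target slave node.
current_loads = {"slave-1": 12, "slave-2": 3, "slave-3": 7}
print(pick_target_slave(current_loads))   # -> "slave-2"
```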
The target slave scheduling node is configured to receive the primary task to be executed and determine a current secondary task from the at least one secondary task to be executed. The scheduling node maintains a first correspondence between secondary tasks and execution domains and a second correspondence between execution domains and master executors; the target slave scheduling node determines a target execution domain corresponding to the current secondary task based on the first correspondence, determines a target master executor corresponding to the target execution domain based on the second correspondence, and sends the current secondary task to the target master executor.
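The two correspondences can be pictured as two lookup tables, sketched below with hypothetical task, domain, and executor names; the first maps a secondary task to its execution domain, and the second maps that domain to its currently registered master executor.

```python
# First correspondence: secondary task -> execution domain.
task_to_domain = {
    "interest-accrual": "domain-accounting",
    "statement-generation": "domain-printing",
}
# Second correspondence: execution domain -> current master executor
# (refreshed whenever a domain re-elects its master and re-registers).
domain_to_master = {
    "domain-accounting": "executor-A",
    "domain-printing": "executor-D",
}

def route_secondary_task(task_name: str) -> str:
    """Resolve the target master executor for a secondary task."""
    target_domain = task_to_domain[task_name]        # lookup via first correspondence
    return domain_to_master[target_domain]           # lookup via second correspondence

print(route_secondary_task("statement-generation"))  # -> "executor-D"
```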
the target main executor is used for receiving the current secondary task, making a decision based on information such as task load conditions of the executors in the current execution domain, and determining a first target executor; and sending the current secondary task to the first target executor for execution. The master executor needs to maintain the load condition of each slave executor in real time, and can maintain the load condition through a database, and the master executor can acquire the current load condition of each slave executor from the database. Each slave actuator may also periodically synchronize the current load conditions to the database.
The first target executor is configured to determine, when the current secondary task is a special secondary task, a target secondary sub-task in a plurality of secondary sub-tasks corresponding to the current secondary task, and determine, based on the first correspondence, a second target executor corresponding to the target secondary sub-task;
the second target executor is used for executing the target secondary subtask and updating the execution condition of the task in an execution domain database.
And the first target executor is used for directly executing the current secondary task when the current secondary task is a common secondary task, and updating the execution condition of the task in an execution domain database.
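The branch between a normal and a special secondary task on the first target executor might look like the following sketch; the helper names (route_sub_task, dispatch) and the in-memory stand-in for the execution domain database are illustrative assumptions, not the platform's actual interfaces.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SecondaryTask:
    name: str
    sub_tasks: List[str] = field(default_factory=list)

    @property
    def is_special(self) -> bool:
        return bool(self.sub_tasks)        # special tasks carry sub-tasks

execution_domain_db = {}                   # stand-in: task name -> execution status

def route_sub_task(sub_task: str) -> str:
    """Resolve a secondary sub-task to its second target executor
    (stand-in for the lookup through the first correspondence)."""
    return {"shard-0": "executor-B", "shard-1": "executor-C"}.get(sub_task, "executor-B")

def dispatch(task: SecondaryTask) -> None:
    if not task.is_special:                              # normal secondary task
        execution_domain_db[task.name] = "executed locally"
        return
    for sub in task.sub_tasks:                           # special secondary task
        execution_domain_db[sub] = f"sent to {route_sub_task(sub)}"

dispatch(SecondaryTask("interest-accrual"))
dispatch(SecondaryTask("statement-generation", ["shard-0", "shard-1"]))
print(execution_domain_db)
```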
In some embodiments, the target slave scheduling node is further configured to distribute the secondary tasks based on the primary task to be executed and to maintain the task execution state until the primary task to be executed has completed.
The target master executor is configured to periodically read the task execution status from the execution domain database and send the task execution status to the corresponding slave scheduling node.
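The periodic report described above could be sketched as follows, where the polling interval and the read/send callables are illustrative assumptions rather than the platform's actual interfaces; a real deployment would run this as a long-lived background loop on the master executor.

```python
import time
from typing import Callable, Dict

def report_loop(read_domain_db: Callable[[], Dict[str, str]],
                send_to_slave_node: Callable[[Dict[str, str]], None],
                interval_seconds: float = 30.0,
                rounds: int = 3) -> None:
    """Periodically read task statuses from the execution-domain database
    and push them to the corresponding slave scheduling node."""
    for _ in range(rounds):                  # bounded here; a daemon would loop forever
        statuses = read_domain_db()
        send_to_slave_node(statuses)
        time.sleep(interval_seconds)

# Example wiring with in-memory stand-ins for the database and the report call.
fake_db = {"statement-generation": "running", "interest-accrual": "done"}
report_loop(read_domain_db=lambda: dict(fake_db),
            send_to_slave_node=lambda s: print("report ->", s),
            interval_seconds=0.1, rounds=2)
```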
In some embodiments, the scheduling center is configured to execute a Raft algorithm and elect one master scheduling node and one or more slave scheduling nodes from the plurality of scheduling nodes; the execution domain executes a Raft algorithm and elects one master executor and one or more slave executors from the plurality of executors; the master executor registers the domain information of the corresponding execution domain with the scheduling center, wherein the domain information comprises information about the master executor.
The scheduling center and the execution domain may execute the Raft algorithm when the system starts; alternatively, the scheduling center or the execution domain executes the Raft algorithm when a preset trigger time is reached; the trigger time comprises periodic triggering, random triggering, or triggering by a node abnormality.
Based on this design, the scheduling center is highly available and decentralized, which further ensures the stability and security of banking business. At the same time, high availability of both the scheduling center and the executors is achieved, and the lock-free design greatly improves the throughput of scheduling task execution.
Fig. 5 is a schematic flow chart of a scheduling method based on a credit card distributed scheduling platform according to an embodiment of the present application. The method is applicable to any of the credit card distributed scheduling platforms shown in Figs. 1-4. The credit card distributed scheduling platform comprises at least one scheduling center and at least one execution domain; each scheduling center comprises a plurality of scheduling nodes, including a master scheduling node and slave scheduling nodes, and each execution domain comprises a plurality of executors, including a master executor and slave executors. As shown in Fig. 5, the method includes:
S510, determining, by the master scheduling node, a primary task to be executed, wherein the primary task comprises at least one secondary task and is used to control the secondary tasks to execute in sequence; the at least one secondary task may comprise a normal secondary task and a special secondary task, and the special secondary task comprises a plurality of secondary sub-tasks;
S520, making, by the master scheduling node, a decision based on the task load conditions of the current slave scheduling nodes, and determining a target slave scheduling node; sending the primary task to be executed to the target slave scheduling node for execution;
S530, receiving, by the target slave scheduling node, the primary task to be executed, and determining a current secondary task from the at least one secondary task to be executed; the scheduling node maintains a first correspondence between secondary tasks and execution domains and a second correspondence between execution domains and master executors;
S540, determining, by the target slave scheduling node, a target execution domain corresponding to the current secondary task based on the first correspondence, and determining a target master executor corresponding to the target execution domain based on the second correspondence; sending the current secondary task to the target master executor;
S550, receiving, by the target master executor, the current secondary task, making a decision based on the task load conditions of the executors in the current execution domain, and determining a first target executor; sending the current secondary task to the first target executor for execution;
S560, when the current secondary task is a special secondary task, determining, by the first target executor, a target secondary sub-task among the plurality of secondary sub-tasks corresponding to the current secondary task, and determining a second target executor corresponding to the target secondary sub-task based on the first correspondence;
S570, executing the target secondary sub-task by the second target executor, and updating the execution status of the task in an execution domain database.
In some embodiments, the method further comprises:
when the current secondary task is a normal secondary task, executing the current secondary task directly by the first target executor, and updating the execution status of the task in the execution domain database.
In some embodiments, the method further comprises:
distributing, by the target slave scheduling node, the secondary tasks based on the primary task to be executed and maintaining the task execution state until the primary task to be executed has completed.
In some embodiments, the method further comprises:
periodically reading, by the target master executor, the task execution status from the execution domain database, and sending the task execution status to the corresponding slave scheduling node.
In some embodiments, the method further comprises:
executing, by the scheduling center, a Raft algorithm to elect one master scheduling node and one or more slave scheduling nodes from the plurality of scheduling nodes; executing, by the execution domain, a Raft algorithm to elect one master executor and one or more slave executors from the plurality of executors; and registering, by the master executor, the domain information of the corresponding execution domain with the scheduling center, wherein the domain information comprises information about the master executor.
In some embodiments, each scheduling center includes an odd number of scheduling nodes, and each execution domain includes an odd number of executors.
In some embodiments, the scheduling center and the execution domain execute the Raft algorithm when the system starts; or the scheduling center or the execution domain executes the Raft algorithm when a preset trigger time is reached; the trigger time comprises periodic triggering, random triggering, or triggering by a node abnormality.
Referring to Fig. 6, a server 600 provided in an embodiment of the present application includes at least a processor 601 and a memory 602, the memory 602 storing a computer program executable on the processor 601; the scheduling method provided by the embodiments of the present application is realized when the processor 601 executes the computer program.
The server 600 provided by the embodiments of the present application may also include a bus 603 that connects the different components (including the processor 601 and the memory 602). Bus 603 represents one or more of several types of bus structures, including a memory bus, a peripheral bus, a local bus, and so forth.
The memory 602 may include readable storage media in the form of volatile memory, such as random access memory (RAM) 6021 and/or cache memory 6022, and may further include read-only memory (ROM) 6023. The memory 602 may also include a program tool 6025 having a set (at least one) of program modules 6024, the program modules 6024 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The processor 601 may be a single processing element or a collective term for a plurality of processing elements; for example, the processor 601 may be a central processing unit (CPU) or one or more integrated circuits configured to implement the scheduling method provided in the embodiments of the present application. In particular, the processor 601 may be a general-purpose processor, including but not limited to a CPU, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like.
The server 600 may be in communication with one or more external devices 604 (e.g., keyboard, remote control, etc.), one or more devices that enable a user to interact with the server 600 (e.g., cell phone, computer, etc.), and/or any device that enables the server 600 to communicate with one or more other servers 600 (e.g., router, modem, etc.). Such communication may occur through an Input/Output (I/O) interface 605. Also, the server 600 may communicate with one or more networks (e.g., local area network (Local Area Network, LAN), wide area network (Wide Area Network, WAN) and/or public network, such as the internet) via the network adapter 606. As shown in fig. 6, the network adapter 606 communicates with the other modules of the server 600 via the bus 603. It should be appreciated that although not shown in fig. 6, other hardware and/or software modules may be used in connection with server 600, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, disk array (Redundant Arrays of Independent Disks, RAID) subsystems, tape drives, data backup storage subsystems, and the like.
It should be noted that the server 600 shown in fig. 6 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
The following describes a computer-readable storage medium provided in an embodiment of the present application. The computer-readable storage medium provided in the embodiment of the present application stores computer instructions that, when executed by a processor, implement the scheduling method provided in the embodiments of the present application. Specifically, the computer instructions may be built into or installed in the processor, so that the processor may implement the scheduling method provided in the embodiments of the present application by executing the built-in or installed computer instructions.
In addition, the scheduling method provided by the embodiments of the present application may also be implemented as a computer program product, where the computer program product includes program code, and the program code implements the scheduling method provided by the embodiments of the present application when run on a processor.
The computer program product provided by the embodiments of the present application may employ one or more computer-readable storage media, which may be, but are not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof.
The computer program product provided by the embodiments of the present application may be a CD-ROM that includes program code, and may also be run on a server. However, the computer program product provided by the embodiments of the present application is not limited thereto; the computer-readable storage medium may be any tangible medium that can contain or store the program code for use by or in connection with an instruction execution system, apparatus, or device.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the elements described above may be embodied in one element in accordance with embodiments of the present application. Conversely, the features and functions of one unit described above may be further divided into a plurality of units to be embodied.
Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to encompass such modifications and variations.

Claims (10)

1. A scheduling method based on a credit card distributed scheduling platform, characterized in that the credit card distributed scheduling platform comprises at least one scheduling center and at least one execution domain; each scheduling center comprises a plurality of scheduling nodes, including a master scheduling node and slave scheduling nodes, and each execution domain comprises a plurality of executors, including a master executor and slave executors; the method comprises the following steps:
determining, by the master scheduling node, a primary task to be executed, wherein the primary task comprises at least one secondary task, the primary task is used to control the secondary tasks to execute in sequence, the at least one secondary task comprises a normal secondary task and a special secondary task, and the special secondary task comprises a plurality of secondary sub-tasks;
making, by the master scheduling node, a decision based on the task load conditions of the current slave scheduling nodes, and determining a target slave scheduling node; sending the primary task to be executed to the target slave scheduling node for execution;
receiving, by the target slave scheduling node, the primary task to be executed, and determining a current secondary task from the at least one secondary task to be executed; the scheduling node maintains a first correspondence between secondary tasks and execution domains and a second correspondence between execution domains and master executors;
determining, by the target slave scheduling node, a target execution domain corresponding to the current secondary task based on the first correspondence, and determining a target master executor corresponding to the target execution domain based on the second correspondence; sending the current secondary task to the target master executor;
receiving, by the target master executor, the current secondary task, and making a decision based on the task load conditions of the executors in the current execution domain to determine a first target executor; sending the current secondary task to the first target executor for execution;
when the current secondary task is a special secondary task, determining, by the first target executor, a target secondary sub-task among the plurality of secondary sub-tasks corresponding to the current secondary task, and determining a second target executor corresponding to the target secondary sub-task based on the first correspondence;
and executing the target secondary sub-task by the second target executor, and updating the execution status of the task in an execution domain database.
2. The method as recited in claim 1, further comprising:
when the current secondary task is a normal secondary task, executing the current secondary task directly by the first target executor, and updating the execution status of the task in the execution domain database.
3. The method as recited in claim 1, further comprising:
and distributing, by the target slave scheduling node, the secondary tasks based on the primary task to be executed and maintaining the task execution state until the primary task to be executed has completed.
4. A method according to claim 3, further comprising:
and periodically reading, by the target master executor, the task execution status from the execution domain database, and sending the task execution status to the corresponding slave scheduling node.
5. The method of any one of claims 1-4, further comprising:
the scheduling center executes a Raft algorithm and elects one master scheduling node and one or more slave scheduling nodes from the plurality of scheduling nodes; the execution domain executes a Raft algorithm and elects one master executor and one or more slave executors from the plurality of executors; the master executor registers the domain information of the corresponding execution domain with the scheduling center, wherein the domain information comprises information about the master executor.
6. The method of any one of claims 1-4, wherein each scheduling center includes an odd number of scheduling nodes and each execution domain comprises an odd number of executors.
7. The method of claim 5, wherein the scheduling center and the execution domain execute the Raft algorithm when the system starts;
or,
the scheduling center or the execution domain executes the Raft algorithm when a preset trigger time is reached; the trigger time comprises periodic triggering, random triggering, or triggering by a node abnormality.
8. A credit card distributed scheduling platform, characterized in that the credit card distributed scheduling platform comprises at least one scheduling center and at least one execution domain; each scheduling center comprises a plurality of scheduling nodes, including a master scheduling node and slave scheduling nodes, and each execution domain comprises a plurality of executors, including a master executor and slave executors;
the master scheduling node is configured to determine a primary task to be executed, wherein the primary task comprises at least one secondary task, the primary task is used to control the secondary tasks to execute in sequence, the at least one secondary task comprises a normal secondary task and a special secondary task, and the special secondary task comprises a plurality of secondary sub-tasks; to make a decision based on the task load conditions of the current slave scheduling nodes and determine a target slave scheduling node; and to send the primary task to be executed to the target slave scheduling node for execution;
the target slave scheduling node is configured to receive the primary task to be executed and determine a current secondary task from the at least one secondary task to be executed; the scheduling node maintains a first correspondence between secondary tasks and execution domains and a second correspondence between execution domains and master executors; the target slave scheduling node determines a target execution domain corresponding to the current secondary task based on the first correspondence, determines a target master executor corresponding to the target execution domain based on the second correspondence, and sends the current secondary task to the target master executor;
the target master executor is configured to receive the current secondary task, make a decision based on the task load conditions of the executors in the current execution domain, determine a first target executor, and send the current secondary task to the first target executor for execution;
the first target executor is configured to determine, when the current secondary task is a special secondary task, a target secondary sub-task among the plurality of secondary sub-tasks corresponding to the current secondary task, and to determine, based on the first correspondence, a second target executor corresponding to the target secondary sub-task;
the second target executor is configured to execute the target secondary sub-task and update the execution status of the task in an execution domain database.
9. A server comprising a memory and a processor, the memory having stored therein a computer program executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium storing computer executable instructions which, when invoked and executed by a processor, cause the processor to perform the method of any one of claims 1 to 7.
CN202410027683.3A 2024-01-09 2024-01-09 Credit card distributed scheduling platform and scheduling method Active CN117539642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410027683.3A CN117539642B (en) 2024-01-09 2024-01-09 Credit card distributed scheduling platform and scheduling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410027683.3A CN117539642B (en) 2024-01-09 2024-01-09 Credit card distributed scheduling platform and scheduling method

Publications (2)

Publication Number Publication Date
CN117539642A CN117539642A (en) 2024-02-09
CN117539642B true CN117539642B (en) 2024-04-02

Family

ID=89792274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410027683.3A Active CN117539642B (en) 2024-01-09 2024-01-09 Credit card distributed scheduling platform and scheduling method

Country Status (1)

Country Link
CN (1) CN117539642B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110798499A (en) * 2018-08-03 2020-02-14 高新兴科技集团股份有限公司 Distributed service coordination system and method
CN110955506A (en) * 2019-11-26 2020-04-03 浙江电子口岸有限公司 Distributed job scheduling processing method
CN111367642A (en) * 2020-03-09 2020-07-03 中国铁塔股份有限公司 Task scheduling execution method and device
WO2020140683A1 (en) * 2019-01-04 2020-07-09 深圳壹账通智能科技有限公司 Task scheduling method and apparatus, computer device, and storage medium
CN112860393A (en) * 2021-01-20 2021-05-28 北京科技大学 Distributed task scheduling method and system
CN113032125A (en) * 2021-04-02 2021-06-25 京东数字科技控股股份有限公司 Job scheduling method, device, computer system and computer-readable storage medium
CN114327843A (en) * 2020-09-29 2022-04-12 华为技术有限公司 Task scheduling method and device
CN115129466A (en) * 2022-03-31 2022-09-30 西安电子科技大学 Cloud computing resource hierarchical scheduling method, system, device and medium
WO2022206426A1 (en) * 2021-03-30 2022-10-06 华为云计算技术有限公司 Distributed transaction processing method and system, and related device
CN115840631A (en) * 2023-01-04 2023-03-24 中科金瑞(北京)大数据科技有限公司 RAFT-based high-availability distributed task scheduling method and equipment
CN115951986A (en) * 2023-02-17 2023-04-11 未来电视有限公司 Task scheduling method, node equipment and remote server
CN116055563A (en) * 2022-11-22 2023-05-02 北京明朝万达科技股份有限公司 Task scheduling method, system, electronic equipment and medium based on Raft protocol
CN116149827A (en) * 2023-04-04 2023-05-23 云粒智慧科技有限公司 Distributed task scheduling system and distributed task scheduling execution system
CN116684418A (en) * 2023-08-03 2023-09-01 北京神州泰岳软件股份有限公司 Calculation power arrangement scheduling method, calculation power network and device based on calculation power service gateway
WO2023165105A1 (en) * 2022-03-04 2023-09-07 深圳海星智驾科技有限公司 Load balancing control method and apparatus, electronic device, storage medium, and computer program
CN116719623A (en) * 2023-06-08 2023-09-08 中国工商银行股份有限公司 Job scheduling method, job result processing method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012173536A1 (en) * 2011-06-15 2012-12-20 Telefonaktiebolaget Lm Ericsson (Publ) Methods and arrangement for dispatching requests
CN110119311B (en) * 2019-04-12 2022-01-04 华中科技大学 Distributed stream computing system acceleration method based on FPGA
US20230221996A1 (en) * 2022-01-13 2023-07-13 Dell Products L.P. Consensus-based distributed scheduler

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110798499A (en) * 2018-08-03 2020-02-14 高新兴科技集团股份有限公司 Distributed service coordination system and method
WO2020140683A1 (en) * 2019-01-04 2020-07-09 深圳壹账通智能科技有限公司 Task scheduling method and apparatus, computer device, and storage medium
CN110955506A (en) * 2019-11-26 2020-04-03 浙江电子口岸有限公司 Distributed job scheduling processing method
CN111367642A (en) * 2020-03-09 2020-07-03 中国铁塔股份有限公司 Task scheduling execution method and device
CN114327843A (en) * 2020-09-29 2022-04-12 华为技术有限公司 Task scheduling method and device
CN112860393A (en) * 2021-01-20 2021-05-28 北京科技大学 Distributed task scheduling method and system
WO2022206426A1 (en) * 2021-03-30 2022-10-06 华为云计算技术有限公司 Distributed transaction processing method and system, and related device
CN113032125A (en) * 2021-04-02 2021-06-25 京东数字科技控股股份有限公司 Job scheduling method, device, computer system and computer-readable storage medium
WO2023165105A1 (en) * 2022-03-04 2023-09-07 深圳海星智驾科技有限公司 Load balancing control method and apparatus, electronic device, storage medium, and computer program
CN115129466A (en) * 2022-03-31 2022-09-30 西安电子科技大学 Cloud computing resource hierarchical scheduling method, system, device and medium
CN116055563A (en) * 2022-11-22 2023-05-02 北京明朝万达科技股份有限公司 Task scheduling method, system, electronic equipment and medium based on Raft protocol
CN115840631A (en) * 2023-01-04 2023-03-24 中科金瑞(北京)大数据科技有限公司 RAFT-based high-availability distributed task scheduling method and equipment
CN115951986A (en) * 2023-02-17 2023-04-11 未来电视有限公司 Task scheduling method, node equipment and remote server
CN116149827A (en) * 2023-04-04 2023-05-23 云粒智慧科技有限公司 Distributed task scheduling system and distributed task scheduling execution system
CN116719623A (en) * 2023-06-08 2023-09-08 中国工商银行股份有限公司 Job scheduling method, job result processing method and device
CN116684418A (en) * 2023-08-03 2023-09-01 北京神州泰岳软件股份有限公司 Calculation power arrangement scheduling method, calculation power network and device based on calculation power service gateway

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design of a fault-recording master station system based on a distributed scheduling architecture; 刘斌, 宁雪莹, 廖晓春; Electronic Technology & Software Engineering; 2018-11-05 (Issue 20); full text *
DataboxMR: a lightweight distributed scheduling framework for remote sensing data; 孟祥海 et al.; Frontiers of Data & Computing; 2021-06-20; Vol. 03 (Issue 03); full text *

Also Published As

Publication number Publication date
CN117539642A (en) 2024-02-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant