CN112433838A - Batch scheduling method, device, equipment and computer storage medium - Google Patents


Info

Publication number
CN112433838A
Authority
CN
China
Prior art keywords
machine
partition
weight
total
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011344404.4A
Other languages
Chinese (zh)
Inventor
陈镇涌
江旻
杨杨
李斌
王磊
侯向辉
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202011344404.4A
Publication of CN112433838A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical field of financial technology (Fintech) and discloses a batch scheduling method comprising the following steps: determining a target application among all applications to be run in parallel, and performing primary/standby partitioning on all machines associated with the target application to determine the partition of each machine; obtaining a decision factor for each machine, and calculating each machine's weight according to a preset configuration table and its decision factor; and obtaining the total data volume to be processed by the target application, calculating the fragment data volume of each partition according to the total data volume and the machine weights, and performing batch scheduling according to the fragment data volumes. The invention also discloses a batch multi-active apparatus, a device, and a computer storage medium. The invention improves the scheduling efficiency of multi-active batch processing.

Description

Batch scheduling method, device, equipment and computer storage medium
Technical Field
The invention relates to the technical field of financial technology (Fintech), in particular to a batch scheduling method, a batch scheduling device, batch scheduling equipment and a computer storage medium.
Background
With the development of computer technology, more and more technologies (big data, distributed computing, blockchain, artificial intelligence, and the like) are applied in the financial field, and the traditional financial industry is gradually shifting to financial technology (Fintech). Due to the security and real-time requirements of the financial industry, however, higher demands are placed on multi-site active-active batch technology. Existing batch applications support only a single active site. When batch applications run across multiple machine rooms, database access across machine rooms performs much worse than access within the same machine room: each query from a host to the database of the disaster-recovery machine room adds roughly 10 ms of latency. As a result, when batches are started simultaneously, cross-machine-room batch applications run longer than same-machine-room ones, slowing down the overall batch run. How to improve the scheduling efficiency of multi-active batch processing has therefore become a problem that urgently needs to be solved.
Disclosure of Invention
The main object of the present invention is to provide a batch scheduling method, apparatus, device, and computer storage medium, aiming to improve the scheduling efficiency of multi-active batch processing.
In order to achieve the above object, the present invention provides a batch scheduling method, including the steps of:
determining a target application in all applications to be run in parallel, and performing primary and standby partition processing on all machines associated with the target application to determine partitions corresponding to all the machines;
obtaining a decision factor of each machine, and calculating the machine weight of each machine according to a preset configuration table and the decision factor of each machine;
and acquiring the total data volume to be operated by the target application, calculating the fragment data volume of each partition according to the total data volume and the machine weight of each machine, and performing batch scheduling according to each fragment data volume.
Optionally, the step of obtaining a decision factor of each machine, and calculating a machine weight of each machine according to a preset configuration table and the decision factor of each machine includes:
sequentially traversing each machine, acquiring partition types, thread numbers, memory information and reading efficiency of the traversed machines, and extracting area weights corresponding to the partition types, thread weights corresponding to the thread numbers, memory weights corresponding to the memory information and reading efficiency weights corresponding to the reading efficiency from a preset configuration table, wherein the decision factors comprise the partition types, the thread numbers, the memory information and the reading efficiency;
and calculating a weight product among the partition weight, the thread weight, the memory weight and the reading efficiency weight, and taking the weight product as a machine weight of the traversed machine.
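As a sketch, the weight product above can be computed from per-factor lookup tables; the dictionary values below mirror the example configuration in the description (Tables 2 to 5) and are illustrative, not a definitive implementation:

```python
# Sketch: machine weight as the product of the four factor weights looked up
# in a preset configuration table. The lookup values mirror the embodiment's
# example configuration and are illustrative.
PARTITION_WEIGHT = {"Master": 10, "same-city Slave": 8, "remote Slave": 5}
THREAD_WEIGHT = {4: 4, 2: 2}
MEMORY_WEIGHT = {"4G": 4, "2G": 2, "1G": 1}
READ_WEIGHT = {"2ms": 100, "5ms": 40, "10ms": 40, "20ms": 10}

def machine_weight(partition_type, thread_count, memory, read_efficiency):
    """Machine weight = partition weight * thread weight * memory weight * read weight."""
    return (PARTITION_WEIGHT[partition_type]
            * THREAD_WEIGHT[thread_count]
            * MEMORY_WEIGHT[memory]
            * READ_WEIGHT[read_efficiency])

print(machine_weight("Master", 4, "4G", "2ms"))        # 10*4*4*100 = 16000
print(machine_weight("remote Slave", 2, "2G", "20ms")) # 5*2*2*10 = 200
```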
Optionally, the step of calculating the partitioned data volume of each partition according to the total data volume and the machine weight of each machine includes:
determining the machine weight ratio of each partition according to the machine weight of each machine and the partition corresponding to each machine;
traversing each machine weight ratio, calculating a first product of the total data volume and the traversed machine weight ratio, taking the first product as total partition data of a partition corresponding to the traversed machine weight ratio, and determining the fragment data volume of each partition based on the total partition data.
Optionally, the step of determining a machine weight ratio of each partition according to the machine weight of each machine and the partition corresponding to each machine includes:
calculating partition machine weight of each partition based on the machine weight of each machine and the partition corresponding to each machine;
calculating the sum of the partition machine weights, taking the sum as a total machine weight, traversing the partition machine weights, calculating a first proportion value of the traversed partition machine weight and the total machine weight, and taking the first proportion value as the machine weight ratio of the partition corresponding to the traversed partition machine weight.
Optionally, the step of determining the fragmentation data volume of each partition based on the total data of each partition includes:
traversing each partition, determining the total machine number of all machines in the traversed partition and the thread number of each machine, calculating a second product of the total machine number and the thread number of the machine, and taking the second product as the total fragment number;
and determining the total partition data corresponding to the traversed partition based on the total partition data, calculating a second ratio of that total partition data to the total fragment number, and taking the second ratio as the fragment data volume of the traversed partition.
Optionally, the step of performing batch scheduling according to each of the sliced data volumes includes:
calculating a third product of the total fragment quantity of the standby partitions in each partition and a preset fragment executable number;
determining the total partition data of the standby partition in each partition, and if the data quantity of the total partition data of the standby partition is smaller than or equal to the third product, performing batch scheduling using only the machines in the standby partition; or
and if the data volume of the total partition data of the standby partition is larger than the third product, executing the step of performing batch scheduling according to each fragment data volume.
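A minimal sketch of this decision, assuming the preset executable number per fragment is 20 as in the description (function and constant names are illustrative):

```python
# Sketch of the standby-first scheduling decision: if the standby (Slave)
# partition's total data fits within (total fragment count * preset
# executable number per fragment), run the whole batch on the standby
# partition; otherwise fall back to per-partition weighted scheduling.
PER_FRAGMENT_EXECUTABLE = 20  # preset executable number per fragment

def choose_schedule(slave_total_data, slave_total_fragments):
    third_product = slave_total_fragments * PER_FRAGMENT_EXECUTABLE
    if slave_total_data <= third_product:
        return "slave-only"   # schedule only on standby-partition machines
    return "per-partition"    # schedule by each partition's fragment data volume

print(choose_schedule(100, 10))  # 100 <= 10*20 -> "slave-only"
```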
Optionally, after the step of performing batch scheduling according to each of the sliced data volumes, the method includes:
if the partition of the running machine is a primary partition, registering a batch monitoring service for the running machine, and detecting, at a preset time interval, whether the running machine's partition has been switched from primary to standby;
and if the running machine's partition has been switched from primary to standby, canceling the running machine's batch monitoring service and executing the steps of obtaining the decision factor of each machine and calculating the machine weight of each machine according to the preset configuration table and the decision factor of each machine.
In addition, to achieve the above object, the present invention also provides a batch multi-activity device, including:
the system comprises a determining module, a judging module and a judging module, wherein the determining module is used for determining a target application in all applications to be run in parallel and performing primary and standby partition processing on all machines associated with the target application to determine partitions corresponding to all the machines;
the acquisition module is used for acquiring the decision factor of each machine and calculating the machine weight of each machine according to a preset configuration table and the decision factor of each machine;
and the operation module is used for acquiring the total data volume to be operated by the target application, calculating the fragment data volume of each partition according to the total data volume and the machine weight of each machine, and performing batch scheduling according to each fragment data volume.
In addition, to achieve the above object, the present invention also provides a batch multi-active device, including: a memory, a processor, and a batch multi-active program stored in the memory and executable on the processor, wherein the batch multi-active program, when executed by the processor, implements the steps of the batch scheduling method described above.
In addition, to achieve the above object, the present invention further provides a computer storage medium, where a batch multi-live program is stored, and when being executed by a processor, the batch multi-live program implements the steps of the batch scheduling method as described above.
The method determines a target application among all applications to be run in parallel and performs primary/standby partitioning on all machines associated with the target application to determine the partition of each machine; obtains a decision factor for each machine and calculates each machine's weight according to a preset configuration table and its decision factor; and obtains the total data volume to be processed by the target application, calculates the fragment data volume of each partition according to the total data volume and the machine weights, and performs batch scheduling according to the fragment data volumes. Because the total data is divided according to machine weight, the running time of each machine stays consistent, which avoids the prior-art problem that inconsistent running times across machines slow down the whole batch, and improves batch scheduling efficiency.
Drawings
FIG. 1 is a schematic diagram of a batch multi-live device architecture of a hardware runtime environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a batch scheduling method according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of an apparatus module of the batch multi-activity apparatus of the present invention;
FIG. 4 is a flowchart illustrating a batch scheduling method according to the present invention;
FIG. 5 is a schematic flow chart of a heartbeat report record in the batch scheduling method of the present invention;
FIG. 6 is a schematic diagram of a comparison process between a slicing method of the prior art and a slicing method of the present invention;
FIG. 7 is a schematic flow chart illustrating data distribution in the batch scheduling method according to the present invention;
FIG. 8 is a schematic flow chart of batch scheduling after primary/standby switching in the batch scheduling method of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic diagram of a batch multi-live device structure of a hardware operating environment according to an embodiment of the present invention.
The batch multi-live equipment in the embodiment of the invention can be a PC (personal computer) or server equipment, and a Java virtual machine runs on the batch multi-live equipment.
As shown in fig. 1, the batch multi-activity apparatus may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the batch multi-activity apparatus configuration shown in fig. 1 does not constitute a limitation of the apparatus and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a batch multi-active program.
In the batch multi-active device shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the batch multi-live program stored in the memory 1005 and perform the operations in the batch scheduling method described below.
Based on the hardware structure, the embodiment of the batch scheduling method is provided.
Referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of a batch scheduling method of the present invention, where the method includes:
step S10, determining a target application in all applications to be run in parallel, and performing primary and standby partition processing on all machines associated with the target application to determine partitions corresponding to all the machines;
At present, existing batch programs cannot achieve true multi-site activity: backup applications can be deployed at remote sites, but they are not started, and if the applications at all sites were started simultaneously to participate in the batch together, file download and export would increase time consumption when the database is read and written across IDCs (Internet Data Centers). A batch scheduling method is therefore provided to realize N-machine-room, N-center multi-active operation: batch programs execute together, dynamically expanded clusters in different machine rooms are identified automatically, and each machine's cross-machine-room database access performance is detected automatically. In addition, this embodiment provides a prediction-based machine-feature weight adjustment algorithm, so that machine-room switching requires no manual intervention and the allocation of batch machine resources switches automatically. Referring to fig. 4, first all applications to be run in parallel by the N centers in the N machine rooms are determined, and each machine in each machine room reports its IP and DCN (Data Center Node) via heartbeat; an application execution performance analysis program is started to analyze and feed back machine performance; the running weight is then calculated from the machine performance, and a reasonable fragment data volume is calculated from the number and weights of the machines; fragmentation is performed according to the machine weights in the primary and standby partitions; and when a disaster-recovery switch occurs, the changed resource distribution is identified automatically and heartbeats are re-reported to the respective DCNs.
Therefore, in this embodiment, all applications to be run in parallel by the N centers in the N machine rooms are determined, and each machine in each machine room reports its IP and DCN via heartbeat; as shown in fig. 5, a heartbeat record is written to the database for every batch application A in the data center node. That is, all machines associated with the target application are determined, meaning all machines the target application needs when it runs, and primary/standby partitioning is performed on these machines according to the target application to determine each machine's partition, such as a primary partition and a standby partition, or a primary partition, a same-city standby partition, and a remote standby partition.
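A toy sketch of the heartbeat report just described; the record fields and the dict-based store are hypothetical stand-ins for the patent's database table:

```python
# Sketch: each machine periodically reports its IP and DCN (Data Center Node),
# and the record is stored so the scheduler can later enumerate the machines
# associated with a target application. The dict below is a stand-in for the
# heartbeat report table in the database.
import time

heartbeat_table = {}  # key: machine IP, value: latest heartbeat record

def report_heartbeat(app, ip, dcn, partition):
    heartbeat_table[ip] = {
        "app": app, "dcn": dcn, "partition": partition, "ts": time.time(),
    }

report_heartbeat("batch-app-A", "10.0.0.1", "DCN-1", "Master")
report_heartbeat("batch-app-A", "10.0.1.1", "DCN-2", "same-city Slave")

# Machines associated with the target application, in ascending IP order
# (the description requires heartbeat queries to return IPs ascending):
machines = sorted(ip for ip, r in heartbeat_table.items() if r["app"] == "batch-app-A")
print(machines)
```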
Step S20, obtaining a decision factor of each machine, and calculating the machine weight of each machine according to a preset configuration table and the decision factor of each machine;
In this embodiment, the application execution performance analysis program of the target application is started to obtain the performance characteristics of each machine, that is, each machine's decision factor. The weight calculation is the same for all machines: first the machine's decision factors (partition type, thread number, memory information, and reading efficiency) are obtained, as shown in Table 1.
Decision factor | Decision condition
IDC type (partition type) | Master primary partition or Slave standby partition
CPU core/multi-thread count (thread number) | number of CPU threads
Memory information | memory size
Database reading efficiency (reading efficiency) | time consumed by the last 100 records
TABLE 1
When obtaining the weight of the partition type, that is, the IDC-type weight, a configuration table is added and the current DCN is configured; whether the machine belongs to a Master or Slave partition is determined by querying an interface provided by the release system, and the query result may be modified according to user requirements. That is, the weight corresponding to the machine's partition type is looked up in the configuration table; in this embodiment the Master partition weight may be set to 10 and the Slave partition weight to 5. Part of the configuration table may be as shown in Table 2 below.
Partition type | Weight
Master | 10
Same-city Slave | 8
Remote Slave | 5
TABLE 2
When obtaining the weight of the thread number, that is, the weight of the CPU's core/multi-thread count, the reference CPU core count is set to 4 in the configuration table and the core count of the standby machine to 2, and the weight corresponding to the machine's thread number is looked up in the configuration table. In this embodiment the machine's performance is related to the computer model chosen for the machine room, and the entry for a machine with a weak CPU can be modified according to actual requirements.
Part of the configuration table may be as shown in table 3 below.
Thread number | Weight
4 | 4
2 | 2
TABLE 3
When the weight of the memory information is obtained, a reference memory may be set in the configuration table as a 2G memory, and the weight corresponding to the memory information of the machine may be queried in the configuration table. Part of the configuration table may be as shown in table 4 below.
Memory information | Weight
4G | 4
2G | 2
1G | 1
TABLE 4
When obtaining the weight of the reading efficiency, that is, the database reading efficiency weight, corresponding entries are set in the configuration table. When a DCN application in one machine room accesses another IDC's database across machine rooms, network verification adds latency: a cross-IDC query takes about 10 ms, while a same-machine-room query takes about 2 ms, a large difference. To smooth out query instability, the time consumed by 100 database queries is averaged: same-IDC queries average 2 ms and cross-IDC queries average 10 ms. These averages are recorded in the configuration table, from which the weight corresponding to the machine's reading efficiency is then looked up. Part of the configuration table may be as shown in Table 5 below.
Reading efficiency (average over the last 100 records) | Weight
2ms | 100
5ms | 40
20ms | 10
TABLE 5
After the weights corresponding to the machine's decision factors are obtained, the machine weight can be calculated as their product: machine weight = partition type weight × thread number weight × memory information weight × reading efficiency weight, where the partition type weight is the weight corresponding to the partition type, the thread number weight the weight corresponding to the thread number, the memory information weight the weight corresponding to the memory information, and the reading efficiency weight the weight corresponding to the reading efficiency.
In this embodiment the machine weights of all machines may be calculated in this way. For example:
Master machine weight = partition type weight 10 × thread number weight 4 × 4G memory weight 4 × 2ms reading efficiency weight 100 = 16000.
Same-city Slave machine weight = partition type weight 8 × thread number weight 4 × 4G memory weight 4 × 10ms reading efficiency weight 40 = 5120.
Remote Slave machine weight = partition type weight 5 × thread number weight 2 × 2G memory weight 2 × 20ms reading efficiency weight 10 = 200.
Step S30, obtaining a total data size of the target application to be run, calculating a sliced data size of each partition according to the total data size and a machine weight of each machine, and performing batch scheduling according to each sliced data size.
After the machine weights are calculated, the total weight of all primary and standby machines, that is, the sum of the machine weights of all machines associated with the target application, is calculated: Sum(total weight of all primary and standby machines) = Sum(all Master machine weights) + Sum(all same-city Slave machine weights) + Sum(all remote Slave machine weights).
The partition weight ratio of each partition is then calculated from the total weight of all primary and standby machines. For example:
Master machine weight ratio = Sum(Master machine weights) / Sum(total weight of all primary and standby machines);
Same-city Slave machine weight ratio = Sum(same-city Slave machine weights) / Sum(total weight of all primary and standby machines);
Remote Slave machine weight ratio = Sum(remote Slave machine weights) / Sum(total weight of all primary and standby machines).
Then, the total partition data of each partition is calculated from the obtained total data volume to be processed by the target application. For example:
Sum(Master partition total data) = Sum(total data volume) × Master machine weight ratio;
Sum(same-city Slave partition total data) = Sum(total data volume) × same-city Slave machine weight ratio;
Sum(remote Slave partition total data) = Sum(total data volume) × remote Slave machine weight ratio.
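Using the embodiment's example weights (16000, 5120, 200), the weight ratios and partition totals can be sketched as follows; the total data volume chosen here is illustrative:

```python
# Sketch: partition weight ratios and partition total data, following the
# Sum() formulas above. The per-partition machine weights are the embodiment's
# example values; the total data volume is illustrative.
partition_weights = {"Master": 16000, "same-city Slave": 5120, "remote Slave": 200}
total_weight = sum(partition_weights.values())  # total weight of all machines

total_data = 21320  # total data volume to be processed (illustrative)

partition_totals = {
    p: total_data * w // total_weight  # partition total = total * weight ratio
    for p, w in partition_weights.items()
}
print(partition_totals)
```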
In addition, to enforce a minimum data volume per fragment, and to avoid the situation where, after Slave machines are added, the data evenly distributed to each machine becomes too small per fragment and the extra fragment scheduling causes unnecessary resource overhead, different schemes can be adopted for different scenarios. For example, with 100 records in total and 100 fragments available for resources to call, each fragment would execute only one record; the fragmentation algorithm therefore enforces a floor of 20 records per fragment, so that the same 100 records are allocated to only 5 fragments of 20 records each. Three small-data scenarios are handled: scenario 1, the total data volume is only enough to execute the Slave partition; scenario 2, the total data volume is enough to execute the Slave partition and part of the Master partition; scenario 3, there is no data. The fragmentation principle is that the Slave partition executes first, to verify that the Slave partition's machine environment is available. This avoids the situation where data executes only in the Master partition and, after a sudden Master/Slave switch, the Slave partition is found to be unavailable, causing failures when a large amount of data is then executed there and requiring manual inspection of the environment configuration. Therefore, for scenarios 1 and 2, the Slave partition's total data is floored at (Slave total fragment count × 20); if there is less data than this, all of it is executed in the Slave partition.
As an alternative scheme, the Slave total fragment count can be specially configured: with a small data volume, each Slave machine's fragment count can be configured to 1 so that each machine runs only one fragment, guaranteeing that every machine participates in data processing; the remaining data is executed by the Master partition. In scenario 3 there is no data, so only one fragment task without any data needs to run, and by default a Master machine executes it.
When the total partition data of each partition has been calculated and all machines in each partition need to execute, the total fragment count of each partition may first be calculated: total fragment count = thread number per machine in the partition × total number of machines in the partition. The thread number of each machine in the partition is proportional to its CPU core count, and a user can also adjust a partition's thread number through the configuration table as needed. The data volume per fragment is then the partition's total data / total fragment count. Namely:
Master partition data volume per fragment = Sum(Master partition total data) / Master total fragment count;
Same-city Slave partition data volume per fragment = Sum(same-city Slave partition total data) / same-city Slave total fragment count;
Remote Slave partition data volume per fragment = Sum(remote Slave partition total data) / remote Slave total fragment count.
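A sketch of the per-fragment volume calculation, with illustrative machine and thread counts:

```python
# Sketch: total fragment count = machines in partition * threads per machine,
# and per-fragment volume = partition total data / total fragment count.
# All concrete numbers are illustrative.
def per_fragment_volume(partition_total_data, machine_count, threads_per_machine):
    total_fragments = machine_count * threads_per_machine
    return partition_total_data // total_fragments

# e.g. a Master partition with 16000 records, 4 machines of 4 threads each:
print(per_fragment_volume(16000, 4, 4))  # 16000 / 16 = 1000
```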
The calculated fragment information is stored in the database, and the source of the data each fragment processes is obtained directly from the database, avoiding repeated calculation each time: the calculation is performed once, and data consistency is guaranteed globally. Each machine can then perform batch scheduling according to the calculated fragment data volumes, executing data single-threaded or multi-threaded. Single-threaded data can be executed only on the machine that starts it. Batch-system interaction is realized through files, and multi-fragment, multi-thread batch running is supported for machines at remote sites: a file must first be imported into the database before multi-fragment processing; after processing, downstream data is queried from the database, exported to the local disk, and then pushed to the downstream system. If the file is large, importing it in the Slave partition also incurs considerable delay. The principle of simultaneous batch control is that machines of a standby DCN do not participate in file receiving, import, or export tasks.
In this embodiment, the primary and standby scheduling tasks are handled separately: the Master scheduling queue is responsible only for distributing Master partition data to Master machines; the same-city Slave scheduling queue is responsible only for distributing same-city Slave partition data to same-city Slave machines; and the remote Slave scheduling queue is responsible only for distributing remote Slave partition data to remote Slave machines. Batch scheduling within a partition may distribute shards to machines in task order, that is, the machine hosting Step subtask i is (shard index i) % (total number of machines in the partition corresponding to the Step). When multi-shard scheduling is performed by parallel thread tasks, the machine sets obtained by querying the heartbeat report table must be consistent, so the heartbeat query against the database must return its result set ordered by IP ascending. After the Master and Slave partition machines are screened in memory, the result sets are sorted ascending again as a safeguard. During sharding a machine may crash or hang, so the machine set must be cached in the main sharding process, held in the JVM thread stack of the main thread, and refreshed at each multi-shard Step. For example, as shown in fig. 7, batch application A on the client side distributes data to the servers of each batch application A according to the logical sharding module.
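The in-partition modulo assignment described above can be sketched as follows. The names are illustrative; the one requirement taken from the text is that every scheduler sees the machine list in the same (IP-ascending) order so all nodes agree on the assignment:

```python
def machine_for_subtask(shard_index: int, machines: list) -> str:
    """Step subtask i runs on machine (i % number of machines in the
    Step's partition).  `machines` must be sorted consistently on every
    scheduler, e.g. by IP ascending, so all nodes compute the same owner."""
    return machines[shard_index % len(machines)]

# Heartbeat query results must come back IP-ascending; sort defensively too.
machines = sorted(["10.0.0.2", "10.0.0.1", "10.0.0.3"])
assignments = [machine_for_subtask(i, machines) for i in range(5)]
# shards 0..4 -> .1, .2, .3, .1, .2
```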
In addition, to aid understanding of the weight-based multi-sharding in this embodiment, an example follows. As shown in fig. 6, in the prior art the data is read in one pass and sharded proportionally by the total shard count; that is, the system throughput is divided into 4 equal parts of 1/4 each, with every two parts handled by one thread. In this embodiment, after the modification, the data volume of the primary and standby partitions is divided according to the DCN weight ratio rather than in equal proportions — for example a primary-to-standby ratio of 3:1 — which yields higher batch-processing efficiency than the prior art.
In this embodiment, a target application is determined among all applications to be run in parallel, and primary-standby partition processing is performed on all machines associated with the target application to determine the partition corresponding to each machine; the decision factors of each machine are obtained, and the machine weight of each machine is calculated according to a preset configuration table and those decision factors; the total data volume to be run by the target application is then obtained, the shard data volume of each partition is calculated from the total data volume and the machine weights, and batch scheduling is performed according to each shard data volume. Because the total data is divided according to machine weight, the run times of the individual machines stay consistent, avoiding the prior-art phenomenon in which inconsistent run times across machines slow down the whole batch, and improving batch scheduling efficiency.
Further, based on the first embodiment of the batch scheduling method of the present invention, a second embodiment of the batch scheduling method of the present invention is provided. This embodiment refines step S20 of the first embodiment, namely the step of obtaining the decision factor of each machine and calculating the machine weight of each machine according to a preset configuration table and the decision factor of each machine, and includes:
step a, traversing each machine in sequence, acquiring partition types, thread numbers, memory information and reading efficiency of the traversed machine, and extracting region weights corresponding to the partition types, thread weights corresponding to the thread numbers, memory weights corresponding to the memory information and reading efficiency weights corresponding to the reading efficiency from a preset configuration table, wherein decision factors comprise the partition types, the thread numbers, the memory information and the reading efficiency;
in this embodiment, calculating the machine weight of each machine may proceed by traversing the machines in turn and obtaining each decision factor of the traversed machine, such as its partition type, thread count, memory information, and read efficiency; the preset configuration table is then fetched from the database, and from it are extracted the weight corresponding to the traversed machine's partition type, i.e. the partition weight; the weight corresponding to the thread count, i.e. the thread weight; the weight corresponding to the memory information, i.e. the memory weight; and the weight corresponding to the read efficiency, i.e. the read-efficiency weight.
Step b, calculating the weight product of the partition weight, thread weight, memory weight, and read-efficiency weight, and taking the weight product as the machine weight of the traversed machine.
After the weights corresponding to the traversed machine's decision factors are obtained, the machine's weight can be calculated as machine weight = partition weight × thread weight × memory weight × read-efficiency weight; that is, the weight product of the four factors is taken as the machine weight of the traversed machine. The machine weights of all machines may be calculated in this manner to obtain the weight of each individual machine.
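The weight-product calculation above is trivially small, but a sketch makes the shape of the configuration-table lookup concrete. The table contents and factor values here are hypothetical examples, not values from the disclosure:

```python
# Hypothetical configuration table: decision factor -> weight.
CONFIG_TABLE = {
    ("partition", "Master"): 1.0,
    ("threads", 8): 0.8,
    ("memory", "16G"): 1.2,
    ("read_efficiency", "fast"): 0.5,
}

def machine_weight(partition_type, thread_count, memory_info, read_efficiency):
    """Machine weight = product of the four decision-factor weights
    extracted from the configuration table."""
    return (CONFIG_TABLE[("partition", partition_type)]
            * CONFIG_TABLE[("threads", thread_count)]
            * CONFIG_TABLE[("memory", memory_info)]
            * CONFIG_TABLE[("read_efficiency", read_efficiency)])

w = machine_weight("Master", 8, "16G", "fast")  # 1.0 * 0.8 * 1.2 * 0.5 = 0.48
```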
In this embodiment, the partition type, thread count, memory information, and read efficiency of each traversed machine are obtained by traversing the machines; the weight product of the partition weight corresponding to the partition type, the thread weight corresponding to the thread count, the memory weight corresponding to the memory information, and the read-efficiency weight corresponding to the read efficiency is calculated according to the configuration table and taken as the traversed machine's weight, thereby ensuring the validity of the obtained machine weights.
Further, the step of calculating the shard data volume of each partition according to the total data volume and the machine weight of each machine includes:
Step c, determining the machine weight ratio of each partition according to the machine weight of each machine and the partition corresponding to each machine;
in this embodiment, when calculating the total partition data of each partition, the partition where each machine resides, such as the primary or standby partition, may be determined first, and the machine weight ratio of each partition is then calculated from the machine weights in the different partitions; the specific calculation may be as described in steps e and f and is not repeated here.
Step d, traversing each machine weight ratio, calculating a first product of the total data volume and the traversed machine weight ratio, taking the first product as the total partition data of the partition corresponding to the traversed weight ratio, and determining the shard data volume of each partition based on the total partition data.
After the machine weight ratios of the partitions are calculated, each weight ratio can be traversed; the product of the total data volume to be run by the target application and the traversed weight ratio, i.e. the first product, is computed and taken as the total partition data of the corresponding partition. The total partition data of all partitions can be calculated in this manner, and the shard data volume of each partition is then determined from the number of machines in each partition and each partition's total data.
In this embodiment, the machine weight ratio of each partition is determined from the machine weights and the partition corresponding to each machine; the weight ratios are traversed and the first product of the total data volume and each traversed weight ratio, i.e. that partition's total data, is calculated; determining each partition's shard data volume from its total data ensures the validity of the obtained shard data volumes.
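Steps c through f together amount to a weighted split of the total data across partitions. A compact sketch under illustrative names (the mapping structure and example figures are assumptions, not from the disclosure):

```python
def partition_totals(total_data: float, machine_weights: dict) -> dict:
    """machine_weights maps machine -> (partition, weight).  Sum the
    weights per partition (the partition machine weight), divide each by
    the grand total (the first proportion value / machine weight ratio),
    and multiply by total_data to obtain each partition's total data
    (the 'first product')."""
    per_partition = {}
    for partition, weight in machine_weights.values():
        per_partition[partition] = per_partition.get(partition, 0.0) + weight
    grand_total = sum(per_partition.values())
    return {p: total_data * w / grand_total for p, w in per_partition.items()}

weights = {"m1": ("Master", 2.0), "m2": ("Master", 1.0), "s1": ("Slave", 1.0)}
shares = partition_totals(8000, weights)
# Master gets 3/4 of 8000 = 6000.0; Slave gets 1/4 = 2000.0
```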
Specifically, the step of determining the machine weight ratio of each partition according to the machine weight of each machine and the partition corresponding to each machine includes:
Step e, calculating the partition machine weight of each partition based on the machine weight of each machine and the partition corresponding to each machine;
in this embodiment, when calculating the machine weight ratio of each partition, the partition corresponding to each machine must be determined, and the sum of the machine weights within each partition is calculated as that partition's machine weight.
Step f, calculating the sum of the partition machine weights as the total machine weight, traversing the partition machine weights, calculating a first proportion value of each traversed partition machine weight to the total machine weight, and taking that proportion value as the machine weight ratio of the corresponding partition.
In this embodiment, after the partition machine weights are calculated, their sum is computed as the total machine weight, i.e. the total weight of all machines in the primary and standby partitions: Sum(all Master machine weights) + Sum(all same-city Slave machine weights) + Sum(all remote Slave machine weights). Each partition machine weight is traversed and its ratio to the total machine weight, i.e. the first proportion value, is taken as the machine weight ratio of the corresponding partition, e.g.: Master machine weight ratio = Sum(Master machine weights) / Sum(total weight of all primary and standby machines); same-city Slave machine weight ratio = Sum(same-city Slave machine weights) / Sum(total weight of all primary and standby machines); remote Slave machine weight ratio = Sum(remote Slave machine weights) / Sum(total weight of all primary and standby machines).
In this embodiment, the partition machine weight of each partition is calculated from the machine weights, the total machine weight is then computed, each partition machine weight is traversed, and the first proportion value of the traversed partition machine weight to the total machine weight determines the machine weight ratio, ensuring the accuracy of the obtained weight ratios.
Further, the step of determining the shard data volume of each partition based on the total data of each partition includes:
Step g, traversing each partition, determining the total machine count of all machines in the traversed partition and the thread count of each machine, calculating a second product of the total machine count and the machine thread count, and taking the second product as the total shard count;
in this embodiment, after the total data of each partition is obtained, each partition may be traversed, the total machine count (i.e. the total number of machines) of the traversed partition and the thread count of each machine determined, and the second product of the two computed as the total shard count; that is, total shard count = thread count per machine in the partition × total machine count.
Step h, determining the traversed partition's total data from the total data of the partitions, calculating a second proportion value of the traversed partition's total data to its total shard count, and taking the second proportion value as the shard data volume of the traversed partition.
In this embodiment, the total data corresponding to the traversed partition must be determined from the per-partition totals; the ratio of the traversed partition's total data to its total shard count, i.e. the second proportion value, is then taken as the data volume of each shard in the traversed partition, e.g.: data volume per shard of the Master partition = Sum(Master partition total data) / Master total shard count; data volume per shard of the same-city Slave partition = Sum(same-city Slave partition total data) / same-city Slave total shard count; data volume per shard of the remote Slave partition = Sum(remote Slave partition total data) / remote Slave total shard count.
In this embodiment, the total shard count is determined by traversing each partition and computing the second product of the traversed partition's machine count and thread count; the traversed partition's total data is then determined, and the second proportion value of that total data to the total shard count gives the traversed partition's shard data volume, ensuring the accuracy of the obtained shard data volumes.
Further, the step of performing batch scheduling according to the data volume of each shard includes:
Step k, calculating a third product of the total shard count of the standby partitions among the partitions and a preset shard executable count;
in this embodiment, the total shard count of the standby partition must be identified within the calculated totals; a shard executable count (i.e. the number of records each shard can execute) is preset for each shard in the standby partition, and the third product of the standby partition's total shard count and the preset shard executable count is calculated.
Step m, determining the total partition data of the standby partitions among the partitions, and if the data quantity of a standby partition's total data is less than or equal to the third product, performing batch scheduling using only the machines in the standby partition; or,
after the third product is calculated, the total partition data of the standby partitions must be determined and compared with the third product; if the standby partition's total data is less than or equal to the third product, all data of the target application may be placed in the standby partition for execution, i.e. batch scheduling is performed by the machines of the standby partition alone.
Step n, if the data quantity of the standby partition's total data is greater than the third product, executing the step of performing batch scheduling according to the data volume of each shard.
When the data quantity of the standby partition's total data is found to exceed the third product, the step of performing batch scheduling according to each shard's data volume continues to be executed, i.e. the machines of all partitions participate in the batch scheduling together.
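The decision rule in steps k through n reduces to a single capacity check. A sketch with illustrative names (the "third product" is the standby partition's shard count times the preset per-shard executable count):

```python
def run_on_standby_only(standby_total_data: int,
                        standby_shard_count: int,
                        per_shard_capacity: int) -> bool:
    """If the standby partition's total data fits within its shards'
    executable capacity (the 'third product'), run the whole batch on
    the standby machines alone; otherwise fall back to full
    multi-partition scheduling by per-shard data volume."""
    third_product = standby_shard_count * per_shard_capacity
    return standby_total_data <= third_product

# 10 standby shards at 100 records each can absorb up to 1000 records:
small_batch = run_on_standby_only(900, 10, 100)    # standby alone suffices
large_batch = run_on_standby_only(1100, 10, 100)   # all partitions participate
```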
In this embodiment, the third product of the standby partition's total shard count and the preset shard executable count is calculated; when the standby partition's total data is less than or equal to the third product, batch scheduling is performed directly on the standby partition's machines, and when it is greater, the step of scheduling according to each shard's data volume continues, ensuring the normal operation of batch scheduling.
Further, after the step of performing batch scheduling according to the data volume of each shard, the method includes:
Step p, if the partition of a running machine is a primary partition, registering a batch-start listener service on the running machine, and detecting at a preset time interval whether the running machine's partition has changed from primary to standby;
in this embodiment, the machines in the standby partition listen on the scheduling MQ queue of the batch task, and the batch-start task message is sent at random to one of all listening machines through the MQ queue; as long as a Slave machine does not listen on the batch-start queue, which machine starts the batch can be controlled. Therefore, when the target application starts, it is determined whether the partition of the current (running) machine is the Master (primary) partition; if so, the batch-start listener service is registered on the running machine, and if not, it is not registered. The system also detects, at a preset time interval (any interval set in advance by the user, such as 1 minute), whether the running machine's partition has changed from primary to standby.
Step q, if the partition of the running machine has changed from primary to standby, cancelling the batch-start listener service of the running machine's partition, and executing the step of obtaining the decision factor of each machine and calculating the machine weight of each machine according to the preset configuration table and the decision factors.
When the running machine's partition is judged to be the primary partition but the listener service has not been registered successfully, the batch-start listener registration is initiated and the registration result updated. If the partition is primary and registration has already succeeded, no processing is done. If the partition is standby and registration has succeeded, the batch-start listener is cancelled. If the partition is standby and no service is registered, no processing is done. To prevent the listener from failing to update or deregister promptly during a primary-standby switch, a batch-start check mechanism is added: if the starting machine is not a primary-partition machine, the batch start is refused outright, with a prompt that only the primary partition may start batches. Once a primary-standby switch is confirmed, the steps of obtaining the machines' decision factors and recalculating each machine's weight from the preset configuration table are executed again, in order to readjust the shard data volume of each partition.
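The four registration cases above form a small reconciliation state machine, evaluated on each polling tick. A sketch (names are illustrative; the actual listener API is not specified in the disclosure):

```python
def reconcile_listener(is_primary: bool, registered: bool) -> str:
    """One polling tick of the listener check described above: primary
    partitions must hold a batch-start listener registration, standby
    partitions must not; matching states are left untouched."""
    if is_primary and not registered:
        return "register"      # primary without a listener: initiate registration
    if not is_primary and registered:
        return "unregister"    # demoted to standby: cancel the listener
    return "no-op"             # state already matches the partition role
```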
In this embodiment, when the running machine's partition is judged to be primary, the batch-start listener service is registered on it; when the partition changes from primary to standby, the listener service is cancelled and the step of re-acquiring each machine's weight is executed again, so that batch scheduling can continue across a primary-standby switchover.
It should be noted that the batch scheduling method of the present invention also supports batch backtracking: all data generated during batch scheduling is stored in the database, ensuring that the data volume after backtracking remains consistent. Execution of shard scheduling data is stateful; the processing state is persisted to the database at every update step and includes the total data volume, total shard count, data volume of each shard, the Master partition's total data volume and total shard data, and the Slave partition's total data volume and total shard data. The data processed by each shard is recorded, each record processed by a Step is logged at Chunk granularity, and all primary keys are kept in the YAK_KEY_CONTEXT table, indicating which record the current Step has executed up to. When a batch is interrupted, the sharding algorithm keeps sharding according to the previously recorded primary and standby shard counts; the scheduled machines are stateless and re-acquire machine information for sharding, and execution continues from the record reached before the interruption.
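The stateful checkpoint-and-resume behaviour above can be sketched as follows. The YAK_KEY_CONTEXT table name comes from the description, but the column layout and the use of an in-memory SQLite database are assumptions purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE yak_key_context ("
    " step TEXT, shard INTEGER, primary_key TEXT, done INTEGER)"
)

def checkpoint(step: str, shard: int, primary_key: str) -> None:
    # Record every processed primary key so an interrupted batch can
    # resume from the last unfinished record with the same shard layout.
    conn.execute("INSERT INTO yak_key_context VALUES (?, ?, ?, 1)",
                 (step, shard, primary_key))

def resume_point(step: str, shard: int) -> int:
    # The count of completed records tells a restarted Step where to continue.
    cur = conn.execute(
        "SELECT COUNT(*) FROM yak_key_context WHERE step=? AND shard=? AND done=1",
        (step, shard))
    return cur.fetchone()[0]

checkpoint("StepA", 0, "K001")
checkpoint("StepA", 0, "K002")
# After an interruption, shard 0 of StepA resumes at record index 2.
```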
In this embodiment, the definition of the primary and standby partitions is determined by whether the database and the machine are in the same machine room. If the batch application undergoes a primary-standby database switch while executing batch data, the machines' primary-standby roles switch along with it. After a remote disaster-recovery switch, remote IDCs become same-city IDCs and same-city IDCs become remote, with their positions exchanged accordingly. After the weights are updated by the per-minute polling, they are swapped, and the corresponding tasks are swapped accordingly during multi-shard scheduling. For example, as shown in fig. 8, the system includes machine rooms DCN IO1 and DCN IO2, database IO1 (standby), database IO2 (primary), APP IP1 through APP IP8 (standby), a scheduling platform, a scheduling queue, a primary partition, and a configuration table. During a primary-standby switch with batch running and multi-active operation, if the DCN of the database in the standby partition becomes abnormal, then after switching to the database with IO2 as primary, operations staff modify the Master DCN to IO2; batch-start service registration proceeds through the scheduling queue, applications in a different IDC from the database do not take part in batch-start service registration, the daily batch cutover is then performed, and the multi-active DCNs and weights are obtained through the configuration table.
Because the primary and standby performance differs after switching, the corresponding weight information must be re-collected. The per-partition scheduling queues re-identify the primary and standby machines, the shard data volumes of the primary and standby partitions are re-divided, and requests are re-initiated to each primary and standby machine through the independent primary-partition scheduling queue, invoking threads to execute the shard subtasks.
The present invention also provides a batch scheduling apparatus, referring to fig. 3, the batch scheduling apparatus includes:
a determining module A10, configured to determine a target application among all applications to be run in parallel, and perform primary-standby partition processing on all machines associated with the target application to determine the partition corresponding to each machine;
an obtaining module A20, configured to obtain the decision factor of each machine, and calculate the machine weight of each machine according to a preset configuration table and the decision factor of each machine;
a running module A30, configured to obtain the total data volume to be run by the target application, calculate the shard data volume of each partition according to the total data volume and the machine weight of each machine, and perform batch scheduling according to each shard data volume.
Optionally, the obtaining module A20 is configured to:
traverse each machine in turn, obtain the partition type, thread count, memory information, and read efficiency of the traversed machine, and extract from a preset configuration table the partition weight corresponding to the partition type, the thread weight corresponding to the thread count, the memory weight corresponding to the memory information, and the read-efficiency weight corresponding to the read efficiency, wherein the decision factors comprise the partition type, thread count, memory information, and read efficiency;
calculate the weight product of the partition weight, thread weight, memory weight, and read-efficiency weight, and take the weight product as the machine weight of the traversed machine.
Optionally, the running module A30 is configured to:
determine the machine weight ratio of each partition according to the machine weight of each machine and the partition corresponding to each machine;
traverse each machine weight ratio, calculate a first product of the total data volume and the traversed machine weight ratio, take the first product as the total partition data of the partition corresponding to the traversed weight ratio, and determine the shard data volume of each partition based on the total partition data.
Optionally, the running module A30 is configured to:
calculate the partition machine weight of each partition based on the machine weight of each machine and the partition corresponding to each machine;
calculate the sum of the partition machine weights as the total machine weight, traverse the partition machine weights, calculate a first proportion value of each traversed partition machine weight to the total machine weight, and take the first proportion value as the machine weight ratio of the corresponding partition.
Optionally, the running module A30 is configured to:
traverse each partition, determine the total machine count of all machines in the traversed partition and the thread count of each machine, calculate a second product of the total machine count and the machine thread count, and take the second product as the total shard count;
determine the traversed partition's total data from the total data of the partitions, calculate a second proportion value of the traversed partition's total data to the total shard count, and take the second proportion value as the shard data volume of the traversed partition.
Optionally, the running module A30 is configured to:
calculate a third product of the total shard count of the standby partitions among the partitions and a preset shard executable count;
determine the total partition data of the standby partitions, and if the data quantity of a standby partition's total data is less than or equal to the third product, perform batch scheduling using only the machines in the standby partition; or,
if the data quantity of the standby partition's total data is greater than the third product, execute the step of performing batch scheduling according to the data volume of each shard.
Optionally, the running module A30 is configured to:
if the partition of a running machine is a primary partition, register a batch-start listener service on the running machine, and detect at a preset time interval whether the running machine's partition has changed from primary to standby;
if the partition of the running machine has changed from primary to standby, cancel the batch-start listener service of the running machine's partition, and execute the steps of obtaining the decision factor of each machine and calculating the machine weight of each machine according to the preset configuration table and the decision factors.
For the methods executed by the program units above, reference may be made to the embodiments of the batch scheduling method of the present invention; details are not repeated here.
The invention also provides a computer storage medium.
The computer storage medium of the present invention stores a batch multi-live program which, when executed by a processor, implements the steps of the batch scheduling method described above.
For the methods implemented when the batch multi-live program running on the processor is executed, reference may be made to the embodiments of the batch scheduling method of the present invention; details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above, including instructions for enabling a terminal device (e.g., a mobile phone, computer, server, air conditioner, or network device) to execute the methods of the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A batch scheduling method is characterized by comprising the following steps:
determining a target application in all applications to be run in parallel, and performing primary and standby partition processing on all machines associated with the target application to determine partitions corresponding to all the machines;
obtaining a decision factor of each machine, and calculating the machine weight of each machine according to a preset configuration table and the decision factor of each machine;
and acquiring the total data volume to be operated by the target application, calculating the fragment data volume of each partition according to the total data volume and the machine weight of each machine, and performing batch scheduling according to each fragment data volume.
2. The batch scheduling method of claim 1, wherein the step of obtaining the decision factor of each machine and calculating the machine weight of each machine according to a preset configuration table and the decision factor of each machine comprises:
sequentially traversing each machine, acquiring the partition type, thread number, memory information and reading efficiency of the traversed machine, and extracting, from a preset configuration table, a partition weight corresponding to the partition type, a thread weight corresponding to the thread number, a memory weight corresponding to the memory information and a reading efficiency weight corresponding to the reading efficiency, wherein the decision factors comprise the partition type, the thread number, the memory information and the reading efficiency;
and calculating a weight product among the partition weight, the thread weight, the memory weight and the reading efficiency weight, and taking the weight product as a machine weight of the traversed machine.
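Claim 2 computes a machine's weight as the product of four factor weights looked up in a configuration table. A minimal Python sketch; the table contents, factor names and values below are illustrative assumptions, not part of the claim:

```python
# Hypothetical preset configuration table mapping each decision-factor
# value to a weight; all names and numbers are illustrative assumptions.
CONFIG = {
    "partition": {"primary": 1.0, "standby": 0.5},
    "threads":   {4: 0.8, 8: 1.0, 16: 1.2},
    "memory_gb": {8: 0.8, 16: 1.0, 32: 1.2},
    "read_eff":  {"low": 0.8, "high": 1.2},
}

def machine_weight(machine):
    """Machine weight = product of partition, thread, memory and
    reading-efficiency weights (claim 2)."""
    return (CONFIG["partition"][machine["partition"]]
            * CONFIG["threads"][machine["threads"]]
            * CONFIG["memory_gb"][machine["memory_gb"]]
            * CONFIG["read_eff"][machine["read_eff"]])
```

A multiplicative combination means any single weak factor (e.g. a standby partition) proportionally scales down the whole machine weight, rather than being averaged away.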
3. The batch scheduling method of claim 1 wherein said step of calculating a sliced data volume for each of said partitions based on said total data volume and a machine weight for each of said machines comprises:
determining the machine weight ratio of each partition according to the machine weight of each machine and the partition corresponding to each machine;
traversing each machine weight ratio, calculating a first product of the total data volume and the traversed machine weight ratio, taking the first product as total partition data of a partition corresponding to the traversed machine weight ratio, and determining the fragment data volume of each partition based on the total partition data.
4. The batch scheduling method of claim 3 wherein the step of determining the machine weight ratio of each partition based on the machine weight of each machine and the partition corresponding to each machine comprises:
calculating partition machine weight of each partition based on the machine weight of each machine and the partition corresponding to each machine;
calculating the sum of the partition machine weights, taking the sum as a total machine weight, traversing the partition machine weights, calculating a first proportion value of the traversed partition machine weight and the total machine weight, and taking the first proportion value as the machine weight ratio of the partition corresponding to the traversed partition machine weight.
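Claims 3 and 4 together split the total data volume across partitions in proportion to each partition's summed machine weight. A minimal sketch assuming machine weights are already computed; the field names are illustrative:

```python
def partition_totals(machines, total_data):
    """Split total_data across partitions in proportion to the sum of
    machine weights in each partition (claims 3 and 4)."""
    # Partition machine weight: sum of machine weights per partition (claim 4)
    part_weight = {}
    for m in machines:
        part_weight[m["partition"]] = part_weight.get(m["partition"], 0.0) + m["weight"]
    # Total machine weight; first ratio = partition weight / total weight,
    # first product = first ratio * total data volume (claim 3)
    total_weight = sum(part_weight.values())
    return {p: total_data * w / total_weight for p, w in part_weight.items()}
```

For example, two primary machines of weights 2 and 1 plus one standby machine of weight 1 give a 3:1 split of the total data between the primary and standby partitions.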
5. The batch scheduling method of claim 3 wherein said step of determining the amount of sharded data for each of said partitions based on the total data for each of said partitions comprises:
traversing each partition, determining the total machine number of all machines in the traversed partition and the thread number of each machine, calculating a second product of the total machine number and the thread number of the machine, and taking the second product as the total fragment number;
and determining traversed partition total data corresponding to the traversed partition based on the total partition data, calculating a second proportion value of the traversed partition total data to the total fragment number, and taking the second proportion value as the fragment data volume of the traversed partition.
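Claim 5 turns each partition's total data into a per-fragment volume. A sketch assuming every machine in a partition has the same thread count, as the claim's single "thread number of the machine" suggests:

```python
def fragment_volume(partition_total, machine_count, threads_per_machine):
    """Per-fragment data volume for one partition (claim 5)."""
    # Second product: total fragment number = machines * threads per machine
    total_fragments = machine_count * threads_per_machine
    # Second proportion value: partition total data / total fragment number
    return partition_total / total_fragments
```

So a partition holding 120 records across 3 machines with 4 threads each yields 12 fragments of 10 records each, one fragment per worker thread.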
6. The batch scheduling method of claim 1, wherein said step of scheduling the batch according to each of said fragmented data volumes comprises:
calculating a third product of the total fragment quantity of the standby partitions in each partition and a preset fragment executable number;
determining total partition data of the standby partitions in each partition, and if the data quantity of the total partition data of the standby partitions is smaller than or equal to the third product, performing batch scheduling according to the machines in the standby partitions; or
and if the data volume of the total partition data of the standby partitions is larger than the third product, executing the step of performing batch scheduling according to each fragment data volume.
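Claim 6 gates scheduling onto the standby partition alone by a capacity check: the standby machines handle the load only if their data fits within their fragments' executable quota. A hedged sketch, where `per_fragment_quota` stands in for the claim's "preset fragment executable number":

```python
def use_standby_only(standby_total_data, standby_fragments, per_fragment_quota):
    """Capacity check of claim 6: True means the standby partition can
    absorb the data on its own."""
    # Third product: total fragments in standby partitions * executable quota
    capacity = standby_fragments * per_fragment_quota
    # Schedule on standby machines alone only if the data fits; otherwise
    # fall back to weighted scheduling by fragment data volume (claim 1).
    return standby_total_data <= capacity
```

This check keeps small workloads entirely on standby machines, while larger ones trigger the full weighted split across all partitions.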
7. The batch scheduling method of claim 1, wherein said step of scheduling the batch according to each of said fragmented data volumes is followed by:
if the partition of the running machine is a main partition, registering and monitoring batching services for the running machine, and detecting whether the partition of the running machine is converted from the main partition into a standby partition or not based on a preset time interval;
and if the partition of the running machine is converted from the main partition into the standby partition, canceling the monitoring batching service of the running machine, and executing the steps of obtaining the decision factor of each machine and calculating the machine weight of each machine according to the preset configuration table and the decision factor of each machine.
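Claim 7 describes a periodic role check with failover handling. A sketch of such a polling loop; the interval, the role-lookup hook and the failover callback are illustrative assumptions, not part of the claim:

```python
import time

def monitor_partition(get_partition, on_failover, interval_s=5.0, max_checks=None):
    """Poll a running machine's partition role at a preset interval; when
    it flips from primary to standby, trigger failover handling (claim 7):
    cancel the monitoring batching service and recompute machine weights.
    get_partition and on_failover are caller-supplied hooks."""
    checks = 0
    role = get_partition()
    while role == "primary":
        if max_checks is not None and checks >= max_checks:
            return role  # stop polling without a failover event
        time.sleep(interval_s)
        role = get_partition()
        checks += 1
    on_failover()  # deregister batching service, re-run weight calculation
    return role
```

The callback keeps the loop decoupled from the scheduler: the same monitor can drive service deregistration, weight recomputation, or both, depending on what the caller wires in.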
8. A batch scheduling apparatus, comprising:
a determining module, wherein the determining module is used for determining a target application in all applications to be run in parallel and performing primary and standby partition processing on all machines associated with the target application to determine partitions corresponding to all the machines;
the acquisition module is used for acquiring the decision factor of each machine and calculating the machine weight of each machine according to a preset configuration table and the decision factor of each machine;
and the operation module is used for acquiring the total data volume to be operated by the target application, calculating the fragment data volume of each partition according to the total data volume and the machine weight of each machine, and performing batch scheduling according to each fragment data volume.
9. A batch scheduling apparatus, characterized in that the batch scheduling apparatus comprises: memory, a processor, and a batch scheduler stored on the memory and executable on the processor, the batch scheduler when executed by the processor implementing the steps of the batch scheduling method of any of claims 1 to 7.
10. A computer storage medium having a batch scheduler stored thereon that, when executed by a processor, performs the steps of the batch scheduling method of any of claims 1 to 7.
CN202011344404.4A 2020-11-25 2020-11-25 Batch scheduling method, device, equipment and computer storage medium Pending CN112433838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011344404.4A CN112433838A (en) 2020-11-25 2020-11-25 Batch scheduling method, device, equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011344404.4A CN112433838A (en) 2020-11-25 2020-11-25 Batch scheduling method, device, equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN112433838A true CN112433838A (en) 2021-03-02

Family

ID=74698332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011344404.4A Pending CN112433838A (en) 2020-11-25 2020-11-25 Batch scheduling method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112433838A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807710A (en) * 2021-09-22 2021-12-17 四川新网银行股份有限公司 Method for sectionally paralleling and dynamically scheduling system batch tasks and storage medium
CN113807710B (en) * 2021-09-22 2023-06-20 四川新网银行股份有限公司 System batch task segmentation parallel and dynamic scheduling method and storage medium

Similar Documents

Publication Publication Date Title
CN112162865B (en) Scheduling method and device of server and server
CN112199194B (en) Resource scheduling method, device, equipment and storage medium based on container cluster
CN109857518B (en) Method and equipment for distributing network resources
CN108776934B (en) Distributed data calculation method and device, computer equipment and readable storage medium
WO2019019400A1 (en) Task distributed processing method, device, storage medium and server
CN107426274B (en) Method and system for service application and monitoring, analyzing and scheduling based on time sequence
CN111752965B (en) Real-time database data interaction method and system based on micro-service
CN108920153B (en) Docker container dynamic scheduling method based on load prediction
CN117370029A (en) Cluster resource management in a distributed computing system
CN111338791A (en) Method, device and equipment for scheduling cluster queue resources and storage medium
CN108268546B (en) Method and device for optimizing database
CN103810045A (en) Resource allocation method, resource manager, resource server and system
CN108121599A (en) A kind of method for managing resource, apparatus and system
CN112753022A (en) Automatic query retry in a database environment
CN111190691A (en) Automatic migration method, system, device and storage medium suitable for virtual machine
CN111190753A (en) Distributed task processing method and device, storage medium and computer equipment
CN112115160B (en) Query request scheduling method and device and computer system
CN111464331A (en) Control method and system for thread creation and terminal equipment
CN112433838A (en) Batch scheduling method, device, equipment and computer storage medium
CN113157411B (en) Celery-based reliable configurable task system and device
CN111338778B (en) Task scheduling method and device, storage medium and computer equipment
CN114116173A (en) Method, device and system for dynamically adjusting task allocation
CN111913784B (en) Task scheduling method and device, network element and storage medium
CN113360481B (en) Data processing method, device, equipment and computer readable storage medium
CN115858499A (en) Database partition processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination