CN110008018B - Batch task processing method, device and equipment - Google Patents

Batch task processing method, device and equipment

Info

Publication number
CN110008018B
CN110008018B (application number CN201910043280.7A)
Authority
CN
China
Prior art keywords
task
processing
server cluster
service data
batch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910043280.7A
Other languages
Chinese (zh)
Other versions
CN110008018A (en)
Inventor
周安林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN201910043280.7A priority Critical patent/CN110008018B/en
Priority to CN202311051119.7A priority patent/CN117076453A/en
Publication of CN110008018A publication Critical patent/CN110008018A/en
Application granted granted Critical
Publication of CN110008018B publication Critical patent/CN110008018B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
                    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
                        • G06F16/22 Indexing; Data structures therefor; Storage structures
                            • G06F16/2282 Tablespace storage structures; Management thereof
                        • G06F16/25 Integrating or interfacing systems involving database management systems
                            • G06F16/252 Integrating or interfacing systems involving database management systems between a Database Management System and a front-end application
                        • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
                        • G06F16/28 Databases characterised by their database models, e.g. relational or object models
                            • G06F16/283 Multi-dimensional databases or data warehouses, e.g. MOLAP or ROLAP
            • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
                • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
                    • G06Q40/02 Banking, e.g. interest calculation or account maintenance
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
                • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of this specification disclose a batch task processing method, apparatus, and device. The scheme comprises the following steps: a first server cluster obtains a batch task processing request trigger instruction; in response to the trigger instruction, it splits the first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set, and distributes the processing tasks in the second processing task set to a second server cluster in the cluster resources. The second server cluster receives the processing tasks distributed by the first server cluster, retrieves the service data corresponding to each processing task from a database, and distributes that service data to a third server cluster in the cluster resources. The third server cluster receives the service data distributed by the second server cluster and processes it.

Description

Batch task processing method, device and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, and a device for processing batch tasks.
Background
In some large-scale data computation scenarios, the volume of data to be computed is very large. The time an institution consumes on interest-settlement and interest-payment days keeps growing, so users cannot receive their interest, or notification of interest due, in time, and other end-of-day tasks that depend on the interest tasks are severely delayed. Many interest tasks in the industry run on a single machine, so as the data volume grows the computing performance cannot meet the service requirement. For example, the banking industry serves a huge customer base, and interest must be calculated and paid per customer: each deposit or loan account of a bank must have its interest settled and paid on a specific date. This processing must complete within the end-of-day batch window of the bank or related financial institution, but interest calculation often takes a long time; if the processing runs too long, it delays the output of the end-of-day batch tasks, affects enterprise operation and the collection and distribution of user interest, and degrades the user experience.
To overcome these problems, many enterprises currently introduce a big data platform: online data is synchronized to an offline big data platform, imported through a data access tool into the offline platform's data warehouse, processed by purpose-written offline logic that completes the per-customer interest calculation, and finally exported back to an online data table for further processing.
However, in this prior art the enterprise must build the data platform itself, so the overall cost is high; synchronizing the data to the offline environment is time-consuming, the stability of the offline environment is poor, and task processing efficiency is low.
Disclosure of Invention
In view of this, the embodiments of the present application provide a method, an apparatus, and a device for processing batch tasks, which improve task processing efficiency without increasing cost.
In order to solve the above technical problems, the embodiments of the present specification are implemented as follows:
The batch task splitting method provided by the embodiments of this specification comprises the following steps:
the method comprises the steps that a first server cluster obtains a batch task processing request trigger instruction;
responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set;
and distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources.
The batch task data retrieval method provided by the embodiments of this specification comprises the following steps:
the second server cluster receives the processing tasks distributed by the first server cluster; the distributing of the processing task by the first server cluster specifically comprises the following steps: acquiring a batch task processing request triggering instruction; responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources;
According to the processing task, the service data corresponding to the processing task is retrieved from a database;
and distributing the service data to a third server cluster in the cluster resources.
The batch task processing method provided by the embodiment of the specification comprises the following steps:
the method comprises the steps that a first server cluster obtains a batch task processing request trigger instruction;
responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set;
distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources;
the second server cluster receives processing tasks distributed by the first server cluster;
according to the processing task, the service data corresponding to the processing task is retrieved from a database;
distributing the service data to a third server cluster in the cluster resources;
the third server cluster receives the service data distributed by the second server cluster; and processing the service data.
The embodiment of the specification provides a batch task splitting device, which comprises:
The trigger instruction acquisition module is used for acquiring a batch task processing request trigger instruction by the first server cluster;
the task splitting module is used for responding to the batch task processing request triggering instruction, splitting the first processing task corresponding to the batch task processing request according to a splitting rule, and obtaining a second processing task set;
and the first distribution module is used for distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources.
The embodiments of this specification provide a batch task data retrieval apparatus, comprising:
the processing task receiving module is used for receiving processing tasks distributed by the first server cluster by the second server cluster; the distributing of the processing task by the first server cluster specifically comprises the following steps: acquiring a batch task processing request triggering instruction; responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources;
The service data acquisition module is used for acquiring service data corresponding to the processing task from the database according to the processing task;
and the second distributing module is used for distributing the service data to a third server cluster in the cluster resources.
The batch task processing device provided in the embodiment of the present specification includes:
the trigger instruction acquisition module is used for acquiring a batch task processing request trigger instruction by the first server cluster;
the task splitting module is used for responding to the batch task processing request triggering instruction, splitting the first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set;
the first distribution module is used for distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources;
the processing task receiving module is used for receiving the processing task distributed by the first server cluster by the second server cluster;
the service data acquisition module is used for acquiring service data corresponding to the processing task from the database according to the processing task;
the second distributing module is used for distributing the service data to a third server cluster in the cluster resources;
The task processing module is used for receiving the service data distributed by the second server cluster by the third server cluster; and processing the service data.
The embodiment of the specification provides batch task splitting equipment, which comprises the following components:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
the method comprises the steps that a first server cluster obtains a batch task processing request trigger instruction;
responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set;
and distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources.
The embodiments of this specification provide a batch task data retrieval device, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to:
the second server cluster receives the processing tasks distributed by the first server cluster; the distributing of the processing task by the first server cluster specifically comprises the following steps: acquiring a batch task processing request triggering instruction; responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources;
according to the processing task, the service data corresponding to the processing task is retrieved from a database;
and distributing the service data to a third server cluster in the cluster resources.
The embodiment of the specification provides batch task processing equipment, which comprises the following components:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
The method comprises the steps that a first server cluster obtains a batch task processing request trigger instruction;
responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set;
distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources;
the second server cluster receives processing tasks distributed by the first server cluster;
according to the processing task, the service data corresponding to the processing task is retrieved from a database;
distributing the service data to a third server cluster in the cluster resources;
the third server cluster receives the service data distributed by the second server cluster; and processing the service data.
The at least one technical scheme adopted by the embodiments of this specification can achieve the following beneficial effects: the batch tasks are split according to the splitting rule, the split tasks are distributed to the corresponding server clusters, each server cluster is responsible for retrieving its corresponding service data from the database, and the retrieved service data is distributed appropriately to the corresponding server clusters in the cluster resources for data processing. The scheme automatically adapts to expansion and shrinkage of the clusters, can expand dynamically as the data volume grows, and can improve task processing efficiency without increasing cost.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic flow chart of a batch task processing method according to embodiment 1 of the present disclosure;
fig. 2 is a flow chart of a batch task splitting method provided in embodiment 2 of the present disclosure;
fig. 3 is a flow chart of a batch task data retrieving method provided in embodiment 3 of the present disclosure;
FIG. 4 is a schematic view of a batch task processing device corresponding to FIG. 1 according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a batch task splitting device corresponding to FIG. 2 according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a batch task data retrieval device corresponding to FIG. 3 according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of a batch task processing system according to an embodiment of the present disclosure;
fig. 8 is a schematic structural view of a batch task processing device corresponding to fig. 1 to 3 according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
In the prior art, online data is synchronized to an offline big data platform, imported through a data access tool into the offline platform's data warehouse, processed by purpose-written offline logic that completes the per-customer interest calculation, and finally exported back to an online data table for data processing. The enterprise must therefore build the data platform, and the overall cost is high; synchronizing the data to the offline environment is time-consuming, the stability of the offline environment is poor, and task processing efficiency is low.
Example 1
Fig. 1 is a flow chart of a batch task processing method provided in embodiment 1 of the present disclosure. From the program perspective, the execution subject of the flow may be a program or an application client that is installed on an application server.
As shown in fig. 1, the process may include the steps of:
s101: the method comprises the steps that a first server cluster obtains a batch task processing request trigger instruction;
the number of the batch tasks is larger than a preset threshold, wherein the preset threshold means that task splitting is not needed within the range of the preset threshold. However, if the preset threshold is exceeded, that is, when a large amount of data needs to be processed, if online transactions are adopted, a timeout phenomenon is usually generated, and the core service is also stressed. The bulk business may include a line of periodic bulk tax deductions, bulk stop payments, or periodic payroll remittance, etc.
A cluster system is a computer system that performs computing tasks through a set of loosely integrated computers coordinated in a highly tight manner by software and/or hardware connections. In a sense, the cluster can be regarded as one computer. Individual computers in a cluster system are often referred to as nodes and are typically connected by a local area network, though other connection methods are possible. A cluster may run one or more distributed systems, or none at all, and a cluster may include multiple servers.
S102: responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set.
The first server cluster can be used to split the batch tasks; during the specific splitting, the batch services are split according to the splitting rule. The first processing task in this step represents the batch service corresponding to the batch service request, and the processing tasks included in the second processing task set are the split processing tasks.
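As a hedged illustration of the splitting rule, the sub-task set might be produced as follows (the function and field names are hypothetical; the patent does not prescribe a concrete data layout):

```python
def split_batch_task(batch_request_id, num_subtasks=100):
    """Split one batch request (the 'first processing task') into a set of
    sub-tasks (the 'second processing task set'), keyed by a task identifier."""
    return [
        {"batch_request_id": batch_request_id, "task_id": task_id}
        for task_id in range(1, num_subtasks + 1)
    ]

# A month-end interest batch split into 100 identifier-keyed sub-tasks.
subtasks = split_batch_task("monthly-interest-settlement")
print(len(subtasks))  # 100
```

Each sub-task carries only its identifier here; the accounts tagged with that identifier are looked up later, in the retrieval step.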
S103: and distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources.
Cluster resources may be measured in different units; for example, the number of machines may be used as the measure, with any machine in the cluster regarded as one unit of cluster resource, so that a divided cluster resource set contains a set number of machines. In the above step, the cluster resources include all clusters in the system, and one cluster may include multiple servers. By configuring the first server cluster, the configuration center distributes the split processing tasks to the server clusters in the cluster resources. The distribution follows the way the tasks were split: if the processing tasks were split according to a certain identifier, one server cluster is assigned only the processing tasks corresponding to that identifier.
When the split tasks are distributed, the running state of each cluster can be obtained through the configuration center. A unified configuration-center application exists in the distributed cluster; it is a communication component based on a publish-subscribe model that connects service publishers with service subscribers, and through which the service providers in the cluster are discovered. The configuration center plays the role of flow averaging, evening out the call traffic of the whole cluster, so the running state of each cluster can be learned from it.
According to the running state and load capacity of each cluster, the configuration center distributes tasks to the corresponding server clusters and averages the call traffic across the whole cluster.
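A minimal sketch of such load-proportional distribution follows; the names and the static capacity model are assumptions made for illustration, whereas a real configuration center would use live running-state data:

```python
def distribute_subtasks(subtasks, capacity):
    """Assign sub-tasks to server clusters, at each step favouring the cluster
    whose capacity-to-assigned ratio is highest, so that call traffic is
    averaged in proportion to each cluster's load capacity."""
    assignment = {name: [] for name in capacity}
    for task in subtasks:
        target = max(capacity, key=lambda n: capacity[n] / (len(assignment[n]) + 1))
        assignment[target].append(task)
    return assignment

# Cluster B has twice the free capacity of cluster C, so it receives
# two thirds of the 15 sub-tasks.
plan = distribute_subtasks(list(range(15)), {"B": 10, "C": 5})
print(len(plan["B"]), len(plan["C"]))  # 10 5
```

The greedy ratio rule is one simple way to realise "flow averaging"; a production system might instead subscribe to heartbeat metrics and rebalance continuously.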
S104: the second server cluster receives processing tasks distributed by the first server cluster.
S105: and fishing out the service data corresponding to the processing task from the database according to the processing task.
Each server cluster retrieves the corresponding data from the database according to the processing tasks it received; all task data corresponding to the batch tasks is stored in the database, and each cluster is responsible for retrieving its own service data. For example, if server cluster A is assigned processing task Y, it is responsible for retrieving all the service data corresponding to task Y, including the user accounts corresponding to task Y and the transaction data of those accounts.
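A minimal sketch of this per-task retrieval, using an in-memory SQLite table as a stand-in for the service database (the table and column names are assumptions):

```python
import sqlite3

def retrieve_service_data(conn, task_id):
    """Retrieve the service data (accounts and balances) that one sub-task is
    responsible for; each record carries the task ID it was tagged with."""
    return conn.execute(
        "SELECT account, balance FROM deposit_accounts WHERE task_id = ?",
        (task_id,),
    ).fetchall()

# Demo with an in-memory database standing in for the service database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deposit_accounts (account TEXT, balance REAL, task_id INTEGER)")
conn.executemany(
    "INSERT INTO deposit_accounts VALUES (?, ?, ?)",
    [("alice", 1000.0, 1), ("bob", 2500.0, 1), ("carol", 800.0, 2)],
)
rows = retrieve_service_data(conn, 1)
print(rows)
```

Because every record was tagged with a task ID when it was created, each cluster's query touches only its own slice of the table.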
S106: distributing the service data to a third server cluster in the cluster resources;
s107: the third server cluster receives the service data distributed by the second server cluster; and processing the service data.
Each cluster in the third server cluster processes the service data corresponding to the processing tasks it received. For example, after receiving the service data (user account numbers), an execution component (executor) of the third server cluster can obtain the specific user data to be processed; based on the user's end-of-day deposit balance, it calculates the interest payable at the specified rate and base, performs the accounting, and updates the current period in the user's interest-settlement task control table. The control table is generated when the user opens a deposit account: for example, users are split into 100 parts, and when each user's deposit account is created, a random task ID from 1 to 100 is generated; a task distribution (split) component distributes the 100 tasks into the application's server clusters, and different server clusters receive specific task IDs. When a user opens a deposit account, a data table for the interest-settlement task control list is generated; the control list contains settlement information, including the randomly generated task ID, and task splitting is carried out in a specific period (the end of each day, month, or quarter, etc.) based on that task ID.
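The interest step can be illustrated with a hedged sketch; the 3.6% rate and the 360-day base below are made-up conventions for the example, not values from the patent:

```python
from decimal import Decimal

def accrue_interest(balance, annual_rate, days):
    """Interest payable on an end-of-day balance at a specified rate and base
    (a 360-day year here), rounded to the cent for accounting."""
    interest = Decimal(balance) * Decimal(annual_rate) * days / Decimal(360)
    return interest.quantize(Decimal("0.01"))

# 30 days of interest on a 10,000 balance at 3.6% per year.
print(accrue_interest("10000", "0.036", 30))  # 30.00
```

Using `Decimal` rather than floats keeps the accounting exact, which matters when the result is posted back to the user's settlement record.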
The method described in the above steps S101 to S107 may be described by the following specific examples:
For example, a bank needs to perform month-end settlement: it must calculate the monthly interest corresponding to each user's balance and distribute or collect that interest. Suppose that when each user's deposit account is created, a random task ID is generated and recorded, together with the account's settlement information, in a data table for the interest-settlement task control list. A server in the cluster (which can be regarded as the first server cluster) splits the batch task along some dimension, for example by the identifiers 1-9: 300 user accounts correspond to identifier 1, 100 accounts to identifier 2, 400 accounts to identifier 3, ..., and 200 accounts to identifier 9. The user accounts are then distributed by identifier to the second server cluster (comprising available server clusters B-K), for example: server cluster B handles the 300 accounts with identifier 1, cluster C the 100 accounts with identifier 2, cluster D the 400 accounts with identifier 3, ..., and cluster K the 200 accounts with identifier 9 (this allocation need not be in order; it suffices that one identifier maps to one server cluster).
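The random task-ID tagging at account opening that this example relies on could be sketched as follows (the control-table record layout is an assumption for illustration):

```python
import random

def open_deposit_account(user, control_table):
    """At account opening, tag the account with a random task ID in 1..100 and
    record it in the settlement-task control table, so that later batch runs
    can be split on that ID."""
    record = {"user": user, "task_id": random.randint(1, 100)}
    control_table.append(record)
    return record

control_table = []
record = open_deposit_account("alice", control_table)
print(1 <= record["task_id"] <= 100)  # True
```

Because the IDs are assigned uniformly at random, each identifier's slice of accounts stays roughly equal in size, which is what makes the later split-by-identifier balanced.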
Next, server cluster B retrieves from the database the account service data for the 300 user accounts with identifier 1, cluster C the data for the 100 accounts with identifier 2, cluster D the data for the 400 accounts with identifier 3, ..., and cluster K the data for the 200 accounts with identifier 9. After the service data is retrieved, the configuration center configures the second server cluster, and the retrieved user service data is distributed to the third server cluster. During this distribution, the service data corresponding to one identifier may be assigned to a single third-tier server cluster for processing, or to several of them; for example, the service data of the 400 user accounts with identifier 3 may be assigned to clusters L, M, and N in the third server cluster for processing.
The first server cluster, second server cluster, and third server cluster described in the method steps of fig. 1 may be sets of servers within the same physical cluster. In this scheme, server sets in the same cluster are named first, second, and third server cluster only according to the different processing tasks they perform; the three are not limited to belonging to three different clusters. In a specific application, the first server cluster may be the set of servers (one or more) performing the task-splitting operation; the second server cluster may be the set performing the service-data retrieval operation; and the third server cluster may be the set performing the service-data processing operation. For example, when a batch task processing request trigger instruction is received, one or more servers in the cluster may perform the splitting; if servers X1 and X2 split the batch task, then X1 and X2 form the first server cluster. After splitting, the batch tasks are allocated to other servers in the cluster for retrieval; if the processing tasks are allocated to servers X3-X20, which perform the data retrieval, then X3-X20 form the second server cluster. After the service data is retrieved, the load capacity of each server in the cluster is obtained from the configuration center, and the service data is distributed for processing according to flow balancing; if it is distributed to servers X17-X28, then X17-X28 form the third server cluster. In this case the first server cluster (servers X1-X2), the second server cluster (servers X3-X20), and the third server cluster (servers X17-X28) all belong to cluster Y, and servers X17-X20 belong to both the second and the third server cluster because they processed different tasks on different occasions. Although in this example only the second and third server clusters overlap, it is not excluded that in some cases the same servers appear in the first, second, and third server clusters.
According to the method in fig. 1, a batch task is split according to the splitting rule, the split tasks are distributed to the corresponding server clusters, those server clusters retrieve the corresponding service data from a database, and the retrieved service data is distributed reasonably among the server clusters in the cluster resources for processing. The scheme automatically adapts to capacity expansion and reduction of the cluster, can dynamically scale as the data volume grows, and improves task processing efficiency without increasing cost.
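The three-stage flow described above (split, retrieve, process) can be sketched as follows. This is an illustrative Python sketch under assumed names; the function names, the in-memory "database" and the placeholder processing are assumptions for illustration, not the patented implementation:

```python
# Hypothetical sketch of the three-stage batch flow: split -> retrieve -> process,
# where each stage may run on a different subset of servers in the same cluster.

def split_batch(task_ids, chunk_size):
    """Stage 1 (first server cluster): split the batch into sub-tasks."""
    return [task_ids[i:i + chunk_size] for i in range(0, len(task_ids), chunk_size)]

def fetch_service_data(sub_task, database):
    """Stage 2 (second server cluster): retrieve the records for one sub-task."""
    return [database[task_id] for task_id in sub_task]

def process(service_data):
    """Stage 3 (third server cluster): process each retrieved record."""
    return [record * 2 for record in service_data]  # placeholder processing

database = {i: i for i in range(10)}  # stands in for the real service database
results = []
for sub_task in split_batch(list(database), chunk_size=4):
    results.extend(process(fetch_service_data(sub_task, database)))
```

In the real system each stage runs on separate servers and the hand-offs go over the network; the sketch only shows the data flow between the three roles.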
Based on the method of fig. 1, the present specification also provides other embodiments of the method, which are described below.
Example 2
Fig. 2 is a flow chart of a batch task splitting method provided in embodiment 2 of the present disclosure.
As shown in fig. 2, a batch task splitting method may include the following steps:
S201: The first server cluster acquires a batch task processing request trigger instruction.
S202: responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set.
S203: and distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources.
In a specific application, splitting the first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set may specifically include:
acquiring a task identifier corresponding to the first processing task;
splitting the first processing task based on the task identifier to obtain a second processing task set; any one of the second processing tasks in the second processing task set corresponds to one identifier.
The distributing the processing task in the second processing task set to the second server cluster in the cluster resource may specifically include:
acquiring running state information of a server cluster in the cluster resource set;
determining a second server cluster according to the running state information; the second server cluster is an available server cluster and is used for retrieving the service data corresponding to the processing task;
and distributing the processing tasks in the second processing task set to the second server clusters according to the task identifiers, and distributing all the processing tasks corresponding to one task identifier to one server cluster in the second server clusters.
The first server cluster obtains a batch task processing request trigger instruction, which specifically may include:
and the first server cluster acquires a batch task processing request trigger instruction in a set period.
For example: after the first server cluster obtains a batch task processing request trigger instruction, it responds to the trigger instruction by splitting a batch task X. Suppose the batch task is a company's wage settlement task, settled on the 15th of each month. The task ID identifiers corresponding to the batch task are obtained, where the task ID identifiers are identification information corresponding to the employee accounts of each company, and the batch task is split based on the task ID identifiers to obtain a second processing task set {task X1, task X2, task X3, task X4, task X5}, where tasks X1-X4 each correspond to 10 employees and task X5 corresponds to 9 employees. The running state information of the server clusters in the cluster resource set is obtained from the configuration center to determine the available server clusters, and tasks X1-X5 are distributed to the available server clusters respectively. For example, the allocation may be: task X1 is assigned to server cluster A, task X2 to server cluster B, task X3 to server cluster C, task X4 to server cluster D, and task X5 to server cluster E.
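The splitting step in this example can be illustrated with a short Python sketch (hypothetical identifiers and helper name; the group size of 10 mirrors tasks X1-X5 above):

```python
# Hypothetical illustration of the splitting step: 49 employee accounts are
# grouped into sub-tasks of at most 10, mirroring tasks X1-X5 above
# (X1-X4 cover 10 employees each, X5 covers the remaining 9).

def split_by_identifier(identifiers, group_size=10):
    """Split a list of task identifiers into fixed-size sub-tasks."""
    return [identifiers[i:i + group_size]
            for i in range(0, len(identifiers), group_size)]

employees = [f"emp-{n:03d}" for n in range(49)]  # assumed identifier format
sub_tasks = split_by_identifier(employees)
sizes = [len(t) for t in sub_tasks]
```

Each resulting sub-task can then be assigned to one available server cluster, as in the allocation to clusters A-E above.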
In the above embodiment 2, the batch task is split within a specific time according to a given splitting rule, which guarantees that the batch service is split quickly within that time; the split processing tasks are then distributed to the available server clusters, which reduces splitting time and improves splitting efficiency.
Example 3
Fig. 3 is a flow chart of a batch task data retrieving method provided in embodiment 3 of the present disclosure.
As shown in fig. 3, a batch task data retrieving method may include the following steps:
S301: The second server cluster receives the processing tasks distributed by the first server cluster; the distributing of the processing task by the first server cluster specifically comprises the following steps: acquiring a batch task processing request triggering instruction; responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; and distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources.
S302: and fishing out the service data corresponding to the processing task from the database according to the processing task.
S303: and distributing the service data to a third server cluster in the cluster resources.
In practical application, the step of retrieving the service data corresponding to the processing task from the database according to the processing task specifically includes:

identifying the task identifier carried in the processing task;

and retrieving the corresponding service data from the database according to the task identifier.
The step of retrieving the corresponding service data from the database according to the task identifier may specifically include:

determining the amount of service data retrieved each time according to the processing load capacity of the second server cluster, where the amount retrieved each time does not exceed that processing load capacity;

and sequentially retrieving the service data from the database according to the amount retrieved each time, until all the service data corresponding to the same service identifier has been retrieved.
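The batched retrieval just described, fetching no more than the cluster's load capacity per round until all data for a task identifier has been retrieved, can be sketched as follows (hypothetical names; a Python list stands in for the database query):

```python
# Sketch (assumed interfaces) of the batched retrieval step: fetch at most
# `capacity` records per round until all records for one task are retrieved.

def fetch_in_batches(total_records, capacity):
    """Return all records plus the number of retrieval rounds needed."""
    fetched = []
    offset = 0
    rounds = 0
    while offset < len(total_records):
        batch = total_records[offset:offset + capacity]  # one "fetch" from the DB
        fetched.extend(batch)
        offset += len(batch)
        rounds += 1
    return fetched, rounds

records = list(range(1000))  # e.g. 1000 employee records for one task
data, rounds = fetch_in_batches(records, capacity=100)
```

With 1000 records and a per-round capacity of 100, the data is retrieved in 10 rounds, matching the example given in embodiment 3 below.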
The distributing the service data to the third server cluster in the cluster resource may specifically include:
acquiring the load capacity of a third server cluster in the cluster resources;
determining the loadable service data volume corresponding to each third server cluster according to the load capacity;
And distributing the service data to each third server cluster according to the loadable service data volume.
Continuing the example mentioned in embodiment 2, the allocation may be: task X1 is assigned to server cluster A, task X2 to server cluster B, task X3 to server cluster C, task X4 to server cluster D, and task X5 to server cluster E. After each cluster acquires its task and task identifier, the next step is to retrieve the service data to be processed from the database: server cluster A retrieves all the service data corresponding to task X1, server cluster B retrieves all the service data corresponding to task X2, server cluster C retrieves all the service data corresponding to task X3, server cluster D retrieves all the service data corresponding to task X4, and server cluster E retrieves all the service data corresponding to task X5. The data may be retrieved at one time or in batches. For example: when server cluster A retrieves all the service data corresponding to task X1, assume task X1 corresponds to 1000 company employees; according to the load capacity of the available servers in the cluster resources, the number of service data records retrieved each time is determined to be 100, so all the service data corresponding to task X1 can be retrieved sequentially in 10 rounds.
After the service data is retrieved, it is distributed to the third server cluster evenly according to load capacity. In particular, because of differences in the load capacity and running state of each server cluster, some clusters reach load saturation after receiving only one or two pieces of service data, while others can receive many pieces at once, so the amount of service data each cluster can process may differ. The retrieved service data therefore cannot simply be distributed one by one according to the identifiers. Instead, even distribution determines the number of available clusters, and the configuration center distributes the service data reasonably according to the load capacity and running state of each cluster. For example, if the currently available server clusters are clusters F-N, part of the allocation may be: all the service data corresponding to X1 is distributed to server clusters F and G, all the service data corresponding to X2 is distributed to server clusters H and G, and all the service data corresponding to X3 is distributed to server cluster I. In these three allocations, the load capacity of server cluster F is insufficient to process all the service data corresponding to X1, so that data is split between server clusters F and G (the same holds for server cluster H), whereas server cluster I can process all the service data corresponding to X3 at once.
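The load-aware distribution described above, where one task's data may spill across several clusters when a single cluster's capacity is insufficient, can be sketched with a hypothetical greedy assignment (the cluster names and capacities are assumptions for illustration):

```python
# Hypothetical sketch of load-aware distribution: each available cluster
# advertises a capacity, and records are assigned in order so that no
# cluster exceeds its capacity; one task's data may span several clusters.

def distribute_by_capacity(records, capacities):
    """capacities: {cluster_name: max_records}. Returns {cluster: [records]}."""
    assignment = {name: [] for name in capacities}
    clusters = iter(capacities.items())
    name, cap = next(clusters)
    for record in records:
        while len(assignment[name]) >= cap:
            name, cap = next(clusters)  # current cluster is full, move on
        assignment[name].append(record)
    return assignment

# Cluster F can only hold part of one task's 30 records, so the rest
# spills over to cluster G, as in the F/G example above:
out = distribute_by_capacity(list(range(30)), {"F": 20, "G": 20})
```

A real configuration center would also weigh running state and re-query capacities per round; the sketch only shows the capacity-bounded spill-over behavior.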
In the above embodiment 3, during retrieval, targeted retrieval is performed directly from the database according to the service identifier information, and one server cluster retrieves all the service data corresponding to one service identifier. Depending on actual needs, the data can be retrieved in a single pass or cyclically in multiple rounds. This enables targeted, fast retrieval of the service data with high retrieval efficiency.
The methods in figs. 1-3 distribute tasks to all servers in the distributed cluster, use the capacity of the distributed cluster to solve the efficiency problem of an institution's periodic user interest calculation, and can also greatly reduce cost.
Based on the same idea, the embodiments of the present specification also provide devices corresponding to the methods. Fig. 4 is a schematic structural diagram of a batch task processing device corresponding to fig. 1 according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus may include:
the trigger instruction acquisition module 401 acquires a batch task processing request trigger instruction from the first server cluster;
the task splitting module 402 is used for responding to the batch task processing request triggering instruction and splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set;
a first distributing module 403, configured to distribute the processing tasks in the second processing task set to a second server cluster in the cluster resource;
a processing task receiving module 404, configured to receive, by the second server cluster, a processing task distributed by the first server cluster;
the service data retrieval module 405 is configured to retrieve, from a database, the service data corresponding to the processing task according to the processing task;
a second distributing module 406, configured to distribute the service data to a third server cluster in the cluster resource;
a task processing module 407, configured to receive, by the third server cluster, service data distributed by the second server cluster; and processing the service data.
Fig. 5 is a schematic structural diagram of a batch task splitting device corresponding to fig. 2 according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus may include:
the trigger instruction obtaining module 501 is configured to obtain a trigger instruction of a batch task processing request from a first server cluster;
the task splitting module 502 is configured to split, according to a splitting rule, a first processing task corresponding to the batch task processing request to obtain a second processing task set in response to the batch task processing request trigger instruction;
a first distributing module 503, configured to distribute the processing tasks in the second processing task set to a second server cluster in the cluster resource.
The task splitting module specifically comprises:
the task identifier acquisition unit is used for acquiring a task identifier corresponding to the first processing task;
the task splitting unit is used for splitting the first processing task based on the task identification to obtain a second processing task set; any one of the second processing tasks in the second processing task set corresponds to one identifier.
The first distribution module may specifically include:
the running state information acquisition unit is used for acquiring the running state information of the server cluster in the cluster resource set;
The second server cluster determining unit is used for determining a second server cluster according to the running state information; the second server cluster is an available server cluster and is used for retrieving the service data corresponding to the processing task;
and the first task distribution unit is used for distributing the processing tasks in the second processing task set to the second server clusters according to the task identifiers, and distributing all the processing tasks corresponding to one task identifier to one server cluster in the second server clusters.
The trigger instruction obtaining module may specifically be configured to:
the first server cluster acquires a batch task processing request trigger instruction in a set period.
The service data retrieval module may specifically include:

a task identification unit, used to identify the task identifier carried in the processing task;

and a service data retrieval unit, used to retrieve the corresponding service data from the database according to the task identifier.

The service data retrieval unit may be specifically configured to:

determine the amount of service data retrieved each time according to the processing load capacity of the second server cluster, where the amount retrieved each time does not exceed that processing load capacity;

and sequentially retrieve the service data from the database according to the amount retrieved each time, until all the service data corresponding to the same service identifier has been retrieved.
The second distributing module may specifically include:
the load capacity acquisition unit is used for acquiring the load capacity of a third server cluster in the cluster resources;
a loadable service data amount determining unit, configured to determine, according to the load capability, a loadable service data amount corresponding to each third server cluster;
and the second task distribution unit is used for distributing the service data to each third server cluster according to the loadable service data volume.
Fig. 6 is a schematic structural diagram of a batch task data retrieval device corresponding to fig. 3 according to an embodiment of the present disclosure. As shown in fig. 6, the apparatus may include:
the processing task receiving module 601 is configured to receive a processing task distributed by the first server cluster by using the second server cluster; the distributing of the processing task by the first server cluster specifically comprises the following steps: acquiring a batch task processing request triggering instruction; responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources;
The service data retrieval module 602 is configured to retrieve, from a database, the service data corresponding to the processing task according to the processing task;
and a second distributing module 603, configured to distribute the service data to a third server cluster in the cluster resources.
Based on the same idea, the embodiments of the present specification also provide equipment corresponding to the methods.
FIG. 7 is a block diagram of a batch task processing system according to an embodiment of the present disclosure.
As shown in fig. 7, a batch task processing system uses a three-layer task distribution framework: a first server cluster 701 (also called the splitting server cluster), a second server cluster 702 (also called the retrieval server cluster) and a third server cluster 703 (also called the processing server cluster). The first server cluster in the first layer includes a split component used to split the batch task; the second server cluster in the second layer includes a loader component used to retrieve the service data corresponding to the batch task; and the third server cluster in the third layer includes an executor component used to process the service data. For example, a specific flow through the three-layer distribution structure may be: the split component distributes 100 tasks to the second server cluster; different server clusters receive specific task identifiers; after the task is split across different servers in the cluster, the loader component of the task distribution framework receives a command carrying a task identifier; the user accounts corresponding to the task identifier are retrieved and distributed to the receiving component of the executor in the next stage of the three-layer distribution framework; the executor component of the task distribution framework receives the distributed tasks and obtains the specific user data to be processed; then, according to each user's end-of-day deposit balance, the interest payable is calculated at a specified rate on a specified base, and accounting is performed.
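The interest calculation performed in the final stage can be illustrated with a simple-interest sketch. The formula, rate, day-count convention and rounding below are assumptions for illustration; the specification does not fix them:

```python
# Illustrative sketch (assumed formula) of the final processing stage:
# interest is accrued on each user's end-of-day balance at a specified
# annual rate, using a 365-day year and rounding to whole cents.

def daily_interest(balance_cents, annual_rate, days=1):
    """Simple interest on an end-of-day balance, rounded to whole cents."""
    return round(balance_cents * annual_rate * days / 365)

# e.g. a balance of 100,000.00 (10,000,000 cents) at a 3.65% annual rate
accrued = daily_interest(10_000_000, 0.0365)
```

Production accounting systems typically use exact decimal arithmetic (e.g. Python's `decimal` module) and a contractual day-count convention rather than binary floats; the sketch keeps only the structure of the calculation.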
Fig. 8 is a schematic structural view of a batch task processing device corresponding to fig. 1 to 3 according to an embodiment of the present disclosure. As shown in fig. 8, the device 800 may include:
at least one processor 810; the method comprises the steps of,
a memory 830 communicatively coupled to the at least one processor; wherein,
the memory 830 stores instructions 820 executable by the at least one processor 810, so that the at least one processor 810 can perform the methods described in the corresponding embodiments.
Corresponding to fig. 1, the instructions may enable the at least one processor 810 to:
the method comprises the steps that a first server cluster obtains a batch task processing request trigger instruction;
responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set;
distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources;
the second server cluster receives the processing tasks distributed by the first server cluster;
according to the processing task, retrieving from a database the service data corresponding to the processing task;
distributing the service data to a third server cluster in the cluster resources;
The third server cluster receives the service data distributed by the second server cluster; and processing the service data.
Corresponding to fig. 2, the instructions may enable the at least one processor 810 to:
the method comprises the steps that a first server cluster obtains a batch task processing request trigger instruction;
responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set;
and distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources.
Corresponding to fig. 3, the instructions may enable the at least one processor 810 to:
the second server cluster receives the processing tasks distributed by the first server cluster; the distributing of the processing task by the first server cluster specifically comprises the following steps: acquiring a batch task processing request triggering instruction; responding to the batch task processing request triggering instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; distributing the processing tasks in the second processing task set to a second server cluster in the cluster resources;
according to the processing task, retrieving from a database the service data corresponding to the processing task;
and distributing the service data to a third server cluster in the cluster resources.
Based on the same idea, the embodiments of the present specification also provide a system corresponding to the method described above.
in the 90 s of the 20 th century, improvements to one technology could clearly be distinguished as improvements in hardware (e.g., improvements to circuit structures such as diodes, transistors, switches, etc.) or software (improvements to the process flow). However, with the development of technology, many improvements of the current method flows can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain corresponding hardware circuit structures by programming improved method flows into hardware circuits. Therefore, an improvement of a method flow cannot be said to be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable gate array, FPGA)) is an integrated circuit whose logic function is determined by the user programming the device. A designer programs to "integrate" a digital system onto a PLD without requiring the chip manufacturer to design and fabricate application-specific integrated circuit chips. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented by using "logic compiler" software, which is similar to the software compiler used in program development and writing, and the original code before the compiling is also written in a specific programming language, which is called hardware description language (Hardware Description Language, HDL), but not just one of the hdds, but a plurality of kinds, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), lava, lola, myHDL, PALASM, RHDL (Ruby Hardware Description Language), etc., VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently most commonly used. 
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320; the memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer readable program code, the method steps can be logically programmed so that the controller achieves the same functionality in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller can therefore be regarded as a hardware component, and the means included in it for realizing various functions can also be regarded as structures within the hardware component, or even as both software modules implementing the method and structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit it. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, or improvement that comes within the spirit and principles of the application is intended to be included within the scope of its claims.

Claims (21)

1. A batch task splitting method, comprising:
a first server cluster acquiring a batch task processing request trigger instruction;
in response to the batch task processing request trigger instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; and
distributing the second processing tasks in the second processing task set to a second server cluster in the cluster resources, the second server cluster being configured to retrieve service data corresponding to the second processing task from a database according to the task identifier of the second processing task, and to distribute the service data to a third server cluster in the cluster resources for processing;
wherein splitting the first processing task corresponding to the batch task processing request according to the splitting rule to obtain the second processing task set specifically comprises:
generating, from first processing tasks having the same task identifier, a second processing task corresponding to that task identifier, thereby obtaining the second processing task set.
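Read outside the claim language, the grouping step recited in claim 1 amounts to bucketing first processing tasks by their task identifier. A minimal sketch follows; the dictionary keys and field names (`task_id`, `subtasks`, `payload`) are illustrative assumptions, not terms from the specification:

```python
from collections import defaultdict

def split_by_task_id(first_tasks):
    """Group first processing tasks that share a task identifier into one
    second processing task per identifier, as claim 1's splitting rule recites."""
    groups = defaultdict(list)
    for task in first_tasks:
        groups[task["task_id"]].append(task)
    # One second processing task per distinct task identifier.
    return [{"task_id": tid, "subtasks": subs} for tid, subs in groups.items()]
```

The resulting set has exactly one entry per distinct identifier, which is what lets the later distribution step route all work for one identifier to one cluster.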
2. The method of claim 1, wherein splitting the first processing task corresponding to the batch task processing request according to the splitting rule to obtain the second processing task set specifically comprises:
acquiring the task identifier corresponding to the first processing task; and
splitting the first processing task based on the task identifier to obtain the second processing task set, wherein each second processing task in the second processing task set corresponds to one task identifier.
3. The method of claim 2, wherein distributing the second processing tasks in the second processing task set to the second server cluster in the cluster resources specifically comprises:
acquiring running state information of the server clusters in a cluster resource set;
determining the second server cluster according to the running state information, the second server cluster being an available server cluster configured to retrieve the service data corresponding to the second processing task; and
distributing the second processing tasks in the second processing task set to the second server cluster according to the task identifiers, wherein the second processing tasks corresponding to one task identifier are distributed to one server cluster of the second server cluster.
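The selection-and-routing logic of claim 3 can be sketched as: filter clusters by their reported running state, then pin each task identifier to a single surviving cluster. This is an illustrative sketch only; the state strings and round-robin choice are assumptions, and the specification does not prescribe a particular assignment policy:

```python
def distribute_to_clusters(second_tasks, cluster_states):
    """Select available second-tier clusters from their running state, then
    map every task identifier to exactly one of them (round-robin here)."""
    available = sorted(name for name, state in cluster_states.items()
                       if state == "available")
    if not available:
        raise RuntimeError("no available server cluster")
    assignment = {}
    for i, task in enumerate(second_tasks):
        # One task identifier -> one server cluster, as claim 3 requires.
        assignment.setdefault(task["task_id"], available[i % len(available)])
    return assignment
```

Keeping one identifier on one cluster avoids two clusters retrieving the same identifier's rows concurrently.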
4. The method of claim 1, wherein the first server cluster acquiring the batch task processing request trigger instruction specifically comprises:
the first server cluster acquiring the batch task processing request trigger instruction at a set period.
5. A batch task data retrieval method, comprising:
a second server cluster receiving a second processing task distributed by a first server cluster, wherein the distributing of the second processing task by the first server cluster specifically comprises: acquiring a batch task processing request trigger instruction; in response to the batch task processing request trigger instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; and distributing the second processing tasks in the second processing task set to the second server cluster in the cluster resources;
retrieving, from a database, service data corresponding to the second processing task according to the task identifier of the second processing task; and
distributing the service data to a third server cluster in the cluster resources;
wherein splitting the first processing task corresponding to the batch task processing request according to the splitting rule to obtain the second processing task set specifically comprises:
generating, from first processing tasks having the same task identifier, a second processing task corresponding to that task identifier, thereby obtaining the second processing task set.
6. The method of claim 5, wherein retrieving, from the database, the service data corresponding to the second processing task specifically comprises:
identifying the task identifier carried in the second processing task; and
retrieving the corresponding service data from the database according to the task identifier.
7. The method of claim 6, wherein retrieving the corresponding service data from the database according to the task identifier specifically comprises:
determining the amount of service data retrieved each time according to the processing load capacity of the second server cluster, the amount of service data retrieved each time not exceeding the processing load capacity of the second server cluster; and
sequentially retrieving the service data from the database, in the determined amount each time, until all the service data corresponding to the same service identifier has been retrieved.
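The bounded, repeated retrieval of claim 7 is essentially offset-based paging with the page size capped at the cluster's processing load capacity. A sketch under that reading; `fetch_page` stands in for whatever database accessor the implementation actually uses and is purely hypothetical:

```python
def retrieve_in_batches(fetch_page, task_id, load_limit):
    """Page through the service data for one identifier, never asking the
    database for more rows per round trip than the cluster can process."""
    offset, rows = 0, []
    while True:
        page = fetch_page(task_id, offset, load_limit)
        rows.extend(page)
        if len(page) < load_limit:      # short page: data exhausted
            return rows
        offset += load_limit
```

The loop terminates on the first page shorter than the limit, which is the usual signal that all rows for the identifier have been retrieved.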
8. The method of claim 5, wherein distributing the service data to the third server cluster in the cluster resources specifically comprises:
acquiring the load capacity of the third server clusters in the cluster resources;
determining, according to the load capacity, the loadable service data volume corresponding to each third server cluster; and
distributing the service data to each third server cluster according to the loadable service data volume.
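One way to realize the capacity-bounded distribution of claim 8 is to hand each third-tier cluster at most its loadable volume of records. The fill order and the error on overflow are assumptions of this sketch, not requirements of the claim:

```python
def allocate_by_capacity(records, loadable):
    """Give each third-tier cluster at most its loadable service data
    volume, filling clusters in declared order."""
    allocation = {name: [] for name in loadable}
    remaining = iter(records)
    for name, cap in loadable.items():
        for _ in range(cap):
            row = next(remaining, None)   # sentinel: assumes no None records
            if row is None:
                return allocation         # all records placed
            allocation[name].append(row)
    if next(remaining, None) is not None:
        raise RuntimeError("records exceed total loadable volume")
    return allocation
```

An implementation could equally split proportionally to capacity; the claim only fixes the per-cluster upper bound.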
9. A batch task processing method, comprising:
a first server cluster acquiring a batch task processing request trigger instruction;
in response to the batch task processing request trigger instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set;
distributing the second processing tasks in the second processing task set to a second server cluster in the cluster resources;
the second server cluster receiving the second processing tasks distributed by the first server cluster;
retrieving, from a database, service data corresponding to the second processing task according to the task identifier of the second processing task;
distributing the service data to a third server cluster in the cluster resources; and
the third server cluster receiving the service data distributed by the second server cluster and processing the service data;
wherein splitting the first processing task corresponding to the batch task processing request according to the splitting rule to obtain the second processing task set specifically comprises:
generating, from first processing tasks having the same task identifier, a second processing task corresponding to that task identifier, thereby obtaining the second processing task set.
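Claim 9 chains the three tiers end to end. The following toy, single-process walk-through shows the data flow only; the real system runs each tier on a separate server cluster, and `fetch` and `process` are hypothetical stand-ins for the database retrieval and the third-tier processing:

```python
def run_batch(first_tasks, clusters, fetch, process):
    """Single-process walk-through of claim 9's three tiers: split by
    task identifier, retrieve each identifier's data, then process it."""
    task_ids = sorted({t["task_id"] for t in first_tasks})      # tier 1: split
    results = {}
    for i, tid in enumerate(task_ids):
        cluster = clusters[i % len(clusters)]                   # tier 2 target
        data = fetch(tid)                                       # retrieve from the DB
        results[tid] = [process(cluster, row) for row in data]  # tier 3: process
    return results
```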
10. A batch task splitting apparatus, comprising:
a trigger instruction acquisition module, configured for a first server cluster to acquire a batch task processing request trigger instruction;
a task splitting module, configured to split, in response to the batch task processing request trigger instruction, a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; and
a first distribution module, configured to distribute the second processing tasks in the second processing task set to a second server cluster in the cluster resources, the second server cluster being configured to retrieve service data corresponding to the second processing task from a database according to the task identifier of the second processing task, and to distribute the service data to a third server cluster in the cluster resources for processing;
wherein splitting the first processing task corresponding to the batch task processing request according to the splitting rule to obtain the second processing task set specifically comprises:
generating, from first processing tasks having the same task identifier, a second processing task corresponding to that task identifier, thereby obtaining the second processing task set.
11. The apparatus of claim 10, wherein the task splitting module specifically comprises:
a task identifier acquisition unit, configured to acquire the task identifier corresponding to the first processing task; and
a task splitting unit, configured to split the first processing task based on the task identifier to obtain the second processing task set, wherein each second processing task in the second processing task set corresponds to one task identifier.
12. The apparatus of claim 11, wherein the first distribution module specifically comprises:
a running state information acquisition unit, configured to acquire running state information of the server clusters in a cluster resource set;
a second server cluster determination unit, configured to determine the second server cluster according to the running state information, the second server cluster being an available server cluster configured to retrieve the service data corresponding to the second processing task; and
a first task distribution unit, configured to distribute the second processing tasks in the second processing task set to the second server cluster according to the task identifiers, wherein the second processing tasks corresponding to one task identifier are distributed to one server cluster of the second server cluster.
13. The apparatus of claim 10, wherein the trigger instruction acquisition module is specifically configured for:
the first server cluster to acquire the batch task processing request trigger instruction at a set period.
14. A batch task data retrieval apparatus, comprising:
a processing task receiving module, configured for a second server cluster to receive a second processing task distributed by a first server cluster, wherein the distributing of the second processing task by the first server cluster specifically comprises: acquiring a batch task processing request trigger instruction; in response to the batch task processing request trigger instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; and distributing the second processing tasks in the second processing task set to the second server cluster in the cluster resources;
a service data retrieval module, configured to retrieve, from a database, service data corresponding to the second processing task according to the task identifier of the second processing task; and
a second distribution module, configured to distribute the service data to a third server cluster in the cluster resources;
wherein splitting the first processing task corresponding to the batch task processing request according to the splitting rule to obtain the second processing task set specifically comprises:
generating, from first processing tasks having the same task identifier, a second processing task corresponding to that task identifier, thereby obtaining the second processing task set.
15. The apparatus of claim 14, wherein the service data retrieval module specifically comprises:
a task identifier identification unit, configured to identify the task identifier carried in the second processing task; and
a service data retrieval unit, configured to retrieve the corresponding service data from the database according to the task identifier.
16. The apparatus of claim 15, wherein the service data retrieval unit is specifically configured to:
determine the amount of service data retrieved each time according to the processing load capacity of the second server cluster, the amount of service data retrieved each time not exceeding the processing load capacity of the second server cluster; and
sequentially retrieve the service data from the database, in the determined amount each time, until all the service data corresponding to the same service identifier has been retrieved.
17. The apparatus of claim 14, wherein the second distribution module specifically comprises:
a load capacity acquisition unit, configured to acquire the load capacity of the third server clusters in the cluster resources;
a loadable service data volume determination unit, configured to determine, according to the load capacity, the loadable service data volume corresponding to each third server cluster; and
a second task distribution unit, configured to distribute the service data to each third server cluster according to the loadable service data volume.
18. A batch task processing apparatus, comprising:
a trigger instruction acquisition module, configured for a first server cluster to acquire a batch task processing request trigger instruction;
a task splitting module, configured to split, in response to the batch task processing request trigger instruction, a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set;
a first distribution module, configured to distribute the second processing tasks in the second processing task set to a second server cluster in the cluster resources;
a processing task receiving module, configured for the second server cluster to receive the second processing task distributed by the first server cluster;
a service data retrieval module, configured to retrieve, from a database, service data corresponding to the second processing task according to the task identifier of the second processing task;
a second distribution module, configured to distribute the service data to a third server cluster in the cluster resources; and
a task processing module, configured for the third server cluster to receive the service data distributed by the second server cluster and to process the service data;
wherein splitting the first processing task corresponding to the batch task processing request according to the splitting rule to obtain the second processing task set specifically comprises:
generating, from first processing tasks having the same task identifier, a second processing task corresponding to that task identifier, thereby obtaining the second processing task set.
19. A batch task splitting device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquire, at a first server cluster, a batch task processing request trigger instruction;
in response to the batch task processing request trigger instruction, split a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; and
distribute the second processing tasks in the second processing task set to a second server cluster in the cluster resources, the second server cluster being configured to retrieve service data corresponding to the second processing task from a database according to the task identifier of the second processing task, and to distribute the service data to a third server cluster in the cluster resources for processing;
wherein splitting the first processing task corresponding to the batch task processing request according to the splitting rule to obtain the second processing task set specifically comprises:
generating, from first processing tasks having the same task identifier, a second processing task corresponding to that task identifier, thereby obtaining the second processing task set.
20. A batch task data retrieval device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
receive, at a second server cluster, a second processing task distributed by a first server cluster, wherein the distributing of the second processing task by the first server cluster specifically comprises: acquiring a batch task processing request trigger instruction; in response to the batch task processing request trigger instruction, splitting a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set; and distributing the second processing tasks in the second processing task set to the second server cluster in the cluster resources;
retrieve, from a database, service data corresponding to the second processing task according to the task identifier of the second processing task; and
distribute the service data to a third server cluster in the cluster resources;
wherein splitting the first processing task corresponding to the batch task processing request according to the splitting rule to obtain the second processing task set specifically comprises:
generating, from first processing tasks having the same task identifier, a second processing task corresponding to that task identifier, thereby obtaining the second processing task set.
21. A batch task processing device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquire, at a first server cluster, a batch task processing request trigger instruction;
in response to the batch task processing request trigger instruction, split a first processing task corresponding to the batch task processing request according to a splitting rule to obtain a second processing task set;
distribute the second processing tasks in the second processing task set to a second server cluster in the cluster resources;
receive, at the second server cluster, the second processing tasks distributed by the first server cluster;
retrieve, from a database, service data corresponding to the second processing task according to the task identifier of the second processing task;
distribute the service data to a third server cluster in the cluster resources; and
receive, at the third server cluster, the service data distributed by the second server cluster and process the service data;
wherein splitting the first processing task corresponding to the batch task processing request according to the splitting rule to obtain the second processing task set specifically comprises:
generating, from first processing tasks having the same task identifier, a second processing task corresponding to that task identifier, thereby obtaining the second processing task set.
CN201910043280.7A 2019-01-17 2019-01-17 Batch task processing method, device and equipment Active CN110008018B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910043280.7A CN110008018B (en) 2019-01-17 2019-01-17 Batch task processing method, device and equipment
CN202311051119.7A CN117076453A (en) 2019-01-17 2019-01-17 Batch task processing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910043280.7A CN110008018B (en) 2019-01-17 2019-01-17 Batch task processing method, device and equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311051119.7A Division CN117076453A (en) 2019-01-17 2019-01-17 Batch task processing method, device and equipment

Publications (2)

Publication Number Publication Date
CN110008018A CN110008018A (en) 2019-07-12
CN110008018B true CN110008018B (en) 2023-08-29

Family

ID=67165450

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202311051119.7A Pending CN117076453A (en) 2019-01-17 2019-01-17 Batch task processing method, device and equipment
CN201910043280.7A Active CN110008018B (en) 2019-01-17 2019-01-17 Batch task processing method, device and equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202311051119.7A Pending CN117076453A (en) 2019-01-17 2019-01-17 Batch task processing method, device and equipment

Country Status (1)

Country Link
CN (2) CN117076453A (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110795218B (en) * 2019-10-11 2022-03-01 口碑(上海)信息技术有限公司 Task scheduling system and method based on unitization
CN111176844A (en) * 2019-12-24 2020-05-19 天阳宏业科技股份有限公司 Real-time online batch optimization method and system for financial data
CN111367654A (en) * 2020-02-12 2020-07-03 吉利汽车研究院(宁波)有限公司 Data processing method and device based on heterogeneous cloud platform
CN111459640B (en) * 2020-04-03 2023-09-26 中国工商银行股份有限公司 Cross-platform batch job scheduling method and system
CN111563084A (en) * 2020-05-06 2020-08-21 中国银行股份有限公司 Batch fee deduction data processing method and device
CN111882427A (en) * 2020-07-27 2020-11-03 中国建设银行股份有限公司 Loan system data processing method, loan system data processing device, loan system data processing equipment and loan system data processing storage medium
CN114157661B (en) * 2020-09-07 2024-01-16 北京奇艺世纪科技有限公司 Data request method, data processing method, related device, equipment and system
CN113407429A (en) * 2021-06-23 2021-09-17 中国建设银行股份有限公司 Task processing method and device
CN113992684B (en) * 2021-10-26 2022-10-28 中电金信软件有限公司 Method, device, processing node, storage medium and system for sending data
CN117539643B (en) * 2024-01-09 2024-03-29 上海晨钦信息科技服务有限公司 Credit card sorting and counting platform, batch task processing method and server

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359295A (en) * 2007-08-01 2009-02-04 阿里巴巴集团控股有限公司 Batch task scheduling and allocating method and system
CN104981781A (en) * 2013-01-29 2015-10-14 Stg交互公司 Distributed computing architecture
CN105740063A (en) * 2014-12-08 2016-07-06 杭州华为数字技术有限公司 Data processing method and apparatus
CN106330987A (en) * 2015-06-15 2017-01-11 交通银行股份有限公司 Dynamic load balancing method
CN107301178A (en) * 2016-04-14 2017-10-27 阿里巴巴集团控股有限公司 Data query processing method, apparatus and system
WO2018149221A1 (en) * 2017-02-20 2018-08-23 京信通信系统(中国)有限公司 Device management method and network management system
CN108829790A (en) * 2018-06-01 2018-11-16 阿里巴巴集团控股有限公司 A kind of data batch processing method, apparatus and system

Also Published As

Publication number Publication date
CN117076453A (en) 2023-11-17
CN110008018A (en) 2019-07-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201014

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20201014

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant