Detailed Description
To enable those skilled in the art to better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments derived by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of a data processing method between clearing systems according to an embodiment of the present application. To solve the problem in the prior art that the data clearing process performed between clearing systems consumes considerable computing resources and places heavy pressure on the clearing systems, the data processing method includes the following steps:
S101: the first clearing system records data to be cleared with at least one second clearing system.
The present application takes as an example the application scenario in which the first clearing system is the clearing system of a third-party payment platform and the second clearing system is the clearing system of a banking institution. For the clearing system of the third-party payment platform, after a user performs a funds transfer (for example, transfers or pays to a fund account of a certain bank) through an account registered on the third-party payment platform, detail data (i.e., data to be cleared) corresponding to each funds-transfer operation can be generated and recorded. Generally, each piece of detail data corresponding to a funds-transfer operation may include information such as the amount involved, the transferee, and the time of occurrence. In practice, the third-party payment platform may have fund-clearing requirements with the clearing systems of multiple banking institutions, and the data to be cleared corresponding to each banking institution can be classified and stored according to the transferee information contained in the detail data. For example, by determining whether the transferee information included in a piece of detail data relates to "bank A", the detail data relating to "bank A" can be recorded in a separate detail table.
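The classification described above can be illustrated with a minimal Python sketch. All names here (`DetailRecord`, `detail_tables`, `record_detail`, the `"bank_a"` key) are hypothetical and chosen for illustration only; the embodiment does not prescribe a concrete data model.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DetailRecord:
    amount: int          # amount involved in the funds transfer
    counterparty: str    # transferee bank, e.g. "bank_a"
    occurred_at: str     # time the transfer occurred, "YYYY-MM-DD HH:MM:SS"

# One detail table per counterparty bank, keyed by bank name
detail_tables = defaultdict(list)

def record_detail(record: DetailRecord) -> None:
    """Classify a detail record into the detail table of its counterparty bank."""
    detail_tables[record.counterparty].append(record)

record_detail(DetailRecord(100, "bank_a", "2015-10-01 20:05:00"))
record_detail(DetailRecord(50, "bank_b", "2015-10-01 20:06:00"))
```

Keying the store by counterparty mirrors the "separate detail table per bank" arrangement, so each bank's clearing cycle can later be served without scanning unrelated records.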
In the above scenario, in the process of clearing fund data between the third-party payment platform and the banks, the third-party payment platform needs to clear and summarize the fund data according to the clearing cycle of each bank, and provide the corresponding detail data to each bank's clearing system for reconciliation.
S102: when receiving a trigger instruction of a fishing task at intervals of a preset duration, the first clearing system fishes out, from the recorded data to be cleared between the first clearing system and the second clearing system, the data to be cleared that occurred within the fishing time period corresponding to the current fishing task; the preset duration is less than the duration of the clearing cycle between the first clearing system and the second clearing system.
Referring to fig. 2, assume that fig. 2 expresses the relationship between the time points in the clearing process between the third-party payment platform and bank A. Assume that the clearing cycle of bank A is 1 hour long and that the clearing time point corresponding to the clearing task of bank A is 21:10. Generally, to ensure that the detail data to be cleared corresponding to a clearing task has been completely recorded in the database when the task arrives (for example, due to network delay and the like, detail data that occurred at 10:00 may not be recorded until 10:10), the clearing task needs to be given a certain delay. For example, in fig. 2, the time point of the clearing task of bank A corresponds to time t5 (e.g., 21:10) each day, while the clearing time period required by the clearing task may run from time t0 (e.g., 20:00) to time t3 (e.g., 21:00); setting a delay of 10 minutes (the period between t3 and t5) ensures that the detail data to be cleared is completely recorded in the database. The duration of the clearing cycle is equal to the length of the clearing time period.
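The relationship between t3, the delay, and the trigger time t5 can be sketched in a few lines of Python; the concrete date and variable names are illustrative only:

```python
from datetime import datetime, timedelta

# t3: end of the clearing time period that the clearing task must cover
period_end = datetime(2015, 10, 1, 21, 0)
# delay so that late-arriving detail records reach the database first
delay = timedelta(minutes=10)
# t5: the time point at which the clearing task is actually triggered
trigger_at = period_end + delay  # 21:10
```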
In fig. 2 above, under the prior art, when the clearing time point t5 (e.g., 21:10) arrives, the clearing system of the third-party payment platform needs to fish out the fund detail data between the third-party payment platform and bank A that occurred between time t0 (e.g., 20:00) and time t3 (e.g., 21:00). Generally, for the third-party payment platform, as business grows, the volume of fund detail data occurring between time t0 and time t3 is large, and fishing out the fund data and performing the summation operation in one pass (the summation operation sums the amounts involved) often consumes considerable computing resources. Moreover, the third-party payment platform has data clearing requirements with multiple other clearing systems (such as the clearing systems of other banking institutions), so if the clearing time points of the clearing systems of multiple banks coincide (for example, all fall at time t5), the resource consumption of the clearing system of the third-party payment platform multiplies in a short time after time t5 arrives, which places great pressure on the clearing system and, in severe cases, may even cause the system to crash.
To solve this problem, the embodiment of the present application sets fishing tasks at intervals of a preset duration, where the preset duration is less than the duration of the clearing cycle. For example, if the clearing cycle is 1 hour long, the preset duration may be set to 30 minutes, so that two fishing tasks are set before the clearing task at time t5 arrives. The start times of the two fishing tasks are time t2 (e.g., 20:32) and time t4 (e.g., 21:02), respectively; the fishing task at time t2 may correspond to the fishing time period from time t0 (e.g., 20:00) to time t1 (e.g., 20:30), and the fishing task at time t4 may correspond to the fishing time period from time t1 (e.g., 20:30) to time t3 (e.g., 21:00). That is, through the two fishing tasks, the fund detail data occurring between time t0 (e.g., 20:00) and time t3 (e.g., 21:00) is fished out in two batches, so that the fund detail data occurring between time t0 and time t1 and the fund detail data occurring between time t1 and time t3 are obtained respectively.
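The splitting of one clearing time period into consecutive fishing time periods can be sketched as follows; the function name and the concrete dates are illustrative assumptions, not part of the embodiment:

```python
from datetime import datetime, timedelta

def fishing_windows(period_start: datetime, period_end: datetime,
                    preset_duration: timedelta):
    """Split one clearing time period into consecutive fishing time periods,
    each at most `preset_duration` long."""
    windows = []
    start = period_start
    while start < period_end:
        end = min(start + preset_duration, period_end)
        windows.append((start, end))
        start = end
    return windows

t0 = datetime(2015, 10, 1, 20, 0)   # start of the clearing time period
t3 = datetime(2015, 10, 1, 21, 0)   # end of the clearing time period
wins = fishing_windows(t0, t3, timedelta(minutes=30))
# two windows: 20:00-20:30 and 20:30-21:00
```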
It should be noted that, in the embodiment of the present application, the clearing time period may be an integral multiple of the fishing time period, and the number of fishing time periods into which the clearing time period is divided is not limited. For example, the clearing time period may be 60 minutes and the fishing time period 10 minutes.
S103: the first clearing system performs first predetermined processing on the fished-out data to be cleared, and stores the pre-summary result data obtained by the first predetermined processing in correspondence with the fishing time period.
In this embodiment of the application, the first predetermined processing may include classifying and sorting the data to be cleared, and performing a summation operation (or a summary operation) on the data to be cleared.
In the above example, by summing the amounts of the fished-out detail data, the pre-summary result data (amounts) are obtained as follows:

Batch number | Service start time | Service end time | Amount | Number of transactions
15100130 | 151001 20:00:00 | 151001 20:30:00 | 1000 | 33
15100131 | 151001 20:30:00 | 151001 21:00:00 | 60 | 2
15100132 | 151001 21:00:00 | 151001 21:30:00 | 150 | 15
15100133 | 151001 21:30:00 | 151001 22:00:00 | 460 | 6
It can be seen that, before the clearing task required by the second clearing system (e.g., the clearing system of a bank) arrives, the clearing system of the third-party payment platform can regularly fish out, at a smaller time granularity (smaller than the timing interval of the clearing task), the detail data occurring within the corresponding fishing time period, and perform the summation operation in advance to obtain pre-summary result data such as that in the above table.
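The first predetermined processing for one fishing time period can be sketched as a small aggregation routine. The record layout, the `HH:MM` string timestamps, and the function name are illustrative assumptions; the sample data reproduces the row for batch 15100131 in the table above.

```python
def pre_summarize(details, window_start, window_end):
    """First predetermined processing for one fishing time period: keep the
    detail records that occurred inside the window, then sum their amounts."""
    in_window = [d for d in details
                 if window_start <= d["time"] < window_end]
    return {"start": window_start, "end": window_end,
            "amount": sum(d["amount"] for d in in_window),
            "count": len(in_window)}

# Two detail records inside the 20:30-21:00 window, one outside it
details = [{"time": "20:40", "amount": 50},
           {"time": "20:55", "amount": 10},
           {"time": "21:05", "amount": 150}]
row = pre_summarize(details, "20:30", "21:00")
# matches batch 15100131 in the table: amount 60 over 2 transactions
```

Comparing `HH:MM` strings works here because, in a fixed-width 24-hour format, lexicographic order coincides with chronological order.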
S104: when receiving a trigger instruction of a clearing task at intervals of the clearing cycle duration, the first clearing system determines each fishing time period corresponding to the clearing time period of the clearing task, and acquires, from the stored pre-summary result data, the pre-summary result data corresponding to each determined fishing time period.
Continuing with the above example, assume that the interval of the clearing tasks of the second clearing system is 1 hour (i.e., the duration of the clearing cycle). Before time t5 arrives, the first clearing system has already fished out, in batches through the fishing tasks, the detail data occurring between time t0 and time t3, and has performed the summation operation accordingly to obtain the pre-summary result data. Therefore, when time t5 arrives, only the pre-summary result data with batch numbers "15100130" and "15100131" need to be acquired from the pre-summary result data in the table: "1000" and "60".
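Steps S104 and S105 together can be sketched as a selection over the stored pre-summary rows followed by a final summation; the row layout and function name are hypothetical, and the data mirrors the table above.

```python
# Stored pre-summary rows (batch number, fishing window, amount)
pre_summaries = [
    {"batch": "15100130", "start": "20:00", "end": "20:30", "amount": 1000},
    {"batch": "15100131", "start": "20:30", "end": "21:00", "amount": 60},
    {"batch": "15100132", "start": "21:00", "end": "21:30", "amount": 150},
]

def select_for_clearing(rows, period_start, period_end):
    """Keep only the pre-summary rows whose fishing window lies inside the
    clearing time period of the current clearing task."""
    return [r for r in rows
            if r["start"] >= period_start and r["end"] <= period_end]

chosen = select_for_clearing(pre_summaries, "20:00", "21:00")
total = sum(r["amount"] for r in chosen)  # 1000 + 60 = 1060
```

Only the (much smaller) pre-summary rows are touched at clearing time; the raw detail data is never re-scanned.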
S105: the first clearing system performs second predetermined processing on the acquired pre-summary result data corresponding to each fishing time period, to obtain the summary result data corresponding to the clearing task.
In this embodiment, the second predetermined processing may include classifying and sorting the pre-summary result data, and performing a summation operation (or a summary operation) on the pre-summary result data.
In the above example, after the pre-summary result data "1000" and "60" for batches "15100130" and "15100131" are obtained, the summary result data (amount) corresponding to the clearing task can be obtained by summation: 1060. In this process, there is no need to fish out the detail data generated between time t0 and time t3 in one pass.
In addition, in the embodiment of the present application, after the step S105, the method further includes the following steps:
and generating a clearing instruction carrying the summary result data and sending the clearing instruction.
The generated clearing instruction may be sent to a second clearing system (e.g., the clearing system of a banking institution), so that the second clearing system completes the clearing of the fund data. Of course, in other embodiments, the generated clearing instruction may also be sent to other types of clearing systems (e.g., a bank clearing system, to effect clearing of funds between large banking institutions). In addition, the clearing of the fund data may also be completed manually; in that case no clearing instruction needs to be generated, and the summary result data obtained by the clearing system is provided to clearing personnel for clearing. The present application does not limit the manner of clearing the above fund data.
It should be mentioned that the clearing instruction sent by the first clearing system may also carry the data to be cleared that occurred within the clearing time period, so that the clearing system performing the clearing of the fund data can check the data to verify the accuracy of the summary result data.
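A clearing instruction carrying the summary result data, with the detail records optionally attached for reconciliation, might be assembled as below. The message fields and function name are hypothetical; the embodiment does not fix a wire format.

```python
def build_clearing_instruction(summary_amount, count, period, details=None):
    """Assemble a clearing instruction carrying the summary result data.
    The detail records are optionally attached so the receiving clearing
    system can reconcile them against the summary."""
    instruction = {"amount": summary_amount, "count": count, "period": period}
    if details is not None:
        instruction["details"] = details
    return instruction

# Summary for the 20:00-21:00 clearing time period: 1060 over 35 transactions
msg = build_clearing_instruction(1060, 35, ("20:00", "21:00"))
```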
By setting fishing tasks at intervals of a preset duration, where the preset duration is less than the clearing cycle duration between the first clearing system and the second clearing system, the data to be cleared between the first clearing system and the second clearing system can be fished out in batches in advance through the fishing tasks before the clearing task, whose interval is the clearing cycle duration, arrives, and the pre-summary result data can be obtained and stored through the pre-summary operation. Therefore, when the clearing task arrives, only the pre-summary result data needs to be acquired for the summation operation; the amount of data that must be fished out when the clearing task arrives is reduced, and the computing resources consumed in the summation process are relieved. Moreover, when the first clearing system has clearing requirements with at least one second clearing system, the situation in which the first clearing system consumes a large amount of computing resources because clearing work with multiple second clearing systems must be completed at the same time point can be avoided, thereby relieving the pressure on the first clearing system.
It is worth mentioning that, in the embodiment of the present application, the clearing systems may involve other dimensions besides the time dimension, for example, the dimension of service (or product) type. For the same clearing system, the clearing cycle duration and the trigger time point of the clearing task may differ across service types. For example, the clearing cycle of the X service of bank B is 24 hours long, the clearing cycle of the Y service of bank B is 6 hours long, and the clearing cycle of the Z service of bank B is 12 hours long. Therefore, the embodiment of the present application may set a fishing task corresponding to each service type. In the embodiment of the present application, the step S102 specifically includes the following steps:
When receiving a trigger instruction of a fishing task at intervals of a preset duration, the first clearing system fishes out, from the recorded data to be cleared between the first clearing system and the second clearing system and according to preset service types, the data to be cleared (i.e., fund detail data) that corresponds to each preset service type and occurred within the fishing time period corresponding to the current fishing task. For example, if the clearing cycle of the X service of bank B is 24 hours long, the clearing system of the third-party payment platform may set a fishing time period of 24 / 6 = 4 hours for the X service of bank B; the clearing cycle of the Y service of bank B is 6 hours long, and the fishing time period set for the Y service of bank B may be 6 / 3 = 2 hours; the clearing cycle of the Z service of bank B is 12 hours long, and the fishing time period set for the Z service of bank B may be 12 / 4 = 3 hours. That is, for different services, different lengths of the fishing time period and different intervals of the fishing tasks may be set.
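The per-service arithmetic above can be sketched as a quotient of the service's clearing cycle and a per-service divisor. The divisors 6, 3, and 4 come from the examples in this section; the table names and function name are illustrative assumptions.

```python
from datetime import timedelta

# Per-service divisors: fishing time period = clearing cycle / divisor
SERVICE_DIVISORS = {"X": 6, "Y": 3, "Z": 4}
CLEARING_CYCLES = {"X": timedelta(hours=24),
                   "Y": timedelta(hours=6),
                   "Z": timedelta(hours=12)}

def fishing_period(service: str) -> timedelta:
    """The fishing time period of a service is the quotient of its clearing
    cycle and its divisor, e.g. 24 h / 6 = 4 h for the X service."""
    return CLEARING_CYCLES[service] / SERVICE_DIVISORS[service]
```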
Correspondingly, the step S104 specifically includes the following steps:
When receiving a trigger instruction of a clearing task that is at intervals of the clearing cycle duration and corresponds to a preset service type, the first clearing system determines each fishing time period corresponding to the clearing time period of the clearing task, and acquires, from the stored pre-summary result data, the pre-summary result data corresponding to the preset service type and to each determined fishing time period. That is to say, for different services, fishing tasks are set separately for each service; the fishing tasks of different services are independent of each other and do not affect each other, so that the data to be cleared corresponding to each service can be fished out and pre-summed separately.
In this embodiment of the application, before the step S102, the method further includes the following steps:
The first clearing system determines, according to the number (e.g., about one million) of pieces of data to be cleared that correspond to a preset service type (such as the X service of bank B) and occur within the clearing cycle duration (e.g., 1 hour) between the first clearing system (e.g., the clearing system of the third-party payment platform) and the second clearing system (e.g., the clearing system of bank B), a preset duration that corresponds to the preset service type and is inversely related to that number. That is to say, the larger the data volume of a certain preset service type within a certain time period, the shorter the preset duration of the fishing task can be set, so as to relieve the pressure on the clearing system; in this way, the amount of data fished out by each fishing task falls within a more reasonable range, and the consumption of system computing resources by the summation operation after each fishing task is controlled.
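One simple way to realize the inverse relation is a stepped rule over the record count; the concrete thresholds and intervals below are illustrative assumptions, not values prescribed by the embodiment.

```python
def preset_duration_minutes(record_count: int) -> int:
    """Choose a fishing-task interval inversely related to the volume of data
    to be cleared within one clearing cycle (thresholds are illustrative)."""
    if record_count >= 1_000_000:   # very high volume: fish every 10 minutes
        return 10
    if record_count >= 100_000:     # medium volume: fish every 30 minutes
        return 30
    return 60                       # low volume: one fishing task per hour
```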
In this embodiment of the application, before the step S102, the method further includes the following steps:
The first clearing system determines a preset duration inversely related to the number of pieces of data to be cleared occurring within the clearing time period between the first clearing system and the second clearing system. Similarly, if no division by service type is performed, the order of magnitude of orders between the two clearing systems within the clearing time period can be counted to adjust the interval of the fishing tasks. It is worth mentioning that this order of magnitude may be based on real-time data or historical data.
Corresponding to the above method flow, the embodiment of the present application further provides a data clearing system. The functions and effects of the units in the system are similar to those of the steps in the method; for relevant points, reference may be made to the description of the method embodiments. The system is described below with reference to fig. 3.
As shown in fig. 3, in the embodiment of the present application, the data clearing system includes:
A recording unit 101, configured to record data to be cleared with at least one second clearing system.
A fishing unit 102, configured to, when receiving a trigger instruction of a fishing task at intervals of a preset duration, fish out, from the recorded data to be cleared between the first clearing system and the second clearing system, the data to be cleared occurring within the fishing time period corresponding to the current fishing task; the preset duration is less than the duration of the clearing cycle between the first clearing system and the second clearing system.
A pre-summary unit 103, configured to perform first predetermined processing on the fished-out data to be cleared, and store the pre-summary result data obtained through the first predetermined processing in correspondence with the fishing time period.
A pre-summary result acquiring unit 104, configured to, when receiving a trigger instruction of a clearing task at intervals of the clearing cycle duration, determine each fishing time period corresponding to the clearing time period of the clearing task, and acquire, from the stored pre-summary result data, the pre-summary result data corresponding to each determined fishing time period.
A clearing summary unit 105, configured to perform second predetermined processing on the acquired pre-summary result data corresponding to each fishing time period, to obtain the summary result data corresponding to the clearing task.
In the system, by setting fishing tasks at intervals of a preset duration, where the preset duration is less than the duration of the clearing cycle between the first clearing system and the second clearing system, the data to be cleared between the first clearing system and the second clearing system can be fished out in batches in advance through the fishing tasks before the clearing task, whose interval is the clearing cycle duration, arrives, and the pre-summary result data can be obtained and stored by performing the first predetermined processing (including classification, sorting, and summation). Therefore, when the clearing task arrives, only the pre-summary result data needs to be acquired for the second predetermined processing (including classification, sorting, and summation); the amount of data that must be fished out when the clearing task arrives is reduced, and the computing resources consumed in the clearing process are relieved. Moreover, when the first clearing system has clearing requirements with at least one second clearing system, the situation in which the first clearing system consumes a large amount of computing resources because clearing work with multiple second clearing systems must be completed at the same time point can be avoided, thereby relieving the pressure on the first clearing system.
In this embodiment of the application, the fishing unit 102 is specifically configured to:
When receiving a trigger instruction of a fishing task at intervals of a preset duration, fish out, from the recorded data to be cleared between the first clearing system and the second clearing system and according to preset service types, the data to be cleared that corresponds to each preset service type and occurred within the fishing time period corresponding to the current fishing task;
then, the pre-summary result obtaining unit 104 is specifically configured to:
When receiving a trigger instruction of a clearing task that is at intervals of the clearing cycle duration and corresponds to a preset service type, determine each fishing time period corresponding to the clearing time period of the clearing task, and acquire, from the stored pre-summary result data, the pre-summary result data corresponding to the preset service type and to each determined fishing time period. That is, for different services, different lengths of the fishing time period and different intervals of the fishing tasks may be set.
In an embodiment of the present application, the system further includes:
A preset duration determining unit, configured to determine, according to the number of pieces of data to be cleared corresponding to a preset service type within the clearing cycle duration between the first clearing system and the second clearing system, a preset duration that corresponds to the preset service type and is inversely related to that number.
In an embodiment of the present application, the system further includes:
A preset duration determining unit, configured to determine, according to the number of pieces of data to be cleared occurring within the clearing cycle duration between the first clearing system and the second clearing system, a preset duration inversely related to that number.
In an embodiment of the present application, the system further includes:
A preset duration determining unit, configured to divide the clearing cycle duration between the first clearing system and the second clearing system by a preset integer, and determine the obtained quotient as the preset duration.
In an embodiment of the present application, the system further includes:
A sending unit, configured to generate and send a clearing instruction carrying the summary result data. The generated clearing instruction may be sent to a second clearing system (e.g., the clearing system of a banking institution), so that the second clearing system completes the clearing of the fund data. Of course, in other embodiments, the generated clearing instruction may also be sent to other types of clearing systems (e.g., a bank clearing system, to effect clearing of funds between large banking institutions). In addition, the clearing of the fund data may also be completed manually; in that case no clearing instruction needs to be generated, and the summary result data obtained by the clearing system is provided to clearing personnel for clearing. The present application does not limit the manner of clearing the above fund data. It is worth mentioning that the clearing instruction may also carry the data to be cleared that occurred within the clearing time period, so that the clearing system performing the clearing of the fund data can check the data to verify the accuracy of the summary result data.
In the embodiment of the present application, for data clearing processes with personalized clearing cycles (i.e., different clearing cycles required by different clearing systems), fishing tasks are set at intervals of a preset duration (less than the clearing cycle duration required by each clearing system). First predetermined processing is performed on the data obtained by each fishing task, and the pre-summary result data is obtained and stored. When the clearing time point required by each clearing system arrives, only the pre-summary result data (whose volume is generally far smaller than that of the original data to be cleared) needs to be acquired, and second predetermined processing is performed on the acquired pre-summary result data to obtain the summary result data corresponding to the clearing cycle and complete the clearing. In this way, clearing tasks that would otherwise be uncontrollable and place a large stability impact on the clearing system (such as the clearing system of a third-party payment platform) become controllable, and the execution efficiency of the clearing tasks is improved. It is worth mentioning that the present solution can also be used for data clearing work between one banking institution and another.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.