CN114298830A - Batch service processing method and device and batch service processing platform - Google Patents

Batch service processing method and device and batch service processing platform

Info

Publication number
CN114298830A
Authority
CN
China
Prior art keywords
batch
files
state
processed
service system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111633593.1A
Other languages
Chinese (zh)
Inventor
邓萍
孙景雷
岳洪芳
刘雨
高磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Postal Savings Bank of China Ltd
Original Assignee
Postal Savings Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Postal Savings Bank of China Ltd filed Critical Postal Savings Bank of China Ltd
Priority to CN202111633593.1A priority Critical patent/CN114298830A/en
Publication of CN114298830A publication Critical patent/CN114298830A/en
Pending legal-status Critical Current


Abstract

The application provides a batch service processing method and device and a batch service processing platform. The method comprises the following steps: acquiring files of batch services to be processed during data migration between the old service system and the new service system to obtain batch files; splitting the batch files, according to migration state, into a first batch of files representing an un-migrated state and a second batch of files representing a migrated state, wherein the migration state at least comprises the un-migrated state and the migrated state, the un-migrated state represents that the client data has not been migrated to the new service system, and the migrated state represents that the client data has been migrated to the new service system; sending the first batch of files to the old service system and the second batch of files to the new service system; and receiving and merging the result files produced by batch processing of the first and second batches of files to obtain the processing result file. The method realizes rapid and imperceptible batch service processing while the new and old service systems coexist, and improves user experience.

Description

Batch service processing method and device and batch service processing platform
Technical Field
The present application relates to the field of computers, and in particular, to a method and an apparatus for processing a batch service, a computer-readable storage medium, a processor, and a batch service processing platform.
Background
In the traditional switch-over between old and new bank systems, the existing bank core system must usually be taken out of service; its data is then synchronized to the new bank core system and verified for correctness and validity, and only afterwards is the new bank core system brought into service, completing the upgrade of the bank core system.
The information disclosed in this background section is only for enhancement of understanding of the background of the technology described herein and may therefore include information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
The present application mainly aims to provide a method and an apparatus for processing batch services, a computer-readable storage medium, a processor, and a batch service processing platform, so as to solve the problem in the prior art that batch services cannot be processed during service system migration.
According to an aspect of the embodiments of the present invention, a method for processing batch services is provided, including: during data migration between an old service system and a new service system, obtaining files of batch services to be processed to obtain batch files, wherein the batch services to be processed comprise a plurality of services to be processed; splitting the batch files into a first batch of files and a second batch of files according to a migration state, wherein the migration state at least comprises an un-migrated state and a migrated state, the client data corresponding to the services to be processed of the first batch of files is in the un-migrated state, the client data corresponding to the services to be processed of the second batch of files is in the migrated state, the un-migrated state represents that the client data has not been migrated from the old service system to the new service system, and the migrated state represents that the client data has been migrated from the old service system to the new service system; sending the first batch of files to the old service system and the second batch of files to the new service system; and receiving the result files produced by batch processing of the first batch of files and the second batch of files and merging them to obtain the processing result file.
Optionally, receiving the result files after batch processing of the first batch of files and the second batch of files includes: upon receiving a result file of a service to be processed, generating a result file message, wherein result file messages correspond one-to-one to result files; updating the corresponding flow record according to the result file message, so that the processing state information of the flow record is updated from a processing state to a processed state, wherein the processing state represents that the service to be processed is still being processed, the processed state represents that the service to be processed has finished processing, and the flow record is generated by the old service system or the new service system for the service to be processed; querying the flow records of the services to be processed to generate a flow extraction file, wherein the flow extraction file at least comprises the processing state information of the flow records; and determining that the result files of all the services to be processed have been received when all the processing state information in the flow extraction file is in the processed state.
Optionally, receiving the result files after batch processing of the first batch of files and the second batch of files further includes: generating alarm information when, after the flow records of the services to be processed have been queried a predetermined number of times, processing state information in the processing state still exists in the flow extraction file.
Optionally, the migration state further includes an absent state and a locked state, where the absent state represents that no migration state can be queried for the client data of the service to be processed, and the locked state represents that the client data of the service to be processed is being migrated. Splitting the batch files into a first batch of files and a second batch of files according to the migration state further includes: combining first service files in the batch files into the first batch of files and combining second service files in the batch files into the second batch of files, wherein the first service files are files of services to be processed whose client data is in the absent state or the un-migrated state, and the second service files are files of services to be processed whose client data is in the locked state or the migrated state.
Optionally, before splitting the batch files into a first batch of files and a second batch of files according to the migration state, the method further includes: querying the migration state of target client data, wherein the target client data is the client data corresponding to the services to be processed; and marking the target client data that is in the un-migrated state, so that the marked target client data is not migrated before the processing of the pending batch services is completed.
Optionally, before sending the first batch of files to the old service system and sending the second batch of files to the new service system, the method further includes: converting the format of the first batch of files into a first message format and the format of the second batch of files into a second message format, wherein the first message format is a message format supported by the old service system and the second message format is a message format supported by the new service system.
According to another aspect of the embodiments of the present invention, there is also provided a device for processing batch services, including: an acquisition unit configured to acquire files of batch services to be processed during data migration between an old service system and a new service system to obtain batch files, wherein the batch services to be processed comprise a plurality of services to be processed; a first processing unit configured to split the batch files into a first batch of files and a second batch of files according to a migration state, wherein the migration state at least comprises an un-migrated state and a migrated state, the client data corresponding to the services to be processed of the first batch of files is in the un-migrated state, the client data corresponding to the services to be processed of the second batch of files is in the migrated state, the un-migrated state represents that the client data has not been migrated from the old service system to the new service system, and the migrated state represents that the client data has been migrated from the old service system to the new service system; a sending unit configured to send the first batch of files to the old service system and the second batch of files to the new service system; and a second processing unit configured to receive the result files after batch processing of the first batch of files and the second batch of files and merge them to obtain the processing result file.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium comprising a stored program, wherein the program, when run, performs any one of the methods described above.
According to still another aspect of the embodiments of the present invention, there is further provided a processor configured to run a program, wherein the program, when run, performs any one of the methods described above.
According to another aspect of the embodiments of the present invention, there is also provided a batch service processing platform, including an old service system, a new service system, and a batch service processing apparatus, where the batch service processing apparatus is configured to execute any one of the methods.
In the embodiment of the present invention, in the method for processing batch services, first, during data migration between an old service system and a new service system, files of the batch services to be processed are obtained to obtain batch files, wherein the batch services to be processed comprise a plurality of services to be processed; then, the batch files are split into a first batch of files and a second batch of files according to a migration state, wherein the migration state at least comprises an un-migrated state and a migrated state, the client data corresponding to the services to be processed of the first batch of files is in the un-migrated state, the client data corresponding to the services to be processed of the second batch of files is in the migrated state, the un-migrated state represents that the client data has not been migrated from the old service system to the new service system, and the migrated state represents that the client data has been migrated from the old service system to the new service system; then, the first batch of files is sent to the old service system and the second batch of files is sent to the new service system; and finally, the result files produced by batch processing of the first batch of files and the second batch of files are received and merged to obtain the processing result file. In this way, the batch files are split into the first and second batches of files according to the migration state, the services to be processed corresponding to un-migrated client data are processed in the old service system, the services to be processed corresponding to migrated client data are processed in the new service system, and the result files are finally merged into the processing result file, so that batch services are processed quickly and imperceptibly while the new and old service systems coexist, which improves user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 shows a flow diagram of a method for processing batch services according to an embodiment of the application;
fig. 2 is a schematic structural diagram of a processing apparatus for batch services according to an embodiment of the present application;
FIG. 3 shows a flow diagram of batch file processing according to an embodiment of the present application;
FIG. 4 shows a flow diagram of bulk file splitting according to an embodiment of the present application;
FIG. 5 shows a flow diagram of bulk file consolidation according to an embodiment of the present application;
FIG. 6 illustrates a system diagram of a method for processing batch services according to an embodiment of the present application;
FIG. 7 shows a flow diagram of flow extraction according to an embodiment of the present application;
FIG. 8 shows a flow diagram of flow extraction according to an embodiment of the present application;
FIG. 9 illustrates a flow diagram for determining a migration status according to an embodiment of the present application;
fig. 10 is a flowchart illustrating a method for processing a batch service according to an embodiment of the present application.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As mentioned in the background, batch services cannot be processed during service system migration in the prior art. To solve this problem, exemplary embodiments of the present application provide a batch service processing method and apparatus, a computer-readable storage medium, a processor, and a batch service processing platform.
According to an embodiment of the application, a method for processing a batch service is provided.
Fig. 1 is a flowchart of a method for processing a batch service according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S101, in the process of data migration of an old service system and a new service system, obtaining files of batch services to be processed to obtain batch files, wherein the batch services to be processed comprise a plurality of services to be processed;
step S102, splitting the batch files into a first batch of files and a second batch of files according to a migration state, wherein the migration state at least comprises an un-migrated state and a migrated state, the client data corresponding to the services to be processed of the first batch of files is in the un-migrated state, the client data corresponding to the services to be processed of the second batch of files is in the migrated state, the un-migrated state represents that the client data has not been migrated from the old service system to the new service system, and the migrated state represents that the client data has been migrated from the old service system to the new service system;
step S103, sending the first batch of files to the old service system and sending the second batch of files to the new service system;
step S104, receiving the result files after batch processing of the first batch of files and the second batch of files, and merging them to obtain the processing result file.
In this method for processing batch services, first, files of the batch services to be processed are obtained during data migration between the old service system and the new service system to obtain batch files, wherein the batch services to be processed comprise a plurality of services to be processed; then, the batch files are split into a first batch of files and a second batch of files according to a migration state, wherein the migration state at least comprises an un-migrated state and a migrated state, the client data corresponding to the services to be processed of the first batch of files is in the un-migrated state, the client data corresponding to the services to be processed of the second batch of files is in the migrated state, the un-migrated state represents that the client data has not been migrated from the old service system to the new service system, and the migrated state represents that the client data has been migrated from the old service system to the new service system; then, the first batch of files is sent to the old service system and the second batch of files is sent to the new service system; and finally, the result files produced by batch processing of the first batch of files and the second batch of files are received and merged to obtain the processing result file. In this way, the services to be processed corresponding to un-migrated client data are processed in the old service system, the services to be processed corresponding to migrated client data are processed in the new service system, and the result files are finally merged into the processing result file, so that batch services are processed quickly and imperceptibly while the new and old service systems coexist, which improves user experience.
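As an illustration only, steps S101 to S104 can be sketched as follows; every name here (process_batch, the state labels, the tagged stub results) is invented for the sketch, and sending files to the two systems is stubbed out, since the patent does not disclose code.

```python
UNMIGRATED, MIGRATED = "unmigrated", "migrated"

def process_batch(batch_files, migration_state):
    """Split batch records by client migration state, route them, merge results.

    batch_files: list of (client_id, payload) records of services to be processed.
    migration_state: dict mapping client_id -> UNMIGRATED or MIGRATED.
    """
    # Step S102: split into the first batch (old system) and second batch (new system)
    first_batch = [r for r in batch_files if migration_state[r[0]] == UNMIGRATED]
    second_batch = [r for r in batch_files if migration_state[r[0]] == MIGRATED]

    # Step S103: send each sub-batch to the matching system (stubbed with tags)
    old_results = [(cid, f"old:{p}") for cid, p in first_batch]
    new_results = [(cid, f"new:{p}") for cid, p in second_batch]

    # Step S104: merge the two result files into one processing result file
    return old_results + new_results
```

A caller would supply the real migration states obtained from the client routing location system in place of the dict used here.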
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
The above-mentioned migration state needs to be queried through the client routing location system. The new service system uses a distributed system built on an x86 PC server architecture, while the old service system adopts a minicomputer cluster architecture. The batch files are stored in a peripheral system, which uses a client-server architecture. The program implementing the method can be stored in the switching transition system, in which all application and database servers adopt the Linux operating system. The switching transition system shields the architectural differences between the new and old service systems, so that it supports the circulation of batch file processing among heterogeneous systems.
Fig. 3 is a schematic diagram of the batch file processing flow. As shown in fig. 3, the peripheral system sends batch processing files in file form through an Enterprise Data Bus (EDB); the online switching transition system calls the file download interface, downloads the EDB file, registers a flow record, and performs file splitting, determining the client migration state through client routing location during the split; it then calls the batch gateway to upload the split files and returns the file processing result, and the split files are sent to the new and old service systems respectively; after the new and old service systems finish processing, they return the response processing results in file form.
In a specific embodiment, fig. 4 shows a flowchart of batch file splitting. As shown in fig. 4, a file is downloaded to the local machine according to the DFS (distributed file system) path sent by the batch gateway; the corresponding task is invoked to execute the file splitting, during which the migration state of each client is queried through client routing location; the file is split into two files destined for the new and old service systems, the file header information is recalculated, the split files are uploaded to the DFS, and the execution result is returned to the batch gateway.
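A minimal sketch of this splitting step, assuming a batch file whose body lines begin with a client identifier and whose header is a single record-count line (the real header layout is not disclosed in the patent):

```python
def split_batch_file(lines, lookup_state):
    """Split body lines of a batch file by client migration state and
    recompute each sub-file's header (here: a record-count line)."""
    old_body, new_body = [], []
    for line in lines:
        client_id = line.split(",", 1)[0]
        # Route migrated clients to the new system's file, others to the old
        (new_body if lookup_state(client_id) == "migrated" else old_body).append(line)
    # Recalculate file header information after the split
    old_file = [f"COUNT={len(old_body)}"] + old_body
    new_file = [f"COUNT={len(new_body)}"] + new_body
    return old_file, new_file
```

Here lookup_state stands in for the client routing location query described above.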
In another specific embodiment, fig. 5 is a schematic flowchart of batch file merging. As shown in fig. 5, the two result files returned by the new and old service systems are downloaded to the local machine according to the DFS paths sent by the batch gateway; the corresponding task is invoked to execute the file merging, the file header and file tail information are recalculated, the merged file is uploaded to the DFS, and the execution result is returned to the batch gateway.
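The merge step can be sketched under the same assumed layout, with a single count header line and a count trailer line standing in for the recalculated file header and file tail information (both layouts are assumptions for illustration):

```python
def merge_result_files(old_file, new_file):
    """Merge two result files, recomputing the file header and trailer."""
    # Strip each sub-file's own header (first line) and trailer (last line)
    old_body = old_file[1:-1]
    new_body = new_file[1:-1]
    body = old_body + new_body
    header = f"COUNT={len(body)}"   # recalculated file header information
    trailer = f"END={len(body)}"    # recalculated file tail information
    return [header] + body + [trailer]
```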
In practical applications, as shown in fig. 6, there are several different service processing scenarios between the systems. Scenario one: the peripheral system sends batch files to the switching transition system, which distributes them to the old and new service systems according to the client state. Scenario two: the peripheral system sends batch files to the switching transition system, which distributes single services to the new and old service systems. Scenario three: the peripheral system sends batch files to the switching transition system, which sends single services to the old service system and batch files to the new service system. Scenario four: the peripheral system sends batch files to the switching transition system, which sends batch files to the old service system and single services to the new service system. When a service is a single service, no batch application processing is needed, and the new and old service systems can process it by themselves.
In an embodiment of the application, receiving the result files after batch processing of the first batch of files and the second batch of files includes: upon receiving a result file of a service to be processed, generating a result file message, wherein result file messages correspond one-to-one to result files; updating the corresponding flow record according to the result file message, so that the processing state information of the flow record is updated from a processing state to a processed state, wherein the processing state represents that the service to be processed is still being processed, the processed state represents that the service to be processed has finished processing, and the flow record is generated by the old service system or the new service system for the service to be processed; querying the flow records of the services to be processed to generate a flow extraction file, wherein the flow extraction file at least comprises the processing state information of the flow records; and determining that the result files of all the services to be processed have been received when all the processing state information in the flow extraction file is in the processed state. In this embodiment, when a result file is received, a result file message is generated to update the corresponding flow record, a flow extraction file is generated from the flow records, and the reception status of the result files is then determined by querying the processing state information in the flow extraction file.
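A sketch of this bookkeeping, with invented names (FlowLedger and its methods) and an in-memory dict standing in for the flow-record store:

```python
PROCESSING, PROCESSED = "processing", "processed"

class FlowLedger:
    def __init__(self, service_ids):
        # One flow record per service to be processed, initially "processing"
        self.records = {sid: PROCESSING for sid in service_ids}

    def on_result_file(self, service_id):
        """Result-file message received: update the matching flow record."""
        self.records[service_id] = PROCESSED

    def extract(self):
        """Query flow records to build a flow extraction file (state snapshot)."""
        return dict(self.records)

    def all_received(self):
        """All result files are received once every state is 'processed'."""
        return all(s == PROCESSED for s in self.extract().values())
```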
In practical applications, as shown in fig. 7, the message notification interface of the old service system is used to query messages and obtain result file messages; whether a flow record needs to be updated is then determined according to each result file message: if a new flow record has been generated, the update is performed; if not, the message notification confirmation interface of the old service system is called. It is then determined whether all states are processed: if all result files have been received, the EDB file reception flow is invoked; otherwise the query is repeated after a predetermined time interval.
In a specific embodiment of the present application, fig. 8 shows a flow diagram of flow extraction. As shown in fig. 8, after a flow extraction request from the batch gateway is received, a response is returned immediately and a separate thread is started to execute the flow extraction task; the online flow table is queried according to the time information sent by the gateway, the query results are written into a local file (paged queries with multiple writes), the file is uploaded to the DFS after writing is completed, and the batch gateway is notified of the processing result after the task succeeds or fails.
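The paged extraction loop can be sketched as follows; the page size and the injected query/write callables are illustrative stand-ins for the online flow table query and the local-file writer:

```python
def extract_flow_records(query_page, write_page, page_size=2):
    """Paged query over the online flow table, writing each page to the
    local extraction file before the upload step."""
    page_no = 0
    total = 0
    while True:
        rows = query_page(page_no, page_size)
        if not rows:
            break
        write_page(rows)          # multiple writes, one per page of results
        total += len(rows)
        page_no += 1
    return total
```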
In another embodiment of the present application, receiving the result files after batch processing of the first batch of files and the second batch of files further includes: generating alarm information when, after the flow records of the services to be processed have been queried a predetermined number of times, processing state information in the processing state still exists in the flow extraction file. In this embodiment, the alarm information is generated only after the number of queries reaches the predetermined number, which avoids misjudgments caused by flow records that have not yet been updated.
Specifically, the predetermined number of times is set according to actual conditions, and the alarm information may be a text alarm, a voice alarm, or a notification sent to a user terminal to prompt the user in time.
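A sketch of this bounded retry with alarm; the polling and alarm callables are injected so the example stays self-contained, and their names are invented for the sketch:

```python
def poll_until_done(check_all_processed, max_queries, raise_alarm):
    """Query the flow extraction file up to max_queries times; if a
    'processing' state still remains afterwards, generate alarm information."""
    for _ in range(max_queries):
        if check_all_processed():
            return True
    raise_alarm("flow records still in processing state after retries")
    return False
```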
In another embodiment of the application, as shown in fig. 9, the migration state further includes an absent state and a locked state, where the absent state represents that no migration state can be queried for the client data of the service to be processed, and the locked state represents that the client data of the service to be processed is being migrated. Splitting the batch files into a first batch of files and a second batch of files according to the migration state further includes: combining first service files in the batch files into the first batch of files and combining second service files into the second batch of files, wherein the first service files are files of services to be processed whose client data is in the absent state or the un-migrated state, and the second service files are files of services to be processed whose client data is in the locked state or the migrated state. In this embodiment, when the migration state is the absent state, the first service file needs to be sent to the old service system first, and the corresponding data is later migrated from the old service system to the new service system; when the migration state is the locked state, the data is being migrated to the new service system, so the service file does not need to be sent to the old service system but is sent directly to the new service system, supplementing the service files of the new service system and preventing some service files from being omitted, thereby further improving the efficiency of batch service processing.
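The four-state routing rule can be written as a small lookup table; only the routing itself is taken from the text, while the state labels and function names are illustrative:

```python
ROUTE = {
    "absent": "old",       # no migration state found: send to the old system first
    "unmigrated": "old",   # client data still resides on the old service system
    "locked": "new",       # data is mid-migration: supplement the new system's files
    "migrated": "new",     # client data already resides on the new service system
}

def classify(services):
    """Group (service, state) pairs into the first (old) and second (new) batch."""
    first_batch = [s for s, state in services if ROUTE[state] == "old"]
    second_batch = [s for s, state in services if ROUTE[state] == "new"]
    return first_batch, second_batch
```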
In a specific embodiment of the present application, when the migration state is the locked state, the migration state is recorded as 0 (to be retried) in a temporary file; after the old service system finishes processing, the migration state is updated to 1 (retriable) according to the result file; the processing state information in the flow extraction file is then queried, and when all states are migrated, the second batch of files is regenerated. In addition, when the old service system processes batch transactions, if it determines that an account is being locked or has been migrated, it must return specific state information in the response. After the switching transition system or an associated system receives the batch response, it reinitiates batch processing for the locked and migrated accounts according to the response code.
In another specific embodiment of the present application, as shown in fig. 9, when a batch status query is received, it is assumed that batch transactions will subsequently be performed on the queried accounts; the corresponding clients are marked as "in batch processing", and no data migration is performed for clients in batch processing on the same day, further saving migration time.
In order to further improve the efficiency of batch processing, in another embodiment of the present application, before splitting the batch files into the first batch of files and the second batch of files according to the migration state, the method further includes: querying the migration state of target client data, wherein the target client data is the client data corresponding to the services to be processed; and marking the target client data that is in the un-migrated state, so that the marked target client data is not migrated before the processing of the pending batch services is completed. In this embodiment, when the client routing location system receives the batch status query, it is assumed that batch transactions will subsequently be performed on the queried accounts; the corresponding client data is marked as "in batch processing", and no migration is performed for clients in batch processing on the current day.
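A sketch of this marking step, with an in-memory dict standing in for the client routing location store and invented function names:

```python
def mark_for_batch(client_states, target_clients):
    """Mark un-migrated target clients so the daily migration job skips
    them until the pending batch finishes."""
    marked = set()
    for cid in target_clients:
        if client_states.get(cid) == "unmigrated":
            marked.add(cid)   # flagged "in batch processing"; no migration today
    return marked

def migration_candidates(client_states, marked):
    """The daily migration job excludes clients marked as in batch processing."""
    return [c for c, s in client_states.items() if s == "unmigrated" and c not in marked]
```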
It should be added that, when querying the client routing location, if the query response is "system error" or another unknown error, the query is reinitiated; if the error persists after a predetermined number of retries, the exception information is fed back through monitoring.
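The retry-then-alert behaviour can be sketched as below. The `report_to_monitoring` hook and the `"system error"` sentinel are assumptions for illustration; the real monitoring channel is not specified in the text.

```python
# Hypothetical sketch of the routing-location query retry described above.
ALERTS = []

def report_to_monitoring(message):
    # Stand-in for the real monitoring feedback channel (an assumption).
    ALERTS.append(message)

def query_routing_with_retry(query, max_attempts=3):
    """Reinitiate the query on 'system error'; after max_attempts failures,
    feed the exception information back through monitoring."""
    for _ in range(max_attempts):
        response = query()
        if response != "system error":
            return response
    report_to_monitoring("routing query still failing after %d attempts" % max_attempts)
    return None
```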
In yet another embodiment of the present application, before sending the first batch of files to the old service system and sending the second batch of files to the new service system, the method further includes: and converting the formats of the first batch of files into a first message format, and converting the formats of the second batch of files into a second message format, wherein the first message format is a message format supported by the old service system, and the second message format is a message format supported by the new service system. The message formats of the new service system and the old service system are different, so in this embodiment, the batch files need to be converted into the corresponding message format and then sent to the corresponding system, so that the new service system and the old service system can identify the batch files.
The first message format is the TUXEDO FML format, in which every communication message is composed of a series of FML fields; the second message format is HTTP JSON, so the message format of the old service system differs from that of the new service system. Of course, in practical applications, the first and second message formats are not limited to these two and may be other formats, which those skilled in the art may select according to the actual situation.
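The two conversions can be sketched as follows. Note this is only a rough illustration: an FML buffer is approximated here as a list of field/value pairs, and the real TUXEDO FML API (e.g. the `Fadd32`/`Fget32` C functions) is not reproduced.

```python
import json

def to_old_system_message(record):
    """Approximate a TUXEDO FML buffer as an ordered list of (FIELD, value)
    pairs -- an illustrative stand-in, not the real FML API."""
    return [(key.upper(), str(value)) for key, value in record.items()]

def to_new_system_message(record):
    """The new service system accepts an HTTP body carrying JSON."""
    return json.dumps(record, ensure_ascii=False)
```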
In a specific embodiment, the converted message is sent to an adapter, and the adapter parses the message and forwards it to the corresponding system.
The embodiment of the present application further provides a device for processing a batch service, and it should be noted that the device for processing a batch service according to the embodiment of the present application may be used to execute the method for processing a batch service according to the embodiment of the present application. The following describes a processing apparatus for batch services provided in an embodiment of the present application.
Fig. 2 is a schematic diagram of a processing apparatus for batch services according to an embodiment of the present application. As shown in fig. 2, the apparatus includes:
an obtaining unit 10, configured to obtain files of a batch service to be processed during data migration between an old service system and a new service system, so as to obtain a batch of files, where the batch service to be processed includes multiple services to be processed;
a first processing unit 20, configured to split the batch files into a first batch of files and a second batch of files according to a migration state, where the migration state at least includes an un-migrated state and a migrated state, where client data corresponding to a service to be processed of the first batch of files is in the un-migrated state, the client data corresponding to the service to be processed of the second batch of files is in the migrated state, the un-migrated state is used to indicate that the client data is not migrated from the old service system to the new service system, and the migrated state is used to indicate that the client data is migrated from the old service system to the new service system;
a sending unit 30, configured to send the first batch of files to the old service system, and send the second batch of files to the new service system;
and a second processing unit 40, configured to receive the result files after batch processing of the first batch of files and the second batch of files, and merge the result files to obtain a processing result file.
The device comprises an acquisition unit, a first processing unit, a sending unit and a second processing unit, wherein the acquisition unit is used for acquiring files of batch services to be processed in the data migration process of an old service system and a new service system to obtain the batch files, and the batch services to be processed comprise a plurality of services to be processed; the first processing unit is configured to split the batch files into a first batch of files and a second batch of files according to a migration state, where the migration state at least includes an un-migrated state and a migrated state, where client data corresponding to a service to be processed of the first batch of files is in the un-migrated state, the client data corresponding to the service to be processed of the second batch of files is in the migrated state, the un-migrated state is used to indicate that the client data is not migrated from the old service system to the new service system, and the migrated state is used to indicate that the client data is migrated from the old service system to the new service system; the sending unit is used for sending the first batch of files to the old service system and sending the second batch of files to the new service system; the second processing unit is used for receiving the result files after the first batch of files and the second batch of files are processed in batches and combining the result files to obtain the processing result files. The device splits the batch files into a first batch of files and a second batch of files according to the migration state, so that the to-be-processed service corresponding to the client data which is not migrated is processed in the old service system, the to-be-processed service corresponding to the migrated client data is processed in the new service system, and finally the processed result files are combined to obtain the processed result file.
The above migration state needs to be queried through the client routing location system. The new service system uses a distributed architecture of x86 PC servers, while the old service system adopts a minicomputer cluster architecture. The batch files are stored in a peripheral system, which uses a client-server architecture. The program of the method may be stored in the switching transition system, and all application and database servers in the switching transition system run the Linux operating system. The switching transition system shields the architectural differences between the new and old service systems, so that it supports the circulation of batch file processing among heterogeneous systems.
Fig. 3 is a schematic diagram of the batch file processing flow. As shown in fig. 3, the peripheral system sends batch processing files in file form through an Enterprise Data Bus (EDB); the online switching transition system calls the file download interface, downloads the EDB file, registers a flow record, and performs file splitting, determining the client migration state through the client routing location during splitting; it then calls the batch gateway to upload the split files, returns the file processing results, and sends the split files to the new and old service systems respectively; after the new and old service systems finish processing, they return the response processing results in file form.
In a specific embodiment, fig. 4 shows a flowchart of batch file splitting. As shown in fig. 4, the file is downloaded to the local machine according to the DFS file path sent by the batch gateway; the corresponding task is called to execute the file splitting; the migration state of each client is queried through the client routing location; the file is split into two files to be sent to the new and old service systems; the file header information is recalculated; after splitting is completed, the files are uploaded to DFS and the execution result is returned to the batch gateway.
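The splitting step can be sketched as follows. The record layout, the `"migrated"` state string, and the header containing only a record count are simplifying assumptions; a real batch file header would carry more fields.

```python
def split_batch_file(records, migration_state_of):
    """Split downloaded batch records into the first batch (old system) and
    the second batch (new system), recomputing each file's header count."""
    first, second = [], []
    for rec in records:
        target = second if migration_state_of(rec["client_id"]) == "migrated" else first
        target.append(rec)
    # Recalculate the header information for each split file.
    return ({"record_count": len(first)}, first), ({"record_count": len(second)}, second)
```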
In another specific embodiment, fig. 5 is a schematic flowchart of batch file merging. As shown in fig. 5, the two result files returned by the new and old service systems are downloaded to the local machine according to the DFS file path sent by the batch gateway; the corresponding task is called to execute the file merging; the file header and file trailer information is recalculated; after merging, the file is uploaded to DFS and the execution result is returned to the batch gateway.
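The merging step, with its header/trailer recalculation, can be sketched as below. Treating the trailer as a total amount is an illustrative assumption; the text only says header and trailer information are recomputed.

```python
def merge_result_files(old_results, new_results):
    """Merge the two result files and recompute the header (record count)
    and trailer (here assumed to be a total amount) of the combined file."""
    merged = list(old_results) + list(new_results)
    header = {"record_count": len(merged)}
    trailer = {"total_amount": sum(r.get("amount", 0) for r in merged)}
    return header, merged, trailer
```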
In practical applications, as shown in fig. 6, there are several different business processing scenarios between the systems. Scenario one: the peripheral system sends batch files to the switching transition system, and the switching transition system distributes the batch files to the old and new service systems according to the client state. Scenario two: the peripheral system sends batch files to the switching transition system, and the switching transition system distributes single services to the new and old service systems. Scenario three: the peripheral system sends batch files to the switching transition system, and the switching transition system sends single services to the old service system and batch files to the new service system. Scenario four: the peripheral system sends batch files to the switching transition system, and the switching transition system sends batch files to the old service system and single services to the new service system. When a service is a single service, batch processing is not needed, and the new or old service system can process it by itself.
In an embodiment of the application, the second processing unit includes a first generating module, an updating module, a querying module, and a determining module. The first generating module is configured to generate a result file message whenever a result file of a service to be processed is received, the result file messages corresponding one to one with the result files; the updating module is configured to update the corresponding flow record according to the result file message, so that the processing state information of the flow record is updated from the processing state to the processed state, where the processing state represents that the service to be processed is in processing, the processed state represents that the service to be processed has been processed, and the flow record is generated by the old service system or the new service system for the service to be processed; the querying module is configured to query the flow records of the services to be processed to generate a flow extraction file, which contains at least the processing state information of the flow records; the determining module is configured to determine that the result files of all the services to be processed have been received when all the processing state information in the flow extraction file is in the processed state. In this embodiment, when a result file is received, a result file message is generated to update the flow record, a flow extraction file is generated from the flow records, and the receipt of the result files is then determined by querying the processing state information in the flow extraction file.
In practical applications, as shown in fig. 7, the message notification interface of the old service system is used to query for and obtain result file messages, and it is then determined from each result file message whether the flow record needs to be updated. If a new flow record has been generated, the update is performed; if not, the message notification confirmation interface of the old service system is called. It is then determined whether all states are processed: if all result files have been received, the EDB file reception flow is called; otherwise, the query is repeated after a predetermined time interval.
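The polling loop above can be sketched as follows. The `fetch_states` callable stands in for querying the flow extraction file; the poll count and interval are placeholders for the "predetermined time interval" in the text.

```python
import time

def wait_for_all_results(fetch_states, poll_interval=0.0, max_polls=5):
    """Poll the processing state information until every flow record reads
    'processed', at which point EDB file reception can be triggered."""
    for _ in range(max_polls):
        if all(state == "processed" for state in fetch_states()):
            return True  # all result files received
        time.sleep(poll_interval)
    return False  # still waiting after max_polls attempts
```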
In a specific embodiment of the present application, fig. 8 shows a flowchart of flow extraction. As shown in fig. 8, after the flow extraction request from the batch gateway is received, a response is returned immediately and another thread is started to execute the flow extraction task: the online flow table is queried according to the time information sent by the gateway, the query result is written into a local file (paged query, written in multiple passes), the file is uploaded to DFS after writing is completed, and the batch gateway is notified of the processing result after the task succeeds or fails.
In yet another embodiment of the application, the second processing unit further includes a second generating module, configured to generate alarm information when, after the flow records of the services to be processed have been queried a predetermined number of times, processing state information in the processing state still exists in the flow extraction file. In this embodiment, the alarm information is generated only after the flow records have been queried the predetermined number of times, which avoids misjudgment caused by flow records that have simply not yet been updated.
Specifically, the predetermined number of times is set according to the actual situation, and the alarm information may be a text alarm, a voice alarm, or a message sent to a user terminal to prompt the user in time.
In another embodiment of the present application, as shown in fig. 9, the migration state further includes a not-exist state and a locked state. The not-exist state represents that no migration state can be queried for the client data of the service to be processed, and the locked state represents that the client data of the service to be processed is being migrated. The first processing unit further includes a synthesis module, configured to synthesize the first service files in the batch files into the first batch of files and the second service files in the batch files into the second batch of files, where a first service file is a file of a service to be processed whose client data is in the not-exist state or the un-migrated state, and a second service file is a file of a service to be processed whose client data is in the locked state or the migrated state. In this embodiment, when the migration state is the not-exist state, the first service file needs to be sent to the old service system again and then migrated from the old service system to the new service system; when the migration state is the locked state, the data is being migrated to the new service system, so the service file does not need to be sent to the old service system and is sent directly to the new service system. This supplements the service files of the new service system and prevents some service files from being omitted, further improving the efficiency of batch service processing.
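The four-state routing rule above can be sketched as a small dispatch function. The state names are English stand-ins for the states described in the text.

```python
def route_by_state(state):
    """Map the four migration states to a target system:
    not-exist / un-migrated -> old system (first batch of files),
    locked / migrated       -> new system (second batch of files)."""
    if state in ("not-exist", "un-migrated"):
        return "old"
    if state in ("locked", "migrated"):
        return "new"
    raise ValueError("unknown migration state: %s" % state)
```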
In a specific embodiment of the present application, when the migration state is the locked state, the migration state at this point is recorded as 0-to-be-retried and written into a temporary file. After the old service system finishes processing, the migration state is updated to 1-retriable according to the result file; the processing state information in the flow extraction file is then queried, and when all states are migrated, the second batch of files is regenerated. In addition, when the old service system processes a batch transaction and determines that an account is locked or has already been migrated, it needs to return specific state information in the response. After the switching transition system or an associated system receives the batch response, it reinitiates batch processing for the locked and migrated accounts according to the response code.
In another specific embodiment of the present application, as shown in fig. 9, when a batch status query is received, it is assumed that batch transactions will subsequently be performed on the batch accounts, so the batch clients are marked as "in batch processing"; no migration is performed on clients in batch processing on the same day, which further saves migration time.
In order to further improve the efficiency of batch service processing, in another embodiment of the present application, the apparatus further includes a query unit and a marking unit. The query unit is configured to query the migration state of target client data before the batch files are split into the first batch of files and the second batch of files according to the migration state, where the target client data is the client data corresponding to the services to be processed; the marking unit is configured to mark the target client data that is in the un-migrated state, so that the marked target client data is not migrated before the batch services to be processed have been completed. In this embodiment, when the client routing location receives the batch status query, it is assumed that batch transactions will subsequently be performed on the batch accounts, so the batch client data is marked as "in batch processing" and no migration is performed on such clients on the current day.
It should be added that, when querying the client routing location, if the query response is "system error" or another unknown error, the query is reinitiated; if the error persists after a predetermined number of retries, the exception information is fed back through monitoring.
In yet another embodiment of the present application, the apparatus further includes a conversion unit, where the conversion unit is configured to convert the format of the first batch of files into a first message format and convert the format of the second batch of files into a second message format before sending the first batch of files to the old service system and sending the second batch of files to the new service system, where the first message format is a message format supported by the old service system, and the second message format is a message format supported by the new service system. The message formats of the new service system and the old service system are different, so in this embodiment, the batch files need to be converted into the corresponding message format and then sent to the corresponding system, so that the new service system and the old service system can identify the batch files.
The first message format is the TUXEDO FML format, in which every communication message is composed of a series of FML fields; the second message format is HTTP JSON, so the message format of the old service system differs from that of the new service system. Of course, in practical applications, the first and second message formats are not limited to these two and may be other formats, which those skilled in the art may select according to the actual situation.
In a specific embodiment, the converted message is sent to an adapter, and the adapter parses the message and forwards it to the corresponding system.
The processing device of the batch service comprises a processor and a memory, wherein the acquiring unit, the first processing unit, the sending unit, the second processing unit and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor includes a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided; by adjusting kernel parameters, the problem in the prior art that batch services cannot be processed during service system migration is solved.
The memory may include volatile memory, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM) among computer-readable media, and the memory includes at least one memory chip.
An embodiment of the present invention provides a computer-readable storage medium, on which a program is stored, where the program, when executed by a processor, implements the processing method for the batch service.
The embodiment of the invention provides a processor, which is used for running a program, wherein the processing method of the batch business is executed when the program runs.
The embodiment of the invention also provides a batch service processing platform which comprises an old service system, a new service system and a batch service processing device, wherein the batch service processing device is used for executing any one of the methods.
The platform comprises an old service system, a new service system and a batch service processing device, wherein the batch service processing device is used for executing any one of the methods, and the method divides the batch files into a first batch of files and a second batch of files according to a migration state, so that the service to be processed corresponding to the client data which is not migrated is processed in the old service system, the service to be processed corresponding to the migrated client data is processed in the new service system, and finally the processing result files are combined to obtain the processing result file.
The embodiment of the invention provides equipment, which comprises a processor, a memory and a program which is stored on the memory and can run on the processor, wherein when the processor executes the program, at least the following steps are realized:
step S101, in the process of data migration of an old service system and a new service system, obtaining files of batch services to be processed to obtain batch files, wherein the batch services to be processed comprise a plurality of services to be processed;
step S102, splitting the batch files into a first batch of files and a second batch of files according to a migration state, where the migration state at least includes an un-migrated state and a migrated state, where client data corresponding to a service to be processed of the first batch of files is in the un-migrated state, and the client data corresponding to the service to be processed of the second batch of files is in the migrated state, where the un-migrated state is used to represent that the client data is not migrated from the old service system to the new service system, and the migrated state is used to represent that the client data is migrated from the old service system to the new service system;
step S103, sending the first batch of files to the old service system and sending the second batch of files to the new service system;
and step S104, receiving the result files after the batch processing of the first batch of files and the second batch of files, and merging the result files to obtain the processing result files.
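Steps S101 to S104 can be sketched end to end as a single pipeline. This is a minimal illustration under assumed record and system interfaces, not the claimed implementation: `old_system` and `new_system` stand in for the two batch-processing back ends.

```python
def process_batch(records, migration_state_of, old_system, new_system):
    """End-to-end sketch of steps S101-S104: split the batch by migration
    state, dispatch each part to its system, and merge the result files."""
    # S102: split into first batch (un-migrated) and second batch (migrated).
    first = [r for r in records if migration_state_of(r["client_id"]) != "migrated"]
    second = [r for r in records if migration_state_of(r["client_id"]) == "migrated"]
    # S103 + S104: send each batch to its system and merge the results.
    return old_system(first) + new_system(second)
```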
The device herein may be a server, a PC, a tablet computer (PAD), a mobile phone, or the like.
The present application further provides a computer program product adapted to perform a program of initializing at least the following method steps when executed on a data processing device:
step S101, in the process of data migration of an old service system and a new service system, obtaining files of batch services to be processed to obtain batch files, wherein the batch services to be processed comprise a plurality of services to be processed;
step S102, splitting the batch files into a first batch of files and a second batch of files according to a migration state, where the migration state at least includes an un-migrated state and a migrated state, where client data corresponding to a service to be processed of the first batch of files is in the un-migrated state, and the client data corresponding to the service to be processed of the second batch of files is in the migrated state, where the un-migrated state is used to represent that the client data is not migrated from the old service system to the new service system, and the migrated state is used to represent that the client data is migrated from the old service system to the new service system;
step S103, sending the first batch of files to the old service system and sending the second batch of files to the new service system;
and step S104, receiving the result files after the batch processing of the first batch of files and the second batch of files, and merging the result files to obtain the processing result files.
In order to make the technical solutions of the present application more clearly understood and more obvious to those skilled in the art, the following description is given with reference to specific embodiments:
Examples
Fig. 10 shows a flowchart of the processing method of the batch service, and the processing method includes the following steps:
1) File splitting: the file is downloaded to the local machine according to the DFS file path sent by the batch gateway; the corresponding task is called to execute the file splitting; the migration state of each client is queried through the client routing location; the file is split into two files to be sent to the new and old service systems; the file header information is recalculated; after splitting is completed, the files are uploaded to DFS and the execution result is returned to the batch gateway.
2) File merging: the two result files returned by the new and old service systems are downloaded to the local machine according to the DFS file path sent by the batch gateway; the corresponding task is called to execute the file merging; the file header and file trailer information is recalculated; after merging, the file is uploaded to DFS and the execution result is returned to the batch gateway.
3) Flow extraction: after the flow extraction request from the batch gateway is received, a response is returned immediately and another thread is started to execute the flow extraction task: the online flow table is queried according to the time information sent by the gateway, the query result is written into a local file (paged query, written in multiple passes), the file is uploaded to DFS after writing is completed, and the batch gateway is notified of the processing result after the task succeeds or fails.
4) Batch configuration: this function performs different processing according to configured information such as the correspondence between transactions and operations and the number of failure retries.
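Such a batch configuration can be sketched as a simple lookup table. The transaction names, operation names, and retry counts below are made-up placeholders; the text only specifies that the mapping and retry counts are configurable.

```python
# Illustrative batch configuration; every key and value here is an assumption.
BATCH_CONFIG = {
    "fee_deduction": {"operation": "debit_account", "retry_times": 3},
    "interest_accrual": {"operation": "credit_interest", "retry_times": 1},
}

def dispatch(transaction):
    """Look up the configured operation and failure retry count."""
    cfg = BATCH_CONFIG[transaction]
    return cfg["operation"], cfg["retry_times"]
```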
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a computer-readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned computer-readable storage media comprise: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
From the above description, it can be seen that the above-described embodiments of the present application achieve the following technical effects:
1) the method for processing the batch services comprises the steps of firstly, acquiring files of the batch services to be processed in the data migration process of an old service system and a new service system to obtain the batch files, wherein the batch services to be processed comprise a plurality of services to be processed; then, splitting the batch files into a first batch of files and a second batch of files according to a migration state, wherein the migration state at least comprises an un-migrated state and a migrated state, client data corresponding to a service to be processed of the first batch of files is in the un-migrated state, the client data corresponding to the service to be processed of the second batch of files is in the migrated state, the un-migrated state is used for representing that the client data is not migrated from the old service system to the new service system, and the migrated state is used for representing that the client data is migrated from the old service system to the new service system; then, the first batch of files are sent to the old service system, and the second batch of files are sent to the new service system; and finally, receiving the result files after the batch processing of the first batch of files and the second batch of files, and merging the result files to obtain the processing result files. According to the method, the batch files are split into the first batch files and the second batch files according to the migration state, the service to be processed corresponding to the client data which is not migrated is processed in the old service system, the service to be processed corresponding to the migrated client data is processed in the new service system, and finally the result files obtained through processing are combined to obtain the processing result file.
2) The batch service processing device comprises an acquisition unit, a first processing unit, a sending unit, and a second processing unit. The acquisition unit is configured to acquire, during data migration between an old service system and a new service system, the files of the batch services to be processed to obtain batch files, wherein the batch services to be processed comprise a plurality of services to be processed. The first processing unit is configured to split the batch files into a first batch of files and a second batch of files according to a migration state, wherein the migration state at least comprises an un-migrated state and a migrated state, the client data corresponding to the services to be processed in the first batch of files is in the un-migrated state, the client data corresponding to the services to be processed in the second batch of files is in the migrated state, the un-migrated state indicates that the client data has not been migrated from the old service system to the new service system, and the migrated state indicates that the client data has been migrated. The sending unit is configured to send the first batch of files to the old service system and the second batch of files to the new service system. The second processing unit is configured to receive the result files produced by the batch processing of the first and second batches of files and to merge them into the processing result file.
By splitting the batch files according to the migration state, the device processes the services whose client data has not been migrated in the old service system and the services whose client data has been migrated in the new service system, and then merges the resulting files into a single processing result file.
3) The batch service processing platform comprises an old service system, a new service system, and a batch service processing device configured to execute any one of the above methods. The method splits the batch files into a first batch of files and a second batch of files according to the migration state, so that services whose client data has not been migrated are processed in the old service system and services whose client data has been migrated are processed in the new service system; the resulting files are then merged to obtain the processing result file.
The above description covers only preferred embodiments of the present application and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (10)

1. A method for processing a batch of traffic, comprising:
in the process of data migration of an old service system and a new service system, obtaining files of batch services to be processed to obtain batch files, wherein the batch services to be processed comprise a plurality of services to be processed;
splitting the batch files into a first batch of files and a second batch of files according to a migration state, wherein the migration state at least comprises an un-migrated state and a migrated state, client data corresponding to services to be processed of the first batch of files is in the un-migrated state, the client data corresponding to services to be processed of the second batch of files is in the migrated state, the un-migrated state is used for representing that the client data is not migrated from the old service system to the new service system, and the migrated state is used for representing that the client data is migrated from the old service system to the new service system;
sending the first batch of files to the old service system, and sending the second batch of files to the new service system;
and receiving the result files after the first batch of files and the second batch of files are processed in batch, and merging the result files to obtain a processing result file.
2. The method of claim 1, wherein receiving the result file after the batch processing of the first batch of files and the second batch of files comprises:
generating a result file message under the condition that a result file of a service to be processed is received, wherein result file messages correspond one-to-one to result files;
updating a corresponding flow record according to the result file message, so that processing state information of the flow record is updated from a processing state to a processed state, wherein the processing state is used for representing that the service to be processed is being processed, the processed state is used for representing that the service to be processed has been processed, and the flow record is generated by the old service system or the new service system according to the service to be processed;
inquiring the flow records of the services to be processed to generate a flow extraction file, wherein the flow extraction file at least comprises the processing state information of the flow records;
and determining that the result files of all the services to be processed have been received under the condition that all the processing state information in the flow extraction file is the processed state.
3. The method of claim 2, wherein receiving the result file after the batch processing of the first batch of files and the second batch of files further comprises:
and generating alarm information under the condition that, after the flow records of the services to be processed have been inquired a preset number of times, processing state information in the processing state still exists in the flow extraction file.
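Claims 2 and 3 describe tracking each service through a flow record that is updated from a processing state to a processed state by result file messages, with an alarm if records are still in the processing state after a preset number of queries. A minimal sketch under those assumptions (the class, field, and method names are invented for illustration):

```python
# Hypothetical processing-state labels for the flow records.
PROCESSING, PROCESSED = "processing", "processed"

class FlowTracker:
    """Track per-service flow records; alarm if still processing after max_polls queries."""

    def __init__(self, service_ids, max_polls=3):
        self.records = {sid: PROCESSING for sid in service_ids}
        self.max_polls = max_polls
        self.polls = 0
        self.alarms = []

    def on_result_message(self, service_id):
        # Result file messages correspond one-to-one to result files; each
        # message updates its flow record to the processed state.
        self.records[service_id] = PROCESSED

    def poll(self):
        """Query the flow records (the 'flow extraction file'); return True
        when every service's result file has been received."""
        self.polls += 1
        pending = [sid for sid, st in self.records.items() if st == PROCESSING]
        if not pending:
            return True
        if self.polls >= self.max_polls:
            self.alarms.append(f"still processing after {self.polls} polls: {pending}")
        return False
```

In a real platform the poll would read the flow extraction file produced from the database rather than an in-memory dict; the state machine is the same.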
4. The method of claim 1, wherein the migration state further comprises an absent state and a locked state, the absent state is used for representing that no migration state of the client data of the service to be processed can be queried, the locked state is used for representing that the client data of the service to be processed is being migrated, and splitting the batch files into a first batch of files and a second batch of files according to the migration state further comprises:
and synthesizing first service files in the batch files into the first batch of files, and synthesizing second service files in the batch files into the second batch of files, wherein the first service files are files of services to be processed whose client data is in the absent state or the un-migrated state, and the second service files are files of services to be processed whose client data is in the locked state or the migrated state.
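Claim 4 yields a simple four-state routing rule: absent or un-migrated data is still served by the old system, while locked (mid-migration) or migrated data is served by the new system. A hypothetical sketch (state names invented):

```python
# Hypothetical labels for the four migration states in claim 4.
ABSENT, UNMIGRATED, LOCKED, MIGRATED = "absent", "unmigrated", "locked", "migrated"

def route(state: str) -> str:
    """Route a service to 'old' or 'new' based on its client data's migration state."""
    if state in (ABSENT, UNMIGRATED):
        return "old"   # data not yet (or not known to be) on the new system
    if state in (LOCKED, MIGRATED):
        return "new"   # data is on, or currently moving to, the new system
    raise ValueError(f"unknown migration state: {state}")
```

Routing locked data to the new system is one reading of the claim; the design choice is that by the time the batch runs, in-flight migrations will have landed on the new side.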
5. The method of claim 1, wherein before splitting the batch files into a first batch of files and a second batch of files according to the migration state, the method further comprises:
inquiring the migration state of target client data, wherein the target client data is the client data corresponding to the services to be processed;
and marking the target client data that is in the un-migrated state, so that the marked target client data is not migrated before the processing of the batch services to be processed is completed.
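Claim 5's marking step can be read as a lightweight lock that defers migration of a client's data until the current batch completes. A hypothetical sketch (class and method names invented):

```python
class MigrationRegistry:
    """Mark un-migrated clients so their migration is deferred until the batch completes."""

    def __init__(self, states):
        self.states = dict(states)   # client_id -> migration state
        self.marked = set()

    def mark_for_batch(self, client_id):
        # Only un-migrated data needs marking; migrated data is already stable.
        if self.states.get(client_id) == "unmigrated":
            self.marked.add(client_id)

    def try_migrate(self, client_id):
        if client_id in self.marked:
            return False             # deferred: batch processing still in progress
        self.states[client_id] = "migrated"
        return True

    def batch_done(self):
        self.marked.clear()          # release the marks after the batch completes
```

This keeps the split stable: a client routed to the old system cannot be migrated mid-batch and end up with its data on neither side of the split.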
6. The method of claim 1, wherein before sending the first batch of files to the old service system and the second batch of files to the new service system, the method further comprises:
and converting the format of the first batch of files into a first message format and the format of the second batch of files into a second message format, wherein the first message format is a message format supported by the old service system and the second message format is a message format supported by the new service system.
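Claim 6's per-system conversion might look like the following, assuming (purely for illustration; the patent does not specify the formats) a fixed-width message for the old system and a JSON message for the new one:

```python
import json

def to_old_format(record: dict) -> str:
    """Hypothetical fixed-width message for the old system:
    10-char left-aligned client id, 12-char right-aligned amount."""
    return f"{record['client_id']:<10}{record['amount']:>12.2f}"

def to_new_format(record: dict) -> str:
    """Hypothetical JSON message for the new system."""
    return json.dumps(record, sort_keys=True)
```

Converting before dispatch keeps each legacy or replacement system unaware of the split: each side receives only messages in the format it already supports.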
7. An apparatus for processing a batch of traffic, comprising:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring files of batch services to be processed in the process of data migration of an old service system and a new service system to obtain the batch files, and the batch services to be processed comprise a plurality of services to be processed;
a first processing unit, configured to split the batch files into a first batch of files and a second batch of files according to a migration state, where the migration state at least includes an un-migrated state and a migrated state, where client data corresponding to a service to be processed of the first batch of files is in the un-migrated state, and the client data corresponding to the service to be processed of the second batch of files is in the migrated state, where the un-migrated state is used to represent that the client data is not migrated from the old service system to the new service system, and the migrated state is used to represent that the client data is migrated from the old service system to the new service system;
a sending unit, configured to send the first batch of files to the old service system, and send the second batch of files to the new service system;
and the second processing unit is used for receiving the result files after the first batch of files and the second batch of files are processed in batches and combining the result files to obtain the processing result files.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program performs the method of any one of claims 1 to 6.
9. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of any of claims 1 to 6.
10. A batch service processing platform comprising an old service system, a new service system and a processing means of a batch service, characterized in that the processing means of the batch service is adapted to perform the method of any of claims 1 to 6.
CN202111633593.1A 2021-12-28 2021-12-28 Batch service processing method and device and batch service processing platform Pending CN114298830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111633593.1A CN114298830A (en) 2021-12-28 2021-12-28 Batch service processing method and device and batch service processing platform

Publications (1)

Publication Number Publication Date
CN114298830A (en) 2022-04-08

Family

ID=80972322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111633593.1A Pending CN114298830A (en) 2021-12-28 2021-12-28 Batch service processing method and device and batch service processing platform

Country Status (1)

Country Link
CN (1) CN114298830A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114900531A (en) * 2022-04-29 2022-08-12 中国工商银行股份有限公司 Data synchronization method, device and system
CN114900531B (en) * 2022-04-29 2024-02-27 中国工商银行股份有限公司 Data synchronization method, device and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination