CN113626510B - Transaction verification method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113626510B
CN113626510B (granted from application CN202110919547.1A)
Authority
CN
China
Prior art keywords
transaction
record
check
target
records
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110919547.1A
Other languages
Chinese (zh)
Other versions
CN113626510A (en)
Inventor
王党团
张宇
盛沛
郭慧杰
钱丽雯
肖相如
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd
Priority to CN202110919547.1A
Publication of CN113626510A
Application granted
Publication of CN113626510B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25Integrating or interfacing systems involving database management systems
    • G06F16/254Extract, transform and load [ETL] procedures, e.g. ETL data flows in data warehouses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2282Tablespace storage structures; Management thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a transaction checking method, a transaction checking device, an electronic device, and a storage medium. A transaction information check table is first configured. First transaction data of a first system is then loaded, and for each first transaction record a corresponding first check record is generated in the transaction information check table, into which the primary key, the first system flag, and the first record compressed file of that first transaction record are inserted. Second transaction data of a second system is loaded in the same way, and for each second transaction record a corresponding second check record is generated in the transaction information check table, into which the primary key, the second system flag, and the second record compressed file of that second transaction record are inserted. Because a first check record and a second check record sharing the same primary key are merged into a single record, any check record that remains unmerged can be identified as abnormal, and an abnormal file is exported accordingly.

Description

Transaction verification method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of software technologies, and in particular, to a transaction verification method, a transaction verification device, an electronic device, and a storage medium.
Background
As the last line of defense for transaction security across systems, transaction checking verifies the consistency of the two parties' transaction information and outputs an error file as key evidence for reconciliation and supplementary entry, thereby protecting the funds of banks and their customers.
In the traditional transaction checking approach, the transaction data of the two systems are loaded into two separate data tables, and a primary key and an index are then created for each table. Next, a batch of records is fetched remotely from the first data table in primary-key order, a matching batch is fetched from the second data table, and the primary key and other auxiliary fields are compared; inconsistent records are output to an exception table. All records of both tables are processed in a loop, and the abnormal records are finally written to a file.
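For contrast, the loop described above can be sketched as follows. This is an illustrative in-memory simulation, not code from the patent; a real system would fetch the batches remotely from two database tables in primary-key order.

```python
# Illustrative sketch of the traditional two-table comparison: walk both
# tables in primary-key order and collect any key whose auxiliary fields
# differ or that exists on only one side.
def traditional_check(table_a, table_b):
    """table_a / table_b: dicts mapping primary key -> auxiliary fields."""
    exceptions = []
    for key in sorted(set(table_a) | set(table_b)):
        rec_a = table_a.get(key)
        rec_b = table_b.get(key)
        if rec_a is None or rec_b is None or rec_a != rec_b:
            exceptions.append((key, rec_a, rec_b))
    return exceptions
```

Even in this toy form, the costs the patent criticizes are visible: both tables must be fully loaded and indexed before any comparison can begin.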
Clearly, when large-scale data must be processed, every step (loading, indexing, batch fetching, batch checking, and batch output) is extremely inefficient.
Disclosure of Invention
In view of the above, the present invention provides a transaction checking method, an apparatus, an electronic device, and a storage medium, with the following technical scheme:
in one aspect, the present invention provides a transaction verification method, the method comprising:
obtaining a configured transaction information check table, wherein a plurality of columns in the transaction information check table correspond respectively to a primary key, a first system flag, a first record compressed file, a second system flag, and a second record compressed file;
loading first transaction data of a first system and inserting the first transaction data into the transaction information check table, wherein first transaction records of the first transaction data correspond one-to-one with first check records of the transaction information check table, and the primary key, the first system flag, and the first record compressed file of each first transaction record are inserted into the corresponding first check record;
loading second transaction data of a second system and inserting the second transaction data into the transaction information check table, wherein second transaction records of the second transaction data correspond one-to-one with second check records of the transaction information check table, the primary key, the second system flag, and the second record compressed file of each second transaction record are inserted into the corresponding second check record, and a first check record and a second check record having the same primary key are merged into one check record;
and exporting an abnormal file according to the check records not merged in the transaction information check table, wherein the abnormal file indicates the transaction records corresponding to the unmerged check records.
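The claimed steps can be sketched end to end. This is a hedged in-memory illustration, not the patent's implementation: a plain Python dict stands in for the HBase check table named later in the description, zlib stands in for the ZIP compression, and the column names are hypothetical.

```python
# Sketch of the single-table scheme: loading a record "inserts" its system
# flag and compressed payload into the row for its primary key, so rows for
# matching keys merge automatically; rows missing either flag are abnormal.
import zlib

def load(check_table, primary_key, system, record_bytes):
    row = check_table.setdefault(primary_key, {})
    row[f"flag{system}"] = 1                           # first/second system flag
    row[f"zip{system}"] = zlib.compress(record_bytes)  # record compressed file

def export_abnormal(check_table):
    abnormal = []
    for key, row in check_table.items():
        if not ("flag1" in row and "flag2" in row):    # unmerged row
            system = 1 if "flag1" in row else 2
            abnormal.append((key, system, zlib.decompress(row[f"zip{system}"])))
    return abnormal
```

Note that no separate comparison pass exists: the merge happens as a side effect of loading, which is the crux of the claimed efficiency gain.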
Preferably, the loading of the first transaction data of the first system and inserting of the first transaction data into the transaction information check table includes:
generating a plurality of first load jobs, the plurality of first load jobs being processed in distributed parallel to achieve the following:
reading the first transaction data in multiple task blocks;
parsing the key fields of the target first transaction record currently read from the first transaction data and splicing them into a primary key;
setting the first system flag of the target first transaction record;
compressing the target first transaction record to obtain a first record compressed file;
and allocating a corresponding target first check record for the target first transaction record in the transaction information check table, and inserting the primary key, the first system flag, and the first record compressed file of the target first transaction record into the target first check record.
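A single load-job step as described above (parse key fields, splice the primary key, set the flag, compress the record) might look like the following sketch. The field names and the separator characters are illustrative assumptions, not from the patent.

```python
# Sketch of per-record load-job processing: splice business key fields into
# the primary key, then build the check-record cells (flag + zlib-compressed
# full record). "date"/"serial_no"/"amount" are hypothetical field names.
import zlib

KEY_FIELDS = ("date", "serial_no")  # assumed business-specified key fields

def build_check_record(record, system_no):
    primary_key = "|".join(record[f] for f in KEY_FIELDS)  # splice key fields
    raw = ";".join(f"{k}={v}" for k, v in sorted(record.items())).encode()
    return primary_key, {f"flag{system_no}": 1,
                         f"zip{system_no}": zlib.compress(raw)}
```

Because both systems splice the same key fields in the same order, records describing the same transaction land on the same primary key regardless of which load job processes them.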
Preferably, the loading of the second transaction data of the second system and inserting of the second transaction data into the transaction information check table includes:
generating a plurality of second load jobs, the plurality of second load jobs being processed in distributed parallel to achieve the following:
reading the second transaction data in multiple task blocks;
parsing the key fields of the target second transaction record currently read from the second transaction data and splicing them into a primary key;
setting the second system flag of the target second transaction record;
compressing the target second transaction record to obtain a second record compressed file;
and allocating a corresponding target second check record for the target second transaction record in the transaction information check table, and inserting the primary key, the second system flag, and the second record compressed file of the target second transaction record into the target second check record.
Preferably, the exporting of the abnormal file according to the check records not merged in the transaction information check table includes:
generating a plurality of export jobs, the plurality of export jobs being processed in distributed parallel to achieve the following:
reading the transaction information check table in multiple tasks;
judging whether both the first system flag and the second system flag have been inserted into the target check record currently read from the transaction information check table;
and if not, determining the target check record to be an abnormal check record, decompressing the first/second record compressed file inserted in the target check record, and writing the decompressed result, together with the first/second system flag inserted in the target check record, into the abnormal file.
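The export-job logic above can be sketched as follows: rows lacking one of the two system flags are decompressed and emitted as abnormal lines. The tab-separated line format is an assumption for illustration; the patent does not specify the file layout.

```python
# Sketch of an export job: scan check rows, treat any row missing one of
# the two system flags as abnormal, decompress its stored payload, and
# return the lines to be written to the abnormal file.
import zlib

def export_job(rows):
    """rows: iterable of (primary_key, row-dict) pairs from the check table."""
    lines = []
    for key, row in rows:
        has1, has2 = "flag1" in row, "flag2" in row
        if has1 and has2:
            continue                       # merged record: both sides present
        system = 1 if has1 else 2
        payload = zlib.decompress(row[f"zip{system}"]).decode()
        lines.append(f"{system}\t{key}\t{payload}")
    return lines
```

In the patent's architecture this scan is sharded across distributed export jobs; the per-row test is identical in each shard.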
Another aspect of the present invention provides a transaction verification apparatus, the apparatus comprising:
a configuration module, configured to obtain a configured transaction information check table, wherein a plurality of columns in the transaction information check table correspond respectively to a primary key, a first system flag, a first record compressed file, a second system flag, and a second record compressed file;
a loading module, configured to load first transaction data of a first system and insert the first transaction data into the transaction information check table, wherein first transaction records of the first transaction data correspond one-to-one with first check records of the transaction information check table, and the primary key, the first system flag, and the first record compressed file of each first transaction record are inserted into the corresponding first check record; and to load second transaction data of a second system and insert the second transaction data into the transaction information check table, wherein second transaction records of the second transaction data correspond one-to-one with second check records of the transaction information check table, the primary key, the second system flag, and the second record compressed file of each second transaction record are inserted into the corresponding second check record, and a first check record and a second check record having the same primary key are merged into one check record;
and an export module, configured to export an abnormal file according to the check records not merged in the transaction information check table, wherein the abnormal file indicates the transaction records corresponding to the unmerged check records.
Preferably, the loading module, in loading the first transaction data of the first system and inserting the first transaction data into the transaction information check table, is specifically configured to:
generate a plurality of first load jobs, the plurality of first load jobs being processed in distributed parallel to achieve the following:
reading the first transaction data in multiple task blocks; parsing the key fields of the target first transaction record currently read from the first transaction data and splicing them into a primary key; setting the first system flag of the target first transaction record; compressing the target first transaction record to obtain a first record compressed file; and allocating a corresponding target first check record for the target first transaction record in the transaction information check table, and inserting the primary key, the first system flag, and the first record compressed file of the target first transaction record into the target first check record.
Preferably, the loading module, in loading the second transaction data of the second system and inserting the second transaction data into the transaction information check table, is specifically configured to:
generate a plurality of second load jobs, the plurality of second load jobs being processed in distributed parallel to achieve the following:
reading the second transaction data in multiple task blocks; parsing the key fields of the target second transaction record currently read from the second transaction data and splicing them into a primary key; setting the second system flag of the target second transaction record; compressing the target second transaction record to obtain a second record compressed file; and allocating a corresponding target second check record for the target second transaction record in the transaction information check table, and inserting the primary key, the second system flag, and the second record compressed file of the target second transaction record into the target second check record.
Preferably, the export module, in exporting an abnormal file according to the check records not merged in the transaction information check table, is specifically configured to:
generate a plurality of export jobs, the plurality of export jobs being processed in distributed parallel to achieve the following:
reading the transaction information check table in multiple tasks; judging whether both the first system flag and the second system flag have been inserted into the target check record currently read from the transaction information check table; and if not, determining the target check record to be an abnormal check record, decompressing the first/second record compressed file inserted in the target check record, and writing the decompressed result, together with the first/second system flag inserted in the target check record, into the abnormal file.
Another aspect of the present invention provides an electronic device comprising at least one memory and at least one processor, wherein the memory stores a program and the processor invokes the program stored in the memory to implement the transaction verification method.
Another aspect of the invention provides a storage medium having stored therein computer-executable instructions for performing the transaction verification method.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a transaction checking method, a transaction checking device, an electronic device, and a storage medium. A transaction information check table is first configured. First transaction data of a first system is then loaded, and for each first transaction record a corresponding first check record is generated in the transaction information check table, into which the primary key, the first system flag, and the first record compressed file of that first transaction record are inserted. Second transaction data of a second system is loaded in the same way, and for each second transaction record a corresponding second check record is generated in the transaction information check table, into which the primary key, the second system flag, and the second record compressed file of that second transaction record are inserted. Because a first check record and a second check record sharing the same primary key are merged into a single record, any check record that remains unmerged can be identified as abnormal, and the abnormal file is exported accordingly.
The invention designs a single primary-key table (the transaction information check table) to hold the transaction data of both systems, so that transaction record checking completes automatically while the data is loaded efficiently. This avoids the traditional pattern in which the primary key and index are created separately after the data is loaded into the data tables; additions, deletions, and modifications of table columns and rows are uniformly reduced to insert operations, avoiding the slow bulk processing or row migration caused by updates in a traditional database. In addition, compressing each full record for storage greatly reduces the overall data volume: only the minimal set of effective data is streamed and processed during transmission, storage, and access, which effectively overcomes the inherent disk-performance and network-bandwidth bottlenecks of massive data and greatly improves overall performance.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only embodiments of the present invention, and that a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a diagram of a big data cloud computing overall architecture provided by an embodiment of the present invention;
FIG. 2 is a flow chart of a transaction verification method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a transaction information check table according to an embodiment of the present invention;
FIG. 4 is a data structure configuration example of a primary key according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a full-flow operation provided in an embodiment of the present invention;
FIG. 6 is a flowchart of distributed parallel processing for A-job provided in an embodiment of the present invention;
FIG. 7 is a flowchart of a distributed parallel processing of B-job according to an embodiment of the present invention;
FIG. 8 is a flowchart of distributed parallel processing for C-job provided by an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a transaction verification device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
The implementation currently adopted by transaction checking systems is a traditional relational database plus a single structured stream-processing program. It cannot provide mass storage for large-scale data or dynamic parallel computation for applications, so processing performance and scalability are limited, which constrains future business growth.
In the invention, a big data cloud platform can be used as the base operating environment: x86 servers replace IBM minicomputers, the Linux operating system replaces AIX, the HBase database replaces the Oracle database, the Hadoop/Spark computing framework replaces the existing hand-written procedural application, and the object-oriented language Java replaces the procedure-oriented language C. With a new data model and a new business process flow, the transaction checking functions of the existing system are fully covered. Through the data cloud and the computing cloud, the business processing capacity and scalability of the system are improved by leaps and bounds, guaranteeing continuous high-speed business growth.
For the core business scenario of the transaction checking system, the invention starts from the reality of checking massive business data, analyzes the problem in light of the application scenario, and selects and designs a solution in which the system layer and the application layer cooperate and fuse with each other, as follows:
1) System layer:
For the massive data and massive computation of bank-wide online business that the transaction checking system faces, the operating system abandons the traditional, heavyweight AIX system in favor of the light and open Linux system. As the operating system of the x86 platform, it has the most widely available free technical resources; a streamlined lightweight operating system runs more lightly and quickly and can start containerized applications in seconds, a great improvement in technical resources and performance over the closed, traditional AIX system.
As for physical hosts, the invention abandons the traditional, expensive mutually-standby dual IBM 780 hosts and adopts multiple inexpensive x86 server nodes. Multi-node hot-standby operation greatly improves system safety compared with traditional dual-machine hot standby. The total cost of seven such PC servers is no more than ten thousand yuan, far lower than that of ten-million-yuan-class IBM servers, so equipment cost is greatly reduced.
As for the database, the invention abandons the traditional dual-node relational database Oracle and adopts the NoSQL distributed database HBase. Based on the column-oriented BigTable model, it supports massive data of trillions of rows and columns with dynamic node expansion, and supports only high-speed insert, delete, and query operations with the ROWKEY as the sole index. This fits the transaction checking scenario well and removes the traditional relational database's limits on primary key size, index size and count, and row and column sizes, as well as its inability to store and access massive data.
As for data computation, the invention abandons computation confined to a single operating system and adopts the distributed cloud computing frameworks MapReduce/Spark, which integrate the computing power of many machines based on container and in-memory technology, dynamically allocate computing resources on demand, meet the supercomputing requirements of massive data with dynamic node expansion, and overcome the limits of traditional single-machine computation.
As for file storage, the invention abandons file storage management that depends on a single operating system and adopts the distributed cloud storage framework Hadoop for file storage management. Data files are stored in shards across multiple nodes, supporting high-speed storage and access of PB-scale massive data files with dynamic node expansion, removing the size and capacity limits of traditional file systems.
See the overall big data cloud computing architecture diagram shown in fig. 1. Based on the system-layer design, the invention divides the overall architecture into a first data transmission layer, a second data storage layer, a third data management layer, a fourth computing framework layer, and a sixth service scheduling layer; the fifth layer, the application service layer, is described later. In addition, a seventh auxiliary tool layer (not shown in fig. 1) may be provided. Each layer is designed as follows:
The data transmission layer obtains the daily transaction data files of multiple online business systems in server or client mode and uploads them to the cloud storage platform HDFS. Four alternative schemes can realize this function:
1. Keep the old interfaces unchanged: still use the FTP SERVER service provided by the operating system to receive the transaction data files of external online systems, then upload the files to the cloud storage platform via script commands.
2. Use the open-source HDFS-OVER-FTP component, which encapsulates the FTP SERVER function and the cloud-upload function, so that external online systems can upload transaction data files directly to the cloud storage platform via FTP put; this scheme combines the steps of the first scheme.
3. Use the open-source DATA-X component, which encapsulates the FTP CLIENT function and the cloud-upload function, actively getting the transaction data files from the external online systems and then uploading them to the cloud storage platform. This scheme changes the existing mode: passive file reception becomes active acquisition, and the server side becomes the client side.
4. Use the independently developed FTP-TO-HDFS, which reworks the FTP source code so that the data stream is uploaded directly to the cloud storage platform without landing on local disk; it is otherwise similar to the second scheme.
The data storage layer adopts the mainstream open-source distributed big data cloud storage component HDFS, which supports high-speed storage and access of massive data as well as hot online node expansion and version upgrades.
The data management layer adopts a combination of the mainstream open-source distributed big data NoSQL database component HBase and the traditional Oracle. HBase is a column-oriented database that supports unlimited expansion of data record columns and rows, provides real-time high-speed insertion and query of massive data, and supports hot online node expansion and version upgrades. Oracle is retained as a traditional relational database to stay compatible with necessary existing functions.
The computing framework layer adopts the mainstream open-source distributed cloud computing components MapReduce and Spark. Both are master-slave distributed frameworks that can dynamically start the computing power of multiple machine nodes according to system resources and task conditions, completing tasks through efficient multi-machine cooperation. MapReduce is a stable distributed computing framework; Spark is an efficient, memory-oriented distributed computing framework. They provide applications with simple, efficient multi-task parallel processing capability and support hot online node expansion and version upgrades.
The service scheduling layer's main function is to schedule batch jobs accurately and efficiently according to conditions such as time, files, and parameters, following the dependency relations of a constructed directed acyclic graph. Two options realize this function:
1. Use OOZIE, the mainstream job scheduling component of the big data platform, which provides XML-based configuration and drives jobs to run on the big data platform in a preset order of dependencies.
2. Use the existing system's self-developed scheduling application to complete job scheduling, which keeps both migration cost and technology-transition cost low.
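The directed-acyclic-graph job scheduling described above can be illustrated with Kahn's topological sort. This is a generic sketch, not the OOZIE component or the self-developed scheduler; the job names are hypothetical.

```python
# Sketch of DAG-driven batch scheduling: produce a run order in which every
# job starts only after all jobs it depends on (Kahn's algorithm).
from collections import deque

def schedule(deps):
    """deps: dict mapping job -> list of jobs it depends on."""
    jobs = set(deps) | {d for ds in deps.values() for d in ds}
    indeg = {j: 0 for j in jobs}
    children = {j: [] for j in jobs}
    for job, ds in deps.items():
        for d in ds:
            indeg[job] += 1
            children[d].append(job)
    ready = deque(sorted(j for j in jobs if indeg[j] == 0))
    order = []
    while ready:
        j = ready.popleft()
        order.append(j)
        for c in children[j]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    if len(order) != len(jobs):
        raise ValueError("dependency cycle: not a DAG")
    return order
```

In the patent's flow, for example, both load jobs would precede the check, and the export job would come last.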
The auxiliary tool layer mainly comprises the cloud computing scheduler YARN and the coordination manager ZooKeeper. YARN dynamically schedules the computing resources of MapReduce and Spark according to CPU and memory. ZooKeeper coordinates and manages multiple active and standby nodes to prevent single points of failure.
2) Application layer:
the core of an application is its data model. With a simplistic data model, an application cannot perform even with strong cloud computing and cloud storage capacity when facing massive business data and computation scenarios: processing performance stays low and timeliness cannot be met. The data model, and the process flow built around it, are the keys of the design.
Analyze the objects, business rules, and outputs involved in the transaction checking scenario. The business objects are the two systems and the day's business transaction data they generate and transmit. The business rule is to match the transaction records of the two systems against each other by the transaction's key fields and check whether the transaction elements are consistent. The output is the single-sided transaction data unique to either system, or transaction records whose data is inconsistent on the two sides. The transaction checking scenario thus has three core keywords: "two objects", "mutual matching", and "output exceptions".
Accordingly, the data model is designed as a single table (i.e., the transaction information check table) into which the transaction data of both systems (the "two objects") are loaded and checked simultaneously; the primary key is spliced from the key fields specified by the business (enabling the "mutual matching"). In addition, each system flag field of the transaction information check table takes only the single value 1, and each record compressed file field stores the ZIP package obtained by compressing the corresponding system's transaction record. The process flow around this data model, i.e., the transaction checking scheme of the invention, is described in detail later.
Continuing with the overall big data cloud computing architecture diagram shown in fig. 1: concerning the application layer, the invention further divides the overall architecture to include a fifth layer, the application service layer, designed as follows:
The application service layer consists of service applications developed in Java, which invoke the distributed computing framework and access cloud storage files, cloud database files and other resources. In the invention, the MapReduce jobs that load the data files and the Spark jobs that output abnormal transactions satisfy both existing and future business functions.
Referring to the flowchart of the method shown in fig. 2, an embodiment of the present invention provides a transaction verification method, which is applied to an application service layer, and includes the following steps:
S10, acquiring a configured transaction information check table, wherein the columns of the transaction information check table correspond respectively to a primary key, a first system mark, a first record compressed file, a second system mark and a second record compressed file.
In the embodiment of the invention, a transaction information check table is configured around the key fields of the two systems. The table is configured with five columns, which are, in order: primary key, first system mark, first record compressed file, second system mark and second record compressed file. The primary key is the concatenation of the key fields specified by the business; the first system mark and first record compressed file are the system mark and record compressed file of the first system being checked, and the second system mark and second record compressed file are those of the second system being checked. The first and second system marks take the value 1, and the first and second record compressed files are the ZIP packets produced by compressing one transaction record of the corresponding system.
See the schematic configuration of the transaction information check table shown in fig. 3, where "sequence" denotes the sequence number, "NAME" the field name, "TYPE" the type, "null" whether the value may be null, and "comment" the remark. Taking sequence number 0 as an example, this row configures the first column, the "primary key": its name is "ROWKEY", its type is "byte", its length is 100 bytes, and its value may be null.
See the data structure configuration example of the primary key shown in fig. 4. The key fields specified by the business are configured as serial number, card number, amount, terminal number and transaction code, and the data structure of the primary key is the concatenation of these key fields. Taking sequence number 0 as an example, this row configures the "serial number": its name is "TZFREN", its type is "byte", its length is 12 bytes, and its value cannot be null.
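The key-field concatenation just described can be sketched as a fixed-width string splice. In this illustrative Java example, only the 12-byte serial-number width comes from the Fig. 4 example; the other field widths and all names are assumptions:

```java
// Illustrative primary-key splicing: fixed-width concatenation of the five
// business key fields (serial number, card number, amount, terminal number,
// transaction code). Only the 12-byte serial-number width is taken from
// Fig. 4; the remaining widths are assumed for illustration.
class RowKeyBuilder {
    static String pad(String v, int width) {
        if (v.length() > width) throw new IllegalArgumentException("field too long: " + v);
        StringBuilder sb = new StringBuilder(v);
        while (sb.length() < width) sb.append(' '); // right-pad to fixed width
        return sb.toString();
    }

    static String build(String serial, String card, String amount,
                        String terminal, String txnCode) {
        return pad(serial, 12) + pad(card, 19) + pad(amount, 12)
             + pad(terminal, 8) + pad(txnCode, 6);
    }
}
```

Because the widths are fixed, records from both systems that describe the same transaction always produce byte-identical primary keys, which is what makes the single-table merge possible.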
S20, loading first transaction data of a first system and inserting the first transaction data into the transaction information check table, wherein the first transaction records of the first transaction data correspond one-to-one with first check records of the transaction information check table, and each first check record holds the primary key, first system mark and first record compressed file of the corresponding first transaction record.
In the embodiment of the invention, the transaction records of the first system's transaction data (i.e. the first transaction data) are loaded one by one, and each transaction record is inserted into the transaction information check table to generate a check record (i.e. a first check record). Each first check record is filled with the primary key, first system mark and first record compressed file of the corresponding transaction record. Continuing with the primary key shown in fig. 4 as an example: the primary key inserted into the first check record is the concatenation of the transaction elements of the key fields serial number, card number, amount, terminal number and transaction code in the transaction record; the first system mark inserted into the first check record is 1; and the first record compressed file inserted into the first check record is the ZIP packet produced by compressing the transaction record.
In a specific implementation, when the transaction data of the two systems being checked is loaded, the MapReduce distributed computing framework can be used directly. As each transaction record is read, the required transaction elements of the key fields are parsed out of the many fields and concatenated into a primary key; the whole transaction record is then compressed into a ZIP packet and inserted into the transaction information check table, generating a check record, and the corresponding system mark is assigned the value "1".
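The per-record ZIP compression, and the matching decompression used later during anomaly export, can be sketched with the JDK's java.util.zip classes. This is an illustrative single-entry-ZIP round trip under assumed names, not the patent's exact implementation:

```java
import java.io.*;
import java.util.zip.*;

// Illustrative compression of one whole transaction record into a single-entry
// ZIP packet, and the matching decompression used when exporting anomalies.
class RecordZip {
    static byte[] compress(String record) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ZipOutputStream zos = new ZipOutputStream(bos)) {
                zos.putNextEntry(new ZipEntry("record"));
                zos.write(record.getBytes("UTF-8"));
                zos.closeEntry();
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static String decompress(byte[] zipPacket) {
        try (ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(zipPacket))) {
            zis.getNextEntry();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            for (int n; (n = zis.read(buf)) > 0; ) out.write(buf, 0, n);
            return out.toString("UTF-8");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Storing the whole record as one compressed blob means only the key fields ever need to be parsed at load time; everything else rides along inside the packet until an anomaly forces it to be restored.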
Thus, step S20 "load the first transaction data of the first system, and insert the first transaction data into the transaction information check table" in the embodiment of the present invention may be the following steps:
generating a plurality of first load jobs, the plurality of first load jobs being processed in distributed parallel to implement:
Multiple tasks read the first transaction data in blocks; for the target first transaction record currently read from the first transaction data, the key fields of the target first transaction record are parsed and concatenated into a primary key; the first system mark of the target first transaction record is set; the target first transaction record is compressed to obtain a first record compressed file; and a corresponding target first check record is allocated for the target first transaction record in the transaction information check table, into which the primary key, first system mark and first record compressed file of the target first transaction record are inserted.
See the full-flow job schematic shown in fig. 5. The first system is system A and the second system is system B. To process the whole transaction checking flow, the transaction checking system submits three batches of jobs: two batches of loading jobs and one batch of export jobs. Specifically, after the A-B transaction verification scenario begins, the batch loading job of system A (the A job) completes the loading of system A's transaction data; next, the batch loading job of system B (the B job) completes the loading of system B's transaction data; finally, the batch export job (the C job) completes the output of the abnormal files of systems A and B.
See the a-job distributed parallel processing flowchart shown in fig. 6. Continuing to explain by taking the first system as an A system, generating a batch of A jobs in the process of inserting transaction data of the loading A system into a transaction information check table, and performing distributed parallel processing on the batch of A jobs to realize:
Multiple tasks read system A's transaction data in blocks; the key fields of the currently read transaction record are parsed, and the transaction elements of the key fields are concatenated into a primary key; the A-system mark of the transaction record is assigned the value "1"; the transaction record is compressed to obtain a ZIP-format data packet; and a check record is allocated for the transaction record, into which the record's primary key, A-system mark and ZIP-format data packet are inserted.
That is, an A job corresponds to a task, and the task corresponds to one block of system A's transaction data. The block consists of multiple transaction records, so the A job reads the transaction records under its block in sequence, and for each transaction record currently read, inserts the record's primary key, A-system mark and ZIP-format packet into a check record.
S30, loading second transaction data of a second system and inserting the second transaction data into the transaction information check table, wherein the second transaction records of the second transaction data correspond one-to-one with second check records of the transaction information check table, each second check record holds the primary key, second system mark and second record compressed file of the corresponding second transaction record, and any pair of first and second check records sharing the same primary key is merged into one check record.
In the embodiment of the invention, the transaction records of the second system's transaction data (i.e. the second transaction data) are loaded one by one, and each transaction record is inserted into the transaction information check table to generate a check record (i.e. a second check record). The primary key inserted into the second check record is the concatenation of the transaction elements of the key fields serial number, card number, amount, terminal number and transaction code in the transaction record; the second system mark inserted into the second check record is 1; and the second record compressed file inserted into the second check record is the ZIP packet produced by compressing the transaction record. Where a first check record with the same primary key has already been inserted into the check table, the first and second check records are merged: the merged check record holds the primary key, first system mark, first record compressed file, second system mark and second record compressed file.
In a specific implementation, when the transaction data of the two systems being checked is loaded, the MapReduce distributed computing framework can be used directly. As each transaction record is read, the required transaction elements of the key fields are parsed out of the many fields and concatenated into a primary key; the whole transaction record is then compressed into a ZIP packet and inserted into the transaction information check table, generating a check record, and the corresponding system mark is assigned the value "1".
Thus, step S30 "load the second transaction data of the second system, and insert the second transaction data into the transaction information check table" in the embodiment of the present invention may be the following steps:
generating a plurality of second load jobs, the plurality of second load jobs being processed in distributed parallel to implement:
Multiple tasks read the second transaction data in blocks; for the target second transaction record currently read from the second transaction data, the key fields of the target second transaction record are parsed and concatenated into a primary key; the second system mark of the target second transaction record is set; the target second transaction record is compressed to obtain a second record compressed file; and a target second check record is allocated for the target second transaction record in the transaction information check table, into which the primary key, second system mark and second record compressed file of the target second transaction record are inserted.
As shown in FIG. 5, the first system is system A, the second system is system B, and the loading of system B's transaction data is completed by the batch of B jobs. See the B-job distributed parallel processing flowchart shown in fig. 7. In the process of loading system B's transaction data into the transaction information check table, a batch of B jobs is generated and processed in distributed parallel to implement:
Multiple tasks read system B's transaction data in blocks; the key fields of the currently read transaction record are parsed, and the transaction elements of the key fields are concatenated into a primary key; the B-system mark of the transaction record is assigned the value "1"; the transaction record is compressed to obtain a ZIP-format data packet; and a check record is allocated for the transaction record, into which the record's primary key, B-system mark and ZIP-format data packet are inserted.
That is, a B job corresponds to a task, and the task corresponds to one block of system B's transaction data. The block consists of multiple transaction records, so the B job reads the transaction records under its block in sequence, and for each transaction record currently read, inserts the record's primary key, B-system mark and ZIP-format packet into a check record.
In addition, when allocating a check record to a transaction record of system B, it is first determined whether the check table already contains a check record, allocated to a transaction record of system A, with the same primary key. If so, the B-system mark and ZIP-format packet of system B's transaction record are inserted into that existing check record; if not, a new check record is allocated to system B's transaction record, and its primary key, B-system mark and ZIP-format packet are inserted into the newly allocated check record.
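The allocate-or-merge logic above can be sketched in Java with an in-memory map standing in for the NoSQL check table. All names here are hypothetical, and the real system would perform these inserts through the distributed framework rather than a local HashMap:

```java
import java.util.*;

// Illustrative single-table load-and-merge: side A records are inserted first;
// when a side B record arrives with the same primary key, it merges into the
// existing check record instead of creating a new row.
class CheckTable {
    static class Row {
        String flagA, flagB; // system marks, "1" once the side is loaded
        byte[] zipA, zipB;   // per-side ZIP packets of the whole record
    }

    final Map<String, Row> rows = new HashMap<>();

    void loadA(String rowKey, byte[] zipPacket) {
        Row r = rows.computeIfAbsent(rowKey, k -> new Row());
        r.flagA = "1";
        r.zipA = zipPacket;
    }

    void loadB(String rowKey, byte[] zipPacket) {
        // Reuse the row inserted by side A when the primary keys match;
        // otherwise allocate a new check record for the B-only transaction.
        Row r = rows.computeIfAbsent(rowKey, k -> new Row());
        r.flagB = "1";
        r.zipB = zipPacket;
    }
}
```

After both loads, a row with both flags set is a matched transaction; a row with only one flag set is the unilateral case the export step reports.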
S40, exporting an abnormal file from the check records that were not merged in the transaction information check table, the abnormal file indicating the transaction records corresponding to the non-merged check records.
In the embodiment of the invention, for the same transaction the first and second systems share the same transaction elements under the same key fields, so the primary keys are identical, the related check records in the transaction information check table can be merged, and none of the five fields is empty.
Otherwise, if the first system mark and first record compressed file of a check record are not empty while the second system mark and second record compressed file are empty, the transaction record corresponding to the first record compressed file is unilateral transaction data unique to the first system; similarly, if the first system mark and first record compressed file are empty while the second system mark and second record compressed file are not, the transaction record corresponding to the second record compressed file is unilateral transaction data unique to the second system. In these cases an abnormal file must be output.
In a specific implementation, when the transaction verification abnormal file is output, the Spark distributed computing framework can be used to read the check records in the transaction information check table in parallel. For each check record read, it is judged whether all 5 fields of the check record hold values. If so, the check record is normal, the transaction is normal, and the record is skipped without processing; if not, the check record is abnormal: the inserted first/second record compressed file is decompressed to restore the original first/second transaction record, and the inserted first/second system mark together with the first/second transaction record is written to the abnormal file.
Thus, step S40 "export an abnormal file from the check records that were not merged in the transaction information check table" may employ the following steps:
generating a plurality of export jobs, the plurality of export jobs being processed in distributed parallel to implement:
Multiple tasks read the transaction information check table; for the target check record currently read from the transaction information check table, it is judged whether both the first system mark and the second system mark have been inserted; if not, the target check record is determined to be an abnormal check record, the first/second record compressed file inserted into the target check record is decompressed, and the decompression result together with the inserted first/second system mark is written to the abnormal file.
As shown in FIG. 5, the first system is the A system, the second system is the B system, and the abnormal file output of the A system and the B system is completed by the batch C operation. See the C-job distributed parallel processing flowchart shown in fig. 8. In the process of exporting the abnormal file, generating a batch of C jobs, and performing distributed parallel processing on the batch of C jobs to realize:
Multiple tasks read the transaction information check table; for the check record currently read, it is judged whether both an A-system mark and a B-system mark have been inserted. If so, the check record is determined to be normal, is skipped, and reading continues; if not, the check record is determined to be abnormal, the record compressed file inserted into it (belonging to either system A or system B) is decompressed to restore the transaction record in its original format (a system A or system B transaction record), and the inserted system mark (A or B) together with the original-format transaction record is written to the abnormal file.
Based on the descriptions in steps S20 and S30, the record compressed file and the system mark inserted in an abnormal check record belong to the same system, either system A or system B.
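The export decision above can be sketched in Java as a scan that skips fully merged check records and emits one-sided ones. For brevity this illustrative example stores records uncompressed, omitting the ZIP decompression step, and all names are hypothetical:

```java
import java.util.*;

// Illustrative export job: scan the check table, skip fully merged rows, and
// for one-sided rows emit the system mark plus the restored record as one
// anomaly line. Records are kept uncompressed here for brevity.
class AnomalyExport {
    static class Row {
        String flagA, flagB;     // system marks ("1" once the side is loaded)
        String recordA, recordB; // restored transaction records per side
    }

    static List<String> run(Collection<Row> table) {
        List<String> anomalies = new ArrayList<>();
        for (Row r : table) {
            boolean a = "1".equals(r.flagA), b = "1".equals(r.flagB);
            if (a && b) continue;                   // matched on both sides: normal, skip
            if (a) anomalies.add("A|" + r.recordA); // unilateral A-side transaction
            else   anomalies.add("B|" + r.recordB); // unilateral B-side transaction
        }
        return anomalies;
    }
}
```

Because the mark and the packet of a one-sided row always come from the same system, the export never needs to guess which side a record belongs to.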
In summary, the invention has the following advantages:
1) Cloud storage migration: data storage is moved from file management on the original minicomputer open-platform operating system to the big data cloud storage platform, and data is spread from a single machine to multiple machines. This removes the size limit of single-machine data files, fully supports the storage of massive business data, and effectively resolves the storage bottleneck.
2) Cloud computing migration: data computation is moved from the original single-node processing to the distributed cloud computing platform, and computing resources are expanded from single-machine to multi-machine parallel computing. This removes the limits of single-node computing resources, supports dynamic scaling of computing resources, and effectively resolves the computing bottleneck.
3) Two-tables-in-one design: the characteristics of the NoSQL database are fully exploited. The ROWKEY naturally serves as both primary key and index, so neither needs to be created separately after the data is loaded into the table, as with a traditional database. Adding, deleting and modifying the rows and columns of the data table are unified and simplified into insert/append operations, avoiding the slow mass processing or row migration caused by column-wise updates in a traditional database.
These characteristics are thoroughly studied and exploited to replace the traditional data model of two independently designed tables with a single-table design that loads both systems' data and uses the concatenated key fields as the primary key. Transaction record information is checked automatically by the system while the data is loaded efficiently, avoiding both the slow, complex loading process of a traditional database and the slow query-and-compare verification performed manually by application programs. The unique model and simplified flow greatly improve processing performance.
4) Simplified model design: the new, targeted five-field data model contains only the necessary key fields; fields not used in real time remain hidden inside the compressed record. This greatly reduces the fields that must be parsed from the data files and greatly increases the speed of loading, access, computation and output.
5) Data compression design: the innovative whole-record data compression design reduces the overall data volume by more than 80%, so that only the minimum set of effective data flows through transmission, storage and access. This effectively overcomes the inherent disk performance and network bandwidth bottlenecks of massive data and greatly improves overall performance.
6) Brand-new programming framework: the MapReduce jobs and Spark batch distributed jobs are all written in the Java language, and each function point is completed in just a few hundred lines of code.
7) Prototype verification results: prototype experiments prove that the traditional product can be fully migrated to the big data cloud computing platform, with greater development, performance and price advantages than the minicomputer platform.
Based on the transaction verification method provided by the above embodiment, the embodiment of the present invention correspondingly provides an apparatus for executing the transaction verification method, where a schematic structural diagram of the apparatus is shown in fig. 9, and the apparatus includes:
the configuration module 10 is configured to obtain a configured transaction information check table, where a plurality of columns in the transaction information check table correspond to the primary key, the first system mark, the first record compression file, the second system mark and the second record compression file respectively;
the loading module 20 is configured to load first transaction data of the first system, insert the first transaction data into the transaction information check table, wherein a first transaction record of the first transaction data corresponds to a first check record of the transaction information check table one by one, and the first check record is inserted into a primary key, a first system mark and a first record compression file of the corresponding first transaction record; loading second transaction data of a second system, inserting the second transaction data into a transaction information check table, wherein the second transaction records of the second transaction data are in one-to-one correspondence with second check records of the transaction information check table, the second check records are inserted into a main key, a second system mark and a second record compression file of the corresponding second transaction records, and a group of first check records and second check records with the same main key are merged into one check record;
The export module 30 is configured to export an exception file according to the non-merged check record in the transaction information check table, where the exception file can indicate a transaction record corresponding to the non-merged check record.
Optionally, the loading module 20 is configured to load the first transaction data of the first system, insert the first transaction data into the transaction information check table, and specifically is configured to:
generating a plurality of first load jobs, the plurality of first load jobs being processed in distributed parallel to implement:
Multiple tasks read the first transaction data in blocks; for the target first transaction record currently read from the first transaction data, the key fields of the target first transaction record are parsed and concatenated into a primary key; the first system mark of the target first transaction record is set; the target first transaction record is compressed to obtain a first record compressed file; and a corresponding target first check record is allocated for the target first transaction record in the transaction information check table, into which the primary key, first system mark and first record compressed file of the target first transaction record are inserted.
Optionally, the loading module 20 is configured to load the second transaction data of the second system, and insert the second transaction data into the transaction information check table, and is specifically configured to:
Generating a plurality of second load jobs, the plurality of second load jobs being processed in distributed parallel to implement:
Multiple tasks read the second transaction data in blocks; for the target second transaction record currently read from the second transaction data, the key fields of the target second transaction record are parsed and concatenated into a primary key; the second system mark of the target second transaction record is set; the target second transaction record is compressed to obtain a second record compressed file; and a target second check record is allocated for the target second transaction record in the transaction information check table, into which the primary key, second system mark and second record compressed file of the target second transaction record are inserted.
Optionally, the export module 30 is configured to export the abnormal file according to the check record that is not merged in the transaction information check table, specifically:
generating a plurality of export jobs, the plurality of export jobs being processed in distributed parallel to implement:
Multiple tasks read the transaction information check table; for the target check record currently read from the transaction information check table, it is judged whether both the first system mark and the second system mark have been inserted; if not, the target check record is determined to be an abnormal check record, the first/second record compressed file inserted into the target check record is decompressed, and the decompression result together with the inserted first/second system mark is written to the abnormal file.
It should be noted that, the refinement function of each module in the embodiment of the present invention may refer to the corresponding disclosure portion of the above transaction verification method embodiment, which is not described herein again.
Based on the transaction verification method provided by the above embodiment, the embodiment of the invention correspondingly provides an electronic device, which includes: at least one memory and at least one processor; the memory stores a program, and the processor calls the program stored in the memory, the program being used to implement the transaction verification method described above.
Based on the transaction verification method provided by the embodiment, the embodiment of the invention correspondingly provides a storage medium, wherein the storage medium stores computer executable instructions for the transaction verification method.
The foregoing has described in detail a transaction verification method, apparatus, electronic device and storage medium provided by the present invention, and specific examples have been applied herein to illustrate the principles and embodiments of the present invention, the above examples being provided only to assist in understanding the method and core idea of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.
It should be noted that, in the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described as different from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
It is further noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A transaction verification method, the method comprising:
the method comprises the steps of obtaining a configured transaction information check list, wherein a plurality of columns in the transaction information check list correspond to a main key, a first system mark, a first record compression file, a second system mark and a second record compression file respectively;
loading first transaction data of a first system, inserting the first transaction data into the transaction information check table, wherein first transaction records of the first transaction data are in one-to-one correspondence with first check records of the transaction information check table, and the first check records are inserted into a main key, a first system mark and a first record compression file of the corresponding first transaction records;
Loading second transaction data of a second system, inserting the second transaction data into the transaction information check table, wherein the second transaction records of the second transaction data are in one-to-one correspondence with the second check records of the transaction information check table, the second check records are inserted into a main key, a second system mark and a second record compression file of the corresponding second transaction records, and a group of first check records and second check records with the same main key are merged into one check record;
and exporting an abnormal file according to the non-merged check record in the transaction information check table, wherein the abnormal file can indicate the transaction record corresponding to the non-merged check record.
2. The method of claim 1, wherein loading first transaction data of a first system and inserting it into the transaction information check table comprises:
generating a plurality of first load jobs, the plurality of first load jobs being processed in distributed parallel fashion to:
read the first transaction data in multiple task blocks;
parse the key fields of the target first transaction record currently read from the first transaction data and splice them into a primary key;
set the first system mark of the target first transaction record;
compress the target first transaction record to obtain a first compressed record file;
and allocate a corresponding target first check record for the target first transaction record in the check table, and insert the primary key, the first system mark, and the first compressed record file of the target first transaction record into the target first check record.
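The per-record load steps of claim 2 might look like the sketch below. The CSV layout, the choice of key fields, the `|` key separator, the `"S1"` mark value, and zlib compression are illustrative assumptions, and a thread pool stands in for the distributed parallel load jobs:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

KEY_FIELDS = (0, 2)  # hypothetical: columns 0 and 2 jointly identify a transaction

def load_first_record(line):
    """One load step: splice the primary key, set the mark, compress the record."""
    fields = line.strip().split(",")
    primary_key = "|".join(fields[i] for i in KEY_FIELDS)   # splice key fields
    return {
        "pk": primary_key,
        "sys1": "S1",                          # first system mark
        "rec1": zlib.compress(line.encode()),  # first compressed record file
    }

# Stand-in for a plurality of load jobs each reading a task block in parallel.
lines = ["TXN001,100.00,ACCT-A,20210811", "TXN002,55.00,ACCT-B,20210811"]
with ThreadPoolExecutor(max_workers=2) as pool:
    loaded = list(pool.map(load_first_record, lines))

print([r["pk"] for r in loaded])
# -> ['TXN001|ACCT-A', 'TXN002|ACCT-B']
```

Compressing the full record while keeping only the spliced key uncompressed keeps the check table narrow: matching needs only the key, and the full record is decompressed only for the exception path.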
3. The method of claim 1, wherein loading second transaction data of a second system and inserting it into the transaction information check table comprises:
generating a plurality of second load jobs, the plurality of second load jobs being processed in distributed parallel fashion to:
read the second transaction data in multiple task blocks;
parse the key fields of the target second transaction record currently read from the second transaction data and splice them into a primary key;
set the second system mark of the target second transaction record;
compress the target second transaction record to obtain a second compressed record file;
and allocate a corresponding target second check record for the target second transaction record in the check table, and insert the primary key, the second system mark, and the second compressed record file of the target second transaction record into the target second check record.
4. The method of claim 1, wherein exporting an exception file from the unmerged check records in the transaction information check table comprises:
generating a plurality of export jobs, the plurality of export jobs being processed in distributed parallel fashion to:
read the transaction information check table in multiple task blocks;
determine whether both the first system mark and the second system mark have been inserted into the target check record currently read from the check table;
and if not, determine the target check record to be an exception check record, decompress whichever of the first or second compressed record file has been inserted into the target check record, and write the decompressed result, together with the inserted first or second system mark, into the exception file.
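The export scan of claim 4 could be sketched as follows. The row tuple layout, the block size, and the tab-separated exception-line format are assumptions; a thread pool again stands in for the distributed parallel export jobs, each scanning one task block of the check table:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Hypothetical check-table rows: (primary_key, sys1_mark, rec1, sys2_mark, rec2)
rows = [
    ("K1", "S1", zlib.compress(b"rec-a"), "S2", zlib.compress(b"rec-a")),  # merged
    ("K2", "S1", zlib.compress(b"rec-b"), None, None),                     # system 1 only
    ("K3", None, None, "S2", zlib.compress(b"rec-c")),                     # system 2 only
]

def scan_block(block):
    """One export job: emit exception lines for rows missing a system mark."""
    out = []
    for pk, s1, r1, s2, r2 in block:
        if s1 is None or s2 is None:             # not merged -> exception record
            mark = s1 or s2                      # whichever mark was inserted
            record = zlib.decompress(r1 or r2).decode()
            out.append(f"{pk}\t{mark}\t{record}")
    return out

# Split the table into task blocks and scan them in parallel export jobs.
blocks = [rows[i:i + 2] for i in range(0, len(rows), 2)]
with ThreadPoolExecutor() as pool:
    exception_lines = [line for part in pool.map(scan_block, blocks) for line in part]

print(exception_lines)
# -> ['K2\tS1\trec-b', 'K3\tS2\trec-c']
```

Because each exception line carries the system mark alongside the decompressed record, the exception file by itself tells an operator which side of the reconciliation is missing the transaction.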
5. A transaction verification device, comprising:
a configuration module configured to obtain a configured transaction information check table, wherein columns of the check table correspond respectively to a primary key, a first system mark, a first compressed record file, a second system mark, and a second compressed record file;
a loading module configured to load first transaction data of a first system and insert it into the check table, wherein the first transaction records of the first transaction data correspond one-to-one with first check records of the check table, and each first check record is populated with the primary key, the first system mark, and the first compressed record file of its corresponding first transaction record; and to load second transaction data of a second system and insert it into the check table, wherein the second transaction records of the second transaction data correspond one-to-one with second check records of the check table, each second check record is populated with the primary key, the second system mark, and the second compressed record file of its corresponding second transaction record, and any pair of a first check record and a second check record sharing the same primary key is merged into a single check record;
and an export module configured to export an exception file from the unmerged check records in the check table, wherein the exception file indicates the transaction records corresponding to those unmerged check records.
6. The device of claim 5, wherein the loading module, when loading first transaction data of a first system and inserting it into the transaction information check table, is specifically configured to:
generate a plurality of first load jobs, the plurality of first load jobs being processed in distributed parallel fashion to:
read the first transaction data in multiple task blocks; parse the key fields of the target first transaction record currently read from the first transaction data and splice them into a primary key; set the first system mark of the target first transaction record; compress the target first transaction record to obtain a first compressed record file; and allocate a corresponding target first check record for the target first transaction record in the check table, and insert the primary key, the first system mark, and the first compressed record file of the target first transaction record into the target first check record.
7. The device of claim 5, wherein the loading module, when loading second transaction data of a second system and inserting it into the transaction information check table, is specifically configured to:
generate a plurality of second load jobs, the plurality of second load jobs being processed in distributed parallel fashion to:
read the second transaction data in multiple task blocks; parse the key fields of the target second transaction record currently read from the second transaction data and splice them into a primary key; set the second system mark of the target second transaction record; compress the target second transaction record to obtain a second compressed record file; and allocate a corresponding target second check record for the target second transaction record in the check table, and insert the primary key, the second system mark, and the second compressed record file of the target second transaction record into the target second check record.
8. The device of claim 5, wherein the export module, when exporting an exception file from the unmerged check records in the transaction information check table, is specifically configured to:
generate a plurality of export jobs, the plurality of export jobs being processed in distributed parallel fashion to:
read the transaction information check table in multiple task blocks; determine whether both the first system mark and the second system mark have been inserted into the target check record currently read from the check table; and if not, determine the target check record to be an exception check record, decompress whichever of the first or second compressed record file has been inserted into the target check record, and write the decompressed result, together with the inserted first or second system mark, into the exception file.
9. An electronic device comprising at least one memory and at least one processor, wherein the memory stores a program and the processor invokes the program stored in the memory to implement the transaction verification method of any one of claims 1 to 4.
10. A storage medium having stored therein computer-executable instructions for performing the transaction verification method of any one of claims 1 to 4.
CN202110919547.1A 2021-08-11 2021-08-11 Transaction verification method, device, electronic equipment and storage medium Active CN113626510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110919547.1A CN113626510B (en) 2021-08-11 2021-08-11 Transaction verification method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113626510A CN113626510A (en) 2021-11-09
CN113626510B true CN113626510B (en) 2024-02-13

Family

ID=78384499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110919547.1A Active CN113626510B (en) 2021-08-11 2021-08-11 Transaction verification method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113626510B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114579583B (en) * 2022-05-05 2022-08-05 杭州太美星程医药科技有限公司 Form data processing method and device and computer equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577571A (en) * 2013-10-31 2014-02-12 北京奇虎科技有限公司 Data processing method and device
CN110245116A (en) * 2019-06-06 2019-09-17 深圳前海微众银行股份有限公司 Reconciliation data processing method, device, equipment and computer readable storage medium
CN112685484A (en) * 2020-12-24 2021-04-20 航天信息软件技术有限公司 Transaction account checking method and device, storage medium and electronic equipment
CN112801616A (en) * 2021-01-28 2021-05-14 中国工商银行股份有限公司 Abnormal account book processing method and device
CN113077234A (en) * 2021-04-09 2021-07-06 远光软件股份有限公司 Account checking method, device and storage medium


Similar Documents

Publication Publication Date Title
US11711420B2 (en) Automated management of resource attributes across network-based services
CN107391653B (en) Distributed NewSQL database system and picture data storage method
US10713589B1 (en) Consistent sort-based record-level shuffling of machine learning data
US10366053B1 (en) Consistent randomized record-level splitting of machine learning data
US20180165348A1 (en) Distributed storage of aggregated data
Dede et al. Performance evaluation of a mongodb and hadoop platform for scientific data analysis
CN112905595A (en) Data query method and device and computer readable storage medium
Nicolae et al. BlobSeer: Bringing high throughput under heavy concurrency to Hadoop Map-Reduce applications
Logothetis et al. Stateful bulk processing for incremental analytics
Tigani et al. Google bigquery analytics
Padhy Big data processing with Hadoop-MapReduce in cloud systems
JP2019220195A (en) System and method for implementing data storage service
US9772911B2 (en) Pooling work across multiple transactions for reducing contention in operational analytics systems
Thakkar et al. Scaling hyperledger fabric using pipelined execution and sparse peers
CN104813276A (en) Streaming restore of a database from a backup system
US10951540B1 (en) Capture and execution of provider network tasks
US11409781B1 (en) Direct storage loading for adding data to a database
CN113626510B (en) Transaction verification method, device, electronic equipment and storage medium
CN112559525B (en) Data checking system, method, device and server
Tsai et al. Data Partitioning and Redundancy Management for Robust Multi-Tenancy SaaS.
Vernik et al. Stocator: Providing high performance and fault tolerance for apache spark over object storage
Moise et al. Improving the Hadoop map/reduce framework to support concurrent appends through the BlobSeer BLOB management system
CN113760822A (en) HDFS-based distributed intelligent campus file management system optimization method and device
Jamal et al. Performance Comparison between S3, HDFS and RDS storage technologies for real-time big-data applications
Ren et al. Application Massive Data Processing Platform for Smart Manufacturing Based on Optimization of Data Storage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant