CN117011877A - Financial contract auditing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117011877A
CN117011877A (application CN202311000116.0A)
Authority
CN
China
Prior art keywords
text
signed
segmentation
contract
financial contract
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311000116.0A
Other languages
Chinese (zh)
Inventor
苏沁宁
詹乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Bank Co Ltd
Original Assignee
Ping An Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Bank Co Ltd filed Critical Ping An Bank Co Ltd
Priority: CN202311000116.0A
Publication: CN117011877A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition
    • G06V30/41Analysis of document content
    • G06V30/418Document matching, e.g. of document images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/19007Matching; Proximity measures
    • G06V30/19013Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Character Input (AREA)

Abstract

The application provides a financial contract auditing method and device, electronic equipment and a storage medium. The method comprises: determining a pre-trained target segmentation model according to the template type corresponding to the financial contract to be audited; inputting the financial contract to be signed and the signed financial contract, each in picture format, into the target segmentation model to output the text segmentation information corresponding to each contract; inputting the text segmentation screenshots of the two contracts into a pre-trained optical character recognition model to output a text recognition result for each text segmentation screenshot; and performing text matching on the text recognition results of text segmentation screenshots sharing the same text category between the two contracts, to determine whether the financial contract to be signed and the signed financial contract meet the contract signing standard.

Description

Financial contract auditing method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of data processing, in particular to a financial contract auditing method, a financial contract auditing device, electronic equipment and a storage medium.
Background
A contract is the credential of various business transactions. Because of the abundance of usage scenarios, a large amount of contract text data is generated in the physical world. This large volume of contract text lengthens contract review time and extends the customer's waiting period. Most industry schemes directly perform text recognition on the contract text, then match and compare text lines to confirm whether the contract has been modified.
However, when this method is used for contract text comparison, misaligned and crossed lines often occur, so the comparison result deviates, the auditing progress is affected, and the workload of manual secondary auditing increases. Therefore, a contract auditing method with more accurate results and higher efficiency is needed.
Disclosure of Invention
Accordingly, the present application aims to provide a financial contract auditing method, device, electronic equipment and storage medium, so as to improve the contract auditing efficiency and accuracy.
In a first aspect, the application provides a financial contract auditing method, comprising: determining a pre-trained target segmentation model according to the template type corresponding to the financial contract to be audited, wherein the financial contract to be audited comprises a financial contract to be signed drawn up by one of the two contracting parties and a signed financial contract fed back by the other party; inputting the financial contract to be signed and the signed financial contract, each in picture format, into the target segmentation model to output the text segmentation information corresponding to each contract, wherein the text segmentation information at least comprises text segmentation screenshots and the text category corresponding to each text segmentation screenshot; inputting the text segmentation screenshots of the two contracts into a pre-trained optical character recognition model to output a text recognition result for each text segmentation screenshot; and performing text matching on the text recognition results of text segmentation screenshots sharing the same text category between the two contracts, to determine whether the financial contract to be signed and the signed financial contract meet the contract signing standard.
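The four steps of the first aspect can be condensed into a small pipeline sketch. All names below (`audit_contracts`, the model registry, the injected `ocr` callable) are illustrative placeholders invented for this sketch, not APIs from the patent:

```python
# Hypothetical sketch of the four-step auditing pipeline. Every function name
# here is an illustrative placeholder, not from the patent itself.

def audit_contracts(template_type, unsigned_img, signed_img, models, ocr):
    """Return True when the signed contract's text matches the unsigned original."""
    seg_model = models[template_type]            # step 1: pick model by template type
    unsigned_regions = seg_model(unsigned_img)   # step 2: region screenshots + categories
    signed_regions = seg_model(signed_img)
    results = {}
    for side, regions in (("unsigned", unsigned_regions), ("signed", signed_regions)):
        results[side] = {r["category"]: ocr(r["crop"]) for r in regions}  # step 3: OCR
    # step 4: compare recognized text per shared text category
    categories = set(results["unsigned"]) & set(results["signed"])
    return all(results["unsigned"][c] == results["signed"][c] for c in categories)
```

A caller would register one segmentation model per contract template and inject any OCR engine that maps a cropped region image to text.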
Preferably, the text segmentation information further includes the vertex coordinates of each text segmentation screenshot, and before the step of text matching the text recognition results corresponding to text segmentation screenshots with the same text category between the financial contract to be signed and the signed financial contract, the method further includes: for each of the financial contract to be signed and the signed financial contract, sorting all of its text segmentation screenshots based on their vertex coordinates, and combining the text segmentation screenshots of the financial contract to be signed and of the signed financial contract that share the same sequence number into screenshot data groups; for each screenshot data group, matching the text categories of the two text segmentation screenshots in the group; if the text categories of every screenshot data group are successfully matched, executing the text matching step; if a group whose text categories do not match exists, marking the two text segmentation screenshots in that group.
Preferably, the text recognition result includes a plurality of fields and the recognition frame coordinates of each field, and the step of text matching the text recognition results corresponding to text segmentation screenshots with the same text category, to determine whether the financial contract to be signed and the signed financial contract meet the contract signing standard, specifically includes: for any text segmentation screenshot, arranging and concatenating the fields in its text recognition result according to the recognition frame coordinates of the fields, to obtain the text in the text segmentation screenshot; performing word-by-word matching on the texts of each pair of text segmentation screenshots with the same text category; if the words match between the texts of every such pair, determining that the financial contract to be signed and the signed financial contract meet the contract signing standard; otherwise, they do not meet the standard.
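The field assembly and word-by-word comparison above can be illustrated as follows; the field format (text paired with the top-left corner of its recognition frame) is an assumption made for this sketch:

```python
# Illustrative sketch: rebuild a region's text from OCR fields ordered by their
# recognition-frame coordinates (top-to-bottom, then left-to-right), then compare
# two regions character by character. The field tuple layout is assumed.

def assemble_text(fields):
    """fields: list of (text, (x, y)) where (x, y) is the frame's top-left corner."""
    ordered = sorted(fields, key=lambda f: (f[1][1], f[1][0]))  # sort by y, then x
    return "".join(text for text, _ in ordered)

def diff_chars(text_a, text_b):
    """Return the character indices where the two texts disagree,
    including positions where one text has overhang past the other."""
    longest = max(len(text_a), len(text_b))
    return [i for i in range(longest)
            if i >= len(text_a) or i >= len(text_b) or text_a[i] != text_b[i]]
```

An empty `diff_chars` result means the pair of regions matches; a non-empty one gives the mismatch positions that a report could highlight.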
Preferably, if the words between the texts of a pair of text segmentation screenshots with the same text category do not completely match, the two text segmentation screenshots are marked; and a financial contract audit report is generated and output according to the marking status of all text segmentation screenshots.
Preferably, the target segmentation model comprises a first feature extraction unit, a second feature extraction unit, a third feature extraction unit, a fourth feature extraction unit, a region generation network unit, a segmentation unit, a classification unit and a detection unit, wherein the input of the first feature extraction unit serves as the input of the target segmentation model, the output of the first feature extraction unit is connected with the input of the second feature extraction unit, the output of the second feature extraction unit is connected with the input of the third feature extraction unit, the output of the third feature extraction unit is connected with the input of the region generation network unit, the output of the region generation network unit is connected with the input of the fourth feature extraction unit and with the input of the segmentation unit, the output of the segmentation unit serves as the first output of the target segmentation model and is used for outputting the text segmentation screenshots, the output of the fourth feature extraction unit is connected with the input of the classification unit and with the input of the detection unit, the output of the classification unit serves as the second output of the target segmentation model and is used for outputting the text category corresponding to each text segmentation screenshot, and the output of the detection unit serves as the third output of the target segmentation model and is used for outputting the vertex coordinates of each text segmentation screenshot.
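The unit wiring can be mirrored in plain Python. Hedged: this layout resembles a Mask R-CNN-style detector, every unit below is an injected placeholder callable rather than the patent's actual network, and the detection head reading from the fourth feature extraction unit is an assumption where the original text is garbled:

```python
# Plain-Python sketch of the unit wiring described above; all units are
# placeholder callables supplied by the caller.

def target_segmentation_model(image, units):
    f1 = units["feat1"](image)            # first feature extraction unit
    f2 = units["feat2"](f1)               # second feature extraction unit
    f3 = units["feat3"](f2)               # third feature extraction unit
    proposals = units["rpn"](f3)          # region generation network unit
    masks = units["segment"](proposals)   # 1st output: text segmentation screenshots
    f4 = units["feat4"](proposals)        # fourth feature extraction unit
    categories = units["classify"](f4)    # 2nd output: text categories
    vertices = units["detect"](f4)        # 3rd output: screenshot vertex coordinates
    return masks, categories, vertices
```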
Preferably, each of the first feature extraction unit, the second feature extraction unit, the third feature extraction unit and the fourth feature extraction unit comprises a plurality of attention subunits which are sequentially connected, each attention subunit comprises a plurality of single-head attention blocks, each single-head attention block comprises a plurality of attention layers, a pooling layer, a direct connection layer and a full connection layer, and the number of the attention layers in different feature extraction units is different.
Preferably, the target segmentation model is generated by training as follows: for each labeled training sample, inputting the sample into the initial segmentation model to be trained, so as to respectively determine a first loss value corresponding to the output of the region generation network unit, a second loss value corresponding to the output of the segmentation unit, a third loss value corresponding to the output of the classification unit and a fourth loss value corresponding to the output of the detection unit; taking the sum of the first, second, third and fourth loss values as the total loss value; and adjusting the parameters of the initial segmentation model based on the total loss value to generate the target segmentation model.
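A minimal sketch of the stated objective: the total loss is the plain, equally weighted sum of the four head losses, which is what the text specifies (real implementations often weight the terms):

```python
# Total training loss as described: RPN + segmentation + classification +
# detection losses summed with equal weight.

def total_loss(loss_rpn, loss_seg, loss_cls, loss_det):
    return loss_rpn + loss_seg + loss_cls + loss_det
```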
Preferably, for each screenshot data group, text category matching is performed as follows: determining whether the text categories of the two text segmentation screenshots in the group are the same; if so, determining that the group is successfully matched; if not, determining that the group is not matched.
Preferably, the marking includes text mismatch and text category mismatch, and the financial contract audit report is generated as follows: when the text categories do not match, the financial contract to be signed and the signed financial contract are added in full to a page of the financial contract audit report; when the texts do not match, the mismatched text segmentation screenshots of the financial contract to be signed and of the signed financial contract are respectively added to pages of the financial contract audit report.
Preferably, when a text category mismatch and a text mismatch exist simultaneously, the financial contract to be signed and the signed financial contract are added in full to a page of the financial contract audit report, a first screenshot frame is generated according to the vertex coordinates of the text segmentation screenshots marked as text mismatches, and the first screenshot frame is added as an overlay above the financial contract to be signed and the signed financial contract, so as to generate the financial contract audit report.
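The first screenshot frame is derived from a marked region's vertex coordinates. A hedged helper that computes the axis-aligned frame rectangle from the four vertices, with optional padding (the actual drawing step with an imaging library is left out of this sketch):

```python
# Derive the axis-aligned overlay frame for the audit report from a region's
# vertex coordinates; padding widens the frame so it does not touch the text.

def frame_from_vertices(vertices, pad=0):
    """vertices: iterable of (x, y) points; returns (left, top, right, bottom)."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)
```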
Preferably, for each pair of text segmentation screenshots with the same text category, when mismatched text exists between the two screenshots, the vertex coordinates of the text recognition frame are determined and recorded.
Preferably, when the texts do not match, a second screenshot frame is generated according to the vertex coordinates of the text recognition frame and added as an overlay above the corresponding text segmentation screenshot, so as to generate the financial contract audit report.
In a second aspect, the present application provides an auditing apparatus for financial contracts, the apparatus comprising:
the pre-selection module is used for determining a pre-trained target segmentation model according to the template type corresponding to the financial contract to be audited, wherein the financial contract to be audited comprises a financial contract to be signed which is drawn by one of the two parties of the contract and a signed financial contract which is fed back by the other party of the two parties of the contract;
the system comprises a segmentation module, a target segmentation module and a target segmentation module, wherein the segmentation module is used for inputting a financial contract to be signed and a financial contract signed respectively in a picture format to output text segmentation information corresponding to the financial contract to be signed and the financial contract signed respectively, and the text segmentation information at least comprises text segmentation screenshots and text categories corresponding to the segmentation screenshots of each text segmentation screenshot;
the recognition module is used for inputting the text segmentation screenshots of the financial contract to be signed and of the signed financial contract into a pre-trained optical character recognition model, to output a text recognition result for each text segmentation screenshot;
and the checking module is used for carrying out text matching on text recognition results corresponding to text segmentation screenshots with the same text category between the financial contract to be signed and the signed financial contract so as to determine whether the financial contract to be signed and the signed financial contract meet the contract signing standard.
In a third aspect, the present application also provides an electronic device, comprising a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device is running, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the financial contract auditing method described above.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of a financial contract auditing method as described above.
The application provides a financial contract auditing method and device, electronic equipment and a storage medium. The method comprises: determining a pre-trained target segmentation model according to the template type corresponding to the financial contract to be audited, wherein the financial contract to be audited comprises a financial contract to be signed drawn up by one of the two contracting parties and a signed financial contract fed back by the other party; inputting the financial contract to be signed and the signed financial contract, each in picture format, into the target segmentation model to output the text segmentation information corresponding to each contract, wherein the text segmentation information at least comprises text segmentation screenshots and the text category corresponding to each text segmentation screenshot; inputting the text segmentation screenshots of the two contracts into a pre-trained optical character recognition model to output a text recognition result for each text segmentation screenshot; and performing text matching on the text recognition results of text segmentation screenshots sharing the same text category between the two contracts, to determine whether the financial contract to be signed and the signed financial contract meet the contract signing standard. The contract is first segmented by region, and text is then extracted from each segmented region, so that text matching is performed at the region level; this reduces the probability of line-crossing errors in text recognition and improves contract auditing efficiency and accuracy.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for auditing a financial contract according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating the comparison steps of a text segmentation screenshot according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a comparison step of text recognition results according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a device for auditing a financial contract according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. Based on the embodiments of the present application, every other embodiment obtained by a person skilled in the art without making any inventive effort falls within the scope of protection of the present application.
First, an application scenario to which the present application is applicable will be described. The application can be applied to auditing the contract before and after signing in the transaction process.
A contract is the credential of various business transactions. Because of the abundance of usage scenarios, a large amount of contract text data is generated in the physical world. This large volume of contract text lengthens contract review time and extends the customer's waiting period. Most industry schemes directly perform text recognition on the contract text, then match and compare text lines to confirm whether the contract has been modified. However, when this method is used for contract text comparison, misaligned and crossed lines often occur, so the comparison result deviates, the auditing progress is affected, and the workload of manual secondary auditing increases. Therefore, a contract auditing method with more accurate results and higher efficiency is needed.
Based on the above, the embodiment of the application provides a financial contract auditing method, a financial contract auditing device, electronic equipment and a storage medium, so as to improve contract auditing efficiency and accuracy.
Referring to fig. 1, fig. 1 is a flowchart of a method for auditing a financial contract according to an embodiment of the present application. As shown in fig. 1, an auditing method of financial contracts provided by an embodiment of the present application includes:
S101, determining a pre-trained target segmentation model according to the template type corresponding to the financial contract to be audited, wherein the financial contract to be audited comprises a financial contract to be signed drawn up by one of the two contracting parties and a signed financial contract fed back by the other party.
The financial contract can be a payment agreement, transaction contract, insurance policy or the like generated during transactions in financial fields such as accounting, securities and banking, or a transaction contract signed in other fields. The financial contract to be signed is a contract drawn up by one of the transaction parties and transferred to the other party, which the other party is required to sign for confirmation. The signed financial contract is the contract fed back after the other party receives the financial contract to be signed. The content of the financial contract to be signed and of the signed financial contract should be identical, but to prevent the contract content from being modified, the two financial contracts need to be collated. The financial contract can be electronic or paper, but needs to be converted or scanned into picture form for further processing.
It will be appreciated that for different kinds of transactions, financial contracts typically follow fixed templates, and contracts of the same kind are mostly similar in format, so a corresponding segmentation model can be trained for each contract format. Therefore, in step S101, before auditing the financial contract, the corresponding model can be selected according to the template of the financial contract, to ensure segmentation accuracy.
S102, inputting the financial contract to be signed and the signed financial contract into a target segmentation model in a picture format respectively to output text segmentation information corresponding to the financial contract to be signed and the signed financial contract, wherein the text segmentation information at least comprises text segmentation screenshots and text categories corresponding to the text segmentation screenshots.
The segmentation model can segment according to the layout and text content of the financial contract; by way of example, the text categories can be title, Party A and Party B information, content column, list, clause, seal and the like. The text categories can be determined based on the content of the financial contract.
Each text segmentation screenshot is an area corresponding to a text category taken from a financial contract.
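Cropping such an area out of the page image can be sketched as follows; the image is modeled here as a row-major nested list of pixels purely for illustration (a real pipeline would use an image library):

```python
# Illustrative crop of one text segmentation region out of a page image before
# OCR. The nested-list pixel representation is an assumption of this sketch.

def crop_region(image, box):
    """box: (left, top, right, bottom) in pixel coordinates, right/bottom exclusive."""
    left, top, right, bottom = box
    return [row[left:right] for row in image[top:bottom]]
```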
S103, inputting the text segmentation screenshots of the financial contract to be signed and the signed financial contract to a pre-trained optical character recognition model so as to output a text recognition result corresponding to each text segmentation screenshot.
In step S103, text extraction may be performed on each text segmentation screenshot using an optical character recognition (OCR) model to obtain all the text in the screenshot.
S104, text matching is carried out on text recognition results corresponding to text segmentation screenshots with the same text category between the to-be-signed financial contract and the signed financial contract, so as to determine whether the to-be-signed financial contract and the signed financial contract meet the contract signing standard.
Specifically, for each screenshot data group, text category matching is performed as follows: determine whether the text categories of the two text segmentation screenshots in the group are the same; if they are, the group is successfully matched; if not, the group is not matched.
Illustratively, the first text recognition result corresponding to the title extracted from the financial contract to be signed and the second text recognition result corresponding to the title extracted from the signed financial contract are matched word by word, to determine whether the titles of the two contracts are completely consistent; similarly, the characters of the Party A information, content column, list, clause, seal and other areas are compared in turn according to the typesetting order of the financial contract, to complete the audit and determine whether the content has been changed between the financial contract to be signed and the signed financial contract. The unmatched parts, together with their corresponding text segmentation screenshots, can be collected to generate a unified audit report that facilitates secondary auditing, or verified directly by hand.
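The category-by-category check described above can be condensed into a small illustrative routine; the pair format (category plus the two recognized texts) is an assumption of this sketch:

```python
# Hedged sketch of the S104 check: compare the recognized text of each paired
# region and collect the categories that mismatch for the audit report.

def audit_pairs(pairs):
    """pairs: list of (category, text_unsigned, text_signed).
    Returns (meets_standard, mismatched_categories)."""
    mismatched = [cat for cat, a, b in pairs if a != b]
    return len(mismatched) == 0, mismatched
```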
According to the financial contract auditing method provided by the embodiment of the application, the contract is segmented by region, and text is then extracted from each segmented region, so that text matching is performed at the region level, the probability of line-crossing errors in text recognition is reduced, and contract auditing efficiency and accuracy are improved.
Referring to fig. 2, fig. 2 is a flowchart illustrating a comparison procedure of text segmentation screenshots according to an embodiment of the present application. As shown in fig. 2, the text segmentation information further includes the vertex coordinates of each text segmentation screenshot, and before text matching the text recognition results corresponding to text segmentation screenshots having the same text category between the financial contract to be signed and the signed financial contract, the text segmentation screenshots may be further compared as follows:
S201, for each of the financial contract to be signed and the signed financial contract, sorting all of its text segmentation screenshots based on their vertex coordinates, and combining the text segmentation screenshots of the financial contract to be signed and of the signed financial contract that share the same sequence number into screenshot data groups.
It will be appreciated that the text segmentation screenshots may be ordered according to their typesetting position in the financial contract, for example in the order of title, Party A information, content column, list, clause and seal. For one financial contract, the number of text segmentation screenshots corresponding to each text category is at least one; for example, Party A information may be divided into one text segmentation screenshot and Party B information into another, and since there is a fixed typesetting position between the two, the order between them is also fixed, e.g. Party A information followed by Party B information.
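The sorting and grouping of S201 can be sketched as follows; the region dict layout and the top-to-bottom, left-to-right sort key are assumptions consistent with the typesetting order described above:

```python
# Illustrative S201: sort each contract's region screenshots by top-left vertex
# (top-to-bottom, then left-to-right), then zip equal-ranked screenshots from
# the two contracts into screenshot data groups.

def pair_regions(unsigned_regions, signed_regions):
    """Each region is a dict with 'vertex' = (x, y) of its top-left corner."""
    def order(region):
        x, y = region["vertex"]
        return (y, x)  # primary key: vertical position; secondary: horizontal
    return list(zip(sorted(unsigned_regions, key=order),
                    sorted(signed_regions, key=order)))
```

Each resulting pair is one screenshot data group on which the text category match of S202 is then performed.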
S202, matching text categories of two text segmentation screenshots in each screenshot data group according to each screenshot data group.
And S203, if the text categories among the screenshot data groups are successfully matched, executing a text matching step.
For example, for the ordered text segmentation screenshot sequences of the to-be-signed financial contract and the signed financial contract, text category matching is performed between the first-ranked screenshot of the to-be-signed contract and the first-ranked screenshot of the signed contract. If the text categories of the two screenshots are consistent, for example both are titles, the group is determined to match successfully; if all screenshot data sets match successfully, step S104 can be performed.
S204, if a group with the text category not matched exists, marking two text segmentation screenshots in the group.
For the two text segmentation screenshots in any screenshot data group whose text categories do not match, the two screenshots need to be marked, and may be marked as a screenshot data group with unmatched text categories, so as to facilitate subsequent manual auditing.
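Steps S202 to S204 can be sketched in the same hypothetical terms: a screenshot data group passes only if both screenshots carry the same text category, and non-matching groups are flagged for manual auditing:

```python
def match_categories(groups):
    # a group passes only if both screenshots share a text category;
    # failing groups are flagged (marked) for subsequent manual auditing
    flagged = [i for i, (a, b) in enumerate(groups)
               if a["category"] != b["category"]]
    return flagged  # an empty list means the text matching step may proceed

groups = [({"category": "title"}, {"category": "title"}),
          ({"category": "clause"}, {"category": "seal"})]
flagged = match_categories(groups)  # the second group does not match
```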
Illustratively, a first text recognition result corresponding to the title extracted from the financial contract to be signed is matched verbatim against a second text recognition result corresponding to the title extracted from the signed financial contract, to determine whether the titles of the two contracts are completely consistent; similarly, a third text recognition result corresponding to the first and second party information extracted from the financial contract to be signed is matched verbatim against a fourth text recognition result corresponding to the first and second party information extracted from the signed financial contract, and so on.
Fig. 3 is a flowchart of a comparison step of text recognition results according to an embodiment of the present application. In one embodiment of the present application, the text recognition result includes a plurality of fields and recognition frame coordinates of each field, and the step of text matching text recognition results corresponding to text segmentation screenshots with the same text category between the to-be-signed financial contract and the signed financial contract to determine whether the to-be-signed financial contract and the signed financial contract meet the contract signing standard specifically includes:
S301, aiming at any text segmentation screenshot, arranging and connecting fields in a text recognition result corresponding to the text segmentation screenshot according to the recognition frame coordinates of the fields to obtain texts in the text segmentation screenshot.
For multiple fields identified in any text segmentation screenshot, it is first necessary to splice according to their order in the screenshot to obtain a complete text paragraph.
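A minimal sketch of this splicing step (assuming each OCR field carries the top-left corner of its recognition frame; the names are hypothetical):

```python
def join_fields(fields):
    # concatenate OCR fields into a paragraph following their recognition-box
    # coordinates; reading order assumed top-to-bottom, then left-to-right
    ordered = sorted(fields, key=lambda f: (f["box"][1], f["box"][0]))
    return "".join(f["text"] for f in ordered)

fields = [{"text": "Party B:", "box": (10, 80)},
          {"text": "Contract Title", "box": (200, 10)},
          {"text": "Party A:", "box": (10, 40)}]
text = join_fields(fields)
```

A production system would additionally cluster fields into lines before sorting, since boxes on one line rarely share an exact y coordinate; that refinement is omitted here.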
S302, matching the texts of the text segmentation screenshots with the same two text categories word by word.
S303, if the words between the texts of the text segmentation screenshots with the same text category in each group all match, determining that the financial contract to be signed and the signed financial contract meet the contract signing standard; otherwise, they do not meet the contract signing standard.
Here, it is necessary to compare whether the text at each position is identical; if so, it can be determined that the contract signing criteria are satisfied, indicating that the contract contents of the to-be-signed financial contract and the signed financial contract have not been modified.
S304, marking the two text segmentation screenshots if the words among the texts of the text segmentation screenshots with the same text category in each group are not completely matched.
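Steps S302 to S304 can be sketched as follows (the handling of texts of unequal length is an assumption; the patent only requires word-by-word matching):

```python
def verbatim_match(text_a, text_b):
    # character positions where the two texts diverge; an empty result
    # means the texts match word for word
    mismatches = [i for i, (ca, cb) in enumerate(zip(text_a, text_b)) if ca != cb]
    if len(text_a) != len(text_b):
        # treat trailing insertions/deletions as mismatched positions (assumption)
        mismatches.extend(range(min(len(text_a), len(text_b)),
                                max(len(text_a), len(text_b))))
    return mismatches

def meets_signing_standard(text_pairs):
    # the contracts meet the signing standard only if every group of
    # same-category screenshots matches verbatim
    return all(not verbatim_match(a, b) for a, b in text_pairs)
```

The mismatch positions returned by `verbatim_match` are what a marking step could record for the audit report.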
Furthermore, financial contract auditing reports can be generated and output according to the labeling conditions of all text segmentation screenshots.
All marked text segmentation screenshots can be arranged to generate financial contract auditing reports, and staff can clearly determine whether two financial contracts are modified or not, conduct secondary auditing and timely find out problems, so that loss is avoided.
In one embodiment of the application, the financial contract audit report may be generated by several schemes:
First, the labeling cases include text mismatches and text category mismatches, and the financial contract audit report is generated as follows: when text categories do not match, the financial contract to be signed and the signed financial contract are added in full to a page of the financial contract audit report; when texts do not match, the mismatched text segmentation screenshots of the financial contract to be signed and the signed financial contract are respectively added to pages of the financial contract audit report.
When the text category mismatch and the text mismatch exist simultaneously, the financial contract to be signed and the signed financial contract are completely added in a page of a financial contract auditing report, and a first screenshot frame is generated according to vertex coordinates of text segmentation screenshots corresponding to the text mismatch marked, and the first screenshot frame is added in an upper layer of the financial contract to be signed and the signed financial contract so as to generate the financial contract auditing report.
In addition to the financial contracts or screenshots, the report should at least be marked with the specific errors, such as which text categories failed to match and which texts failed to match.
Further, for each group of text segmentation screenshots with the same text category, when there is mismatched text between the two screenshots, the vertex coordinates of the corresponding text recognition frame are determined and recorded.
When texts do not match, a second screenshot frame is generated according to the vertex coordinates of the text recognition frame and added on the upper layer of the corresponding text segmentation screenshot, so as to generate the financial contract audit report.
Therefore, when the auditor takes the financial contract audit report to carry out secondary verification, the point of failure in matching can be seen at a glance, and the overall audit efficiency is improved.
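A hedged sketch of how such a screenshot frame could be derived from the recorded recognition-frame coordinates (the box format is an assumption): the frame is the union rectangle over the mismatched fields' boxes, to be drawn on the upper layer of the report page:

```python
def screenshot_frame(boxes):
    # union rectangle over the recognition frames of the mismatched fields;
    # each box is (x0, y0, x1, y1) with (x0, y0) the top-left corner (assumed)
    xs = [x for (x0, _, x1, _) in boxes for x in (x0, x1)]
    ys = [y for (_, y0, _, y1) in boxes for y in (y0, y1)]
    return (min(xs), min(ys), max(xs), max(ys))
```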
In one embodiment of the application, a segmentation model corresponding to a template type is constructed by:
First, an initial segmentation model is selected, with a Swin backbone network as the prototype for constructing the three branches of segmentation, detection and classification.
The target segmentation model comprises a first feature extraction unit, a second feature extraction unit, a third feature extraction unit, a fourth feature extraction unit, a region generation network unit, a segmentation unit, a classification unit and a detection unit. The input of the first feature extraction unit serves as the input of the target segmentation model; the output of the first feature extraction unit is connected with the input of the second feature extraction unit; the output of the second feature extraction unit is connected with the input of the third feature extraction unit; the output of the third feature extraction unit is connected with the input of the region generation network unit; the output of the region generation network unit is connected with the input of the fourth feature extraction unit and is further connected with the input of the segmentation unit; the output of the segmentation unit serves as the first output of the target segmentation model and is used for outputting text segmentation screenshots; the output of the fourth feature extraction unit is connected with the input of the classification unit, and the output of the classification unit serves as the second output of the target segmentation model and is used for outputting the text category corresponding to each text segmentation screenshot; the output of the fourth feature extraction unit is further connected with the input of the detection unit, and the output of the detection unit serves as the third output of the target segmentation model and is used for outputting the vertex coordinates of the text segmentation screenshots.
Each of the first feature extraction unit, the second feature extraction unit, the third feature extraction unit and the fourth feature extraction unit comprises a plurality of attention subunits which are sequentially connected, each attention subunit comprises a plurality of single-head attention blocks, each single-head attention block comprises a plurality of attention layers, a pooling layer, a direct connection layer (Short cut) and a full connection layer, and the number of the attention layers in different feature extraction units is different.
The attention layer here consists of key, query and value matrix operations. The plurality of single-head attention blocks form an attention subunit having a multi-head attention mechanism. The number of the attention layers in each single-head attention block in the first feature extraction unit, the second feature extraction unit, the third feature extraction unit and the fourth feature extraction unit is set according to the proportion of 3:6:6:9.
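As a minimal sketch of the key, query and value operations in one attention layer (pure Python; the pooling, short-cut and fully connected layers of the single-head attention block are omitted, and the 3:6:6:9 ratio is shown only as relative depths):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def single_head_attention(Q, K, V):
    # scaled dot-product attention softmax(QK^T / sqrt(d)) V over plain lists
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

out = single_head_attention([[1.0, 0.0], [0.0, 1.0]],
                            [[1.0, 0.0], [0.0, 1.0]],
                            [[1.0, 0.0], [0.0, 1.0]])

# attention-layer depths of the four feature extraction units in the 3:6:6:9 ratio
depths = [3, 6, 6, 9]
```

Running several such blocks in parallel on independently projected Q, K and V yields the multi-head mechanism formed by the attention subunits described above.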
Following Mask R-CNN, a region generation network (RPN, Region Proposal Network) unit is added between the third feature extraction unit and the fourth feature extraction unit, where the region generation network unit is composed of a convolution layer, an activation layer and a pooling layer.
On the basis of the feature extraction backbone network, three parallel pipelines are arranged to realize recognition and detection of the same content: the feature vector output by the third feature extraction unit passes through the region generation network unit (which outputs the coordinates of a detection frame), and the segmentation unit finally cuts out the corresponding text segmentation screenshot. The segmentation unit here is composed of a ResNet-style block structure (comprising a convolution layer, a pooling layer, an activation layer and a short-cut layer) and a deconvolution layer. Finally, the features are mapped by a 1x1 convolution into a k-dimensional screenshot of the image size.
And the feature vector output by the fourth feature extraction unit is classified by the classification unit, so that the text category of the corresponding text segmentation screenshot is obtained. The classifying unit here adopts an MLP (Multi-Layer Perceptron).
The detection unit is similar to the structure of the classification unit, and is used for outputting vertex coordinates of the text segmentation screenshot, wherein the vertex coordinates are used for indicating the relative position of the text segmentation screenshot in the corresponding financial contract picture.
In one embodiment of the application, the target segmentation model is generated by training in the following way:
step one: inputting the marked training samples into an initial segmentation model to be trained for each training sample, so as to respectively determine a first loss value corresponding to the output of the regional generation network unit, a second loss value corresponding to the output of the segmentation unit, a third loss value corresponding to the output of the classification unit and a fourth loss value corresponding to the output of the detection unit;
step two: taking the sum of the first loss value, the second loss value, the third loss value and the fourth loss value as a total loss value;
step three: and carrying out parameter adjustment on the initial segmentation model based on the total loss value to generate a target segmentation model.
Then, task definition, loss function design and iterative training are performed based on a two-stage segmentation model training mode with a Transformer backbone network:
and determining a proper training sample, wherein the training sample is a template system unified financial contract. The financial contract is different from other contracts, has stable structural characteristics, and can be divided into a title, a first and second party information, a content column, a list, a clause, a seal and the like after analysis of the acquired conditions.
And marking the collected training samples on a cloud marking platform according to the detection frame, the text segmentation area and the corresponding text category, and dividing the training samples into a training set, a verification set and a test set to form a complete data set.
For the design of the Loss function, the loss function comprises four parts: the RPN layer output and the real frame yield a corresponding MSE loss (the first loss value); the segmentation layer output yields a Focal loss computed pixel-wise against the real mask (the second loss value); the classification layer and the real class yield a CE loss (the third loss value); and the detection layer output and the real frame yield another MSE loss (the fourth loss value). Finally, the four losses are combined into the total loss.
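The four loss components can be sketched as follows (scalar, pure-Python versions; the focal-loss focusing parameter gamma=2 is a common default and an assumption, since the patent does not state its value):

```python
import math

def mse_loss(pred, target):
    # RPN/detection branches: mean squared error against the real frame
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def ce_loss(probs, true_idx):
    # classification branch: cross entropy against the real class
    return -math.log(probs[true_idx])

def focal_loss(p, y, gamma=2.0):
    # segmentation branch, per pixel: FL = -(1 - p_t)^gamma * log(p_t)
    p_t = p if y == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

def total_loss(rpn_mse, seg_focal, cls_ce, det_mse):
    # the four branch losses are summed into the total training loss
    return rpn_mse + seg_focal + cls_ce + det_mse
```

The focal loss down-weights easy, well-classified pixels, which helps because most mask pixels in a contract page are easy background.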
The learning rate is set to 1e-3, the batch_size to 64 and the resolution to 1024 (the short side is completed by padding); the optimizer is the Adam optimizer, and training is iterated according to a gradient descent algorithm until the variation range of the model's loss is small, yielding the trained segmentation model.
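A small sketch of the resolution setting (scaling the long side to 1024 and padding the short side; the resize policy applied before padding is an assumption, as the patent only states that the short side is completed by padding):

```python
def resize_and_pad(width, height, target=1024):
    # scale so the long side equals `target`, then pad the short side
    # up to `target`; returns the scaled size and the padding to add
    scale = target / max(width, height)
    new_w, new_h = round(width * scale), round(height * scale)
    return (new_w, new_h), (target - new_w, target - new_h)

size, pad = resize_and_pad(2048, 1024)  # e.g. a landscape contract page
```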
Based on the same inventive concept, the embodiment of the application also provides a financial contract auditing device corresponding to the financial contract auditing method, and because the principle of solving the problem by the device in the embodiment of the application is similar to that of the financial contract auditing method in the embodiment of the application, the implementation of the device can refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an audit device for financial contracts according to an embodiment of the present application. As shown in fig. 4, the auditing apparatus 400 includes:
the preselection module 410 is configured to determine a pre-trained target segmentation model according to a template type corresponding to a financial contract to be audited, where the financial contract to be audited includes a financial contract to be signed formulated by one of two parties of the contract and a signed financial contract fed back by the other party of the two parties of the contract;
the segmentation module 420 is configured to input the to-be-signed financial contract and the signed financial contract into the target segmentation model in a format of a picture, respectively, so as to output text segmentation information corresponding to the to-be-signed financial contract and the signed financial contract, where the text segmentation information at least includes text segmentation screenshots and text categories corresponding to each text segmentation screenshot;
The recognition module 430 is configured to input the respective text segmentation screenshots of the financial contract to be signed and the signed financial contract to a pre-trained optical character recognition model, so as to output a text recognition result corresponding to each text segmentation screenshot;
and the checking module 440 is configured to perform text matching on text recognition results corresponding to text segmentation screenshots with the same text category between the to-be-signed financial contract and the signed financial contract, so as to determine whether the to-be-signed financial contract and the signed financial contract meet the contract signing standard.
In a preferred embodiment, the text segmentation information further includes vertex coordinates of each text segmentation screenshot, and a matching module (not shown in the figure) is further included for, for any one of the to-be-signed financial contract and the signed financial contract, sorting all text segmentation screenshots based on the vertex coordinates of all text segmentation screenshots of the contract, and combining the text segmentation screenshot of one to-be-signed financial contract and the text segmentation screenshot of one signed financial contract with the same sequence number into a screenshot data set; aiming at each screenshot data group, matching text categories of two text segmentation screenshots in the screenshot data group; if the text category between each screenshot data group is successfully matched, executing a text matching step; if a group with the text category not matched exists, marking two text segmentation screenshots in the group.
In a preferred embodiment, the text recognition result includes a plurality of fields and recognition frame coordinates of each field, and the verification module 440 is specifically configured to, for any text segmentation screenshot, arrange and connect fields in the text recognition result corresponding to the text segmentation screenshot according to the recognition frame coordinates of the fields, so as to obtain a text in the text segmentation screenshot; performing word-by-word matching on texts of the text segmentation screenshots with the same two text categories; if the words between the texts of the text segmentation screenshots with the same text category in each group are matched, determining that the financial contract to be signed and the signed financial contract meet the contract signing standard, otherwise, not meeting the contract signing standard.
In a preferred embodiment, the checking module 440 is further configured to mark the text segmentation shots of each group of text category if the words between the text of the text segmentation shots are not completely matched; and generating and outputting a financial contract auditing report according to the labeling conditions of all the text segmentation screenshots.
In a preferred embodiment, the target segmentation model comprises a first feature extraction unit, a second feature extraction unit, a third feature extraction unit, a fourth feature extraction unit, a region generation network unit, a segmentation unit, a classification unit and a detection unit, wherein the input of the first feature extraction unit serves as the input of the target segmentation model; the output of the first feature extraction unit is connected with the input of the second feature extraction unit; the output of the second feature extraction unit is connected with the input of the third feature extraction unit; the output of the third feature extraction unit is connected with the input of the region generation network unit; the output of the region generation network unit is connected with the input of the fourth feature extraction unit and is further connected with the input of the segmentation unit; the output of the segmentation unit serves as the first output of the target segmentation model for outputting text segmentation screenshots; the output of the fourth feature extraction unit is connected with the input of the classification unit, and the output of the classification unit serves as the second output of the target segmentation model for outputting the text category corresponding to each text segmentation screenshot; the output of the fourth feature extraction unit is further connected with the input of the detection unit, and the output of the detection unit serves as the third output of the target segmentation model for outputting the vertex coordinates of the text segmentation screenshots.
In a preferred embodiment, each of the first feature extraction unit, the second feature extraction unit, the third feature extraction unit, and the fourth feature extraction unit includes a plurality of attention sub-units connected in sequence, each of the attention sub-units includes a plurality of single-head attention blocks, and each of the single-head attention blocks includes a plurality of attention layers, a pooling layer, a direct connection layer, and a full connection layer, wherein the number of attention layers in different feature extraction units is different.
In a preferred embodiment, the method further comprises a training module (not shown in the figure) for training and generating the target segmentation model by: inputting the marked training samples into an initial segmentation model to be trained for each training sample, so as to respectively determine a first loss value corresponding to the output of the regional generation network unit, a second loss value corresponding to the output of the segmentation unit, a third loss value corresponding to the output of the classification unit and a fourth loss value corresponding to the output of the detection unit; taking the sum of the first loss value, the second loss value, the third loss value and the fourth loss value as a total loss value; and performing parameter adjustment on the initial segmentation model based on the total loss value to generate a target segmentation model.
In a preferred embodiment, for each screenshot data set, the matching module performs matching of text categories by: determining whether text categories of two text segmentation screenshots in the screenshot data group are the same; if the screenshot data sets are the same, determining that the screenshot data sets are successfully matched; if not, the screenshot data sets are determined to be not matched.
In a preferred embodiment, the labeling cases include text mismatches and text category mismatches, and the financial contract audit report is generated as follows: when text categories do not match, the financial contract to be signed and the signed financial contract are added in full to a page of the financial contract audit report; when texts do not match, the mismatched text segmentation screenshots of the financial contract to be signed and the signed financial contract are respectively added to pages of the financial contract audit report.
In a preferred embodiment, when the text category mismatch exists together with the text mismatch, the financial contract to be signed and the signed financial contract are completely added in the page of the financial contract audit report, and a first screenshot frame is generated according to the vertex coordinates of the text segmentation screenshot corresponding to the text mismatch marked, and added in the upper layers of the financial contract to be signed and the signed financial contract, so as to generate the financial contract audit report.
In a preferred embodiment, for each group of text segmentation screenshots with the same text category, when there is mismatched text between the two screenshots, the vertex coordinates of the corresponding text recognition frame are determined and recorded.
In a preferred embodiment, when texts do not match, a second screenshot frame is generated according to the vertex coordinates of the text recognition frame and added on the upper layer of the corresponding text segmentation screenshot, so as to generate the financial contract audit report.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the application. As shown in fig. 5, the electronic device 500 includes a processor 510, a memory 520, and a bus 530.
The memory 520 stores machine-readable instructions executable by the processor 510, and when the electronic device 500 is running, the processor 510 communicates with the memory 520 through the bus 530, and when the machine-readable instructions are executed by the processor 510, the steps of the method for auditing a financial contract in the method embodiment shown in fig. 1 can be executed, and the detailed implementation is referred to the method embodiment and will not be repeated herein.
The embodiment of the present application further provides a computer readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps of the method for auditing a financial contract in the method embodiment shown in fig. 1 may be executed, and a specific implementation manner may refer to the method embodiment and will not be described herein.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, devices or units, and may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the above examples are only specific embodiments of the present application, and are not intended to limit the scope of the present application, but it should be understood by those skilled in the art that the present application is not limited thereto, and that the present application is described in detail with reference to the foregoing examples: any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or perform equivalent substitution of some of the technical features, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (15)

1. A method of auditing financial contracts, the method comprising:
determining a pre-trained target segmentation model according to the template type corresponding to the financial contract to be checked, wherein the financial contract to be checked comprises a financial contract to be signed, which is drawn by one of the two parties of the contract, and a signed financial contract fed back by the other party of the two parties of the contract;
inputting the financial contract to be signed and the signed financial contract into the target segmentation model in a picture format respectively to output text segmentation information corresponding to the financial contract to be signed and the signed financial contract, wherein the text segmentation information at least comprises text segmentation screenshots and text categories corresponding to each text segmentation screenshot;
inputting respective text segmentation screenshots of the financial contract to be signed and the signed financial contract to a pre-trained optical character recognition model so as to output a text recognition result corresponding to each text segmentation screenshot;
and performing text matching on text recognition results corresponding to text segmentation screenshots with the same text category between the financial contract to be signed and the signed financial contract to determine whether the financial contract to be signed and the signed financial contract meet the contract signing standard.
2. The method of claim 1, wherein the text segmentation information further includes vertex coordinates for each text segmentation screenshot, and wherein prior to the step of performing text matching of text recognition results corresponding to text segmentation shots having the same text category between the to-be-signed financial contract and the signed financial contract to determine whether the to-be-signed financial contract and the signed financial contract meet contract signing criteria, further comprising:
aiming at any contract in the to-be-signed financial contract and the signed financial contract, sequencing all text segmentation screenshots based on vertex coordinates of all text segmentation screenshots of the contract, and combining the text segmentation screenshots of one to-be-signed financial contract and the text segmentation screenshots of one signed financial contract with the same serial numbers into a screenshot data set;
aiming at each screenshot data group, matching text categories of two text segmentation screenshots in the screenshot data group;
if the text category between each screenshot data group is successfully matched, executing a text matching step;
if a group with the text category not matched exists, marking two text segmentation screenshots in the group.
3. The method according to claim 2, wherein the text recognition result includes a plurality of fields and recognition frame coordinates of each field, and the step of text matching text recognition results corresponding to text segmentation screenshots having the same text category between the to-be-signed financial contract and the signed financial contract to determine whether the to-be-signed financial contract and the signed financial contract satisfy a contract signing standard specifically includes:
aiming at any text segmentation screenshot, arranging and connecting fields in a text recognition result corresponding to the text segmentation screenshot according to the recognition frame coordinates of the fields to obtain texts in the text segmentation screenshot;
performing word-by-word matching on texts of the text segmentation screenshots with the same two text categories;
if the words between the texts of the text segmentation screenshots with the same text category in each group all match, determining that the financial contract to be signed and the signed financial contract meet the contract signing standard; otherwise, determining that they do not meet the contract signing standard.
4. The method according to claim 3, wherein if the texts of the text segmentation screenshots with the same text category in a group are not perfectly matched, the two text segmentation screenshots are labeled;
and a financial contract audit report is generated and output according to the labeling of all text segmentation screenshots.
5. The method of claim 1, wherein the target segmentation model comprises a first feature extraction unit, a second feature extraction unit, a third feature extraction unit, a fourth feature extraction unit, a region generation network unit, a segmentation unit, a classification unit, and a detection unit,
wherein the input of the first feature extraction unit serves as the input of the target segmentation model; the output of the first feature extraction unit is connected with the input of the second feature extraction unit; the output of the second feature extraction unit is connected with the input of the third feature extraction unit; the output of the third feature extraction unit is connected with the input of the region generation network unit; the output of the region generation network unit is connected with the input of the fourth feature extraction unit and is further connected with the input of the segmentation unit; the output of the segmentation unit serves as the first output of the target segmentation model and is used for outputting a text segmentation screenshot; the output of the fourth feature extraction unit is connected with the input of the classification unit and is further connected with the input of the detection unit; the output of the classification unit serves as the second output of the target segmentation model and is used for outputting the text category corresponding to the text segmentation screenshot; and the output of the detection unit serves as the third output of the target segmentation model and is used for outputting the vertex coordinates of the text segmentation screenshot.
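The unit wiring of claim 5 can be mirrored as a data-flow skeleton. Every unit here is a placeholder callable (real units would be neural-network modules); only the connections described in the claim are represented, and the unit names are illustrative.

```python
# Data-flow sketch of the target segmentation model in claim 5:
# three stacked feature extractors feed a region generation network,
# whose output branches into the segmentation unit (screenshots) and a
# fourth feature extractor, which feeds the classification unit
# (categories) and the detection unit (vertex coordinates).
def forward(image, units):
    f1 = units["feat1"](image)
    f2 = units["feat2"](f1)
    f3 = units["feat3"](f2)
    regions = units["rpn"](f3)
    shots = units["segment"](regions)      # output 1: text segmentation screenshots
    f4 = units["feat4"](regions)
    categories = units["classify"](f4)     # output 2: text categories
    vertices = units["detect"](f4)         # output 3: vertex coordinates
    return shots, categories, vertices
```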
6. The method of claim 5, wherein each of the first, second, third, and fourth feature extraction units comprises a plurality of sequentially connected attention sub-units; each attention sub-unit comprises a plurality of single-head attention blocks; and each single-head attention block comprises a plurality of attention layers, a pooling layer, a direct-connection (shortcut) layer, and a fully connected layer, wherein the number of attention layers differs between feature extraction units.
7. The method of claim 5, wherein the target segmentation model is trained by:
for each labeled training sample, inputting the training sample into an initial segmentation model to be trained, so as to respectively determine a first loss value corresponding to the output of the region generation network unit, a second loss value corresponding to the output of the segmentation unit, a third loss value corresponding to the output of the classification unit, and a fourth loss value corresponding to the output of the detection unit;
taking the sum of the first, second, third, and fourth loss values as a total loss value;
and adjusting the parameters of the initial segmentation model based on the total loss value to generate the target segmentation model.
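The multi-task objective of claim 7 reduces to an unweighted sum of the four branch losses. A minimal sketch, with assumed loss names; the claim specifies neither the individual loss functions nor any branch weights.

```python
# Sketch of the training objective in claim 7: the region-proposal,
# segmentation, classification, and detection losses are summed into one
# total loss that drives the parameter update of the initial model.
def total_loss(l_rpn, l_seg, l_cls, l_det):
    # unweighted sum, since the claim states "the sum of" the four values
    return l_rpn + l_seg + l_cls + l_det
```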
8. The method of claim 2, wherein for each screenshot data group, text category matching is performed by: determining whether the text categories of the two text segmentation screenshots in the group are the same;
if they are the same, determining that the screenshot data group is successfully matched;
if they are different, determining that the screenshot data group is not matched.
9. The method of claim 4, wherein the labeling includes text mismatch and text category mismatch, and the financial contract audit report is generated by:
when a text category mismatch exists, adding the to-be-signed financial contract and the signed financial contract in their entirety to a page of the financial contract audit report;
and when a text mismatch exists, adding the mismatched text segmentation screenshots of the to-be-signed financial contract and the signed financial contract, respectively, to a page of the financial contract audit report.
10. The method of claim 9, wherein when a text category mismatch and a text mismatch exist together, the to-be-signed financial contract and the signed financial contract are added in their entirety to a page of the financial contract audit report, and a first screenshot box is generated according to the vertex coordinates of the labeled text segmentation screenshots corresponding to the text mismatch and added at an upper layer of the to-be-signed financial contract and the signed financial contract, to generate the financial contract audit report.
11. The method of claim 10, wherein for each group of text segmentation screenshots with the same text category, when a text mismatch exists, the vertex coordinates of the text recognition box are determined and recorded.
12. The method of claim 10, wherein when a text mismatch exists, a second screenshot box is generated according to the vertex coordinates of the text recognition box and added to an upper layer of the corresponding text segmentation screenshot, to generate the financial contract audit report.
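The report-assembly logic of claims 9 and 10 can be sketched abstractly. The record shapes and the `mismatches` input format are assumptions; rendering the actual images and highlight boxes (e.g. with an imaging library) is omitted.

```python
# Illustrative sketch of claims 9-10: a text-category mismatch puts both
# full contracts on a report page; a text mismatch adds only the offending
# screenshot pair, with a highlight box recorded at the screenshot's
# vertex coordinates for later drawing at the page's upper layer.
def build_report(mismatches):
    # mismatches: list of ("category" | "text", payload) tuples, where a
    # "text" payload is the vertex coordinates of the mismatched screenshot
    pages = []
    for kind, payload in mismatches:
        if kind == "category":
            pages.append({"content": "full_contracts", "boxes": []})
        else:
            pages.append({"content": "screenshot_pair", "boxes": [payload]})
    return pages
```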
13. An auditing device for financial contracts, the device comprising:
a pre-selection module, configured to determine a pre-trained target segmentation model according to the template type corresponding to the financial contracts to be audited, wherein the financial contracts to be audited comprise a to-be-signed financial contract drawn up by one party to the contract and a signed financial contract fed back by the other party;
a segmentation module, configured to input the to-be-signed financial contract and the signed financial contract into the target segmentation model in picture format, respectively, so as to output the text segmentation information corresponding to each contract, wherein the text segmentation information comprises at least text segmentation screenshots and the text category corresponding to each text segmentation screenshot;
a recognition module, configured to input the text segmentation screenshots of the to-be-signed financial contract and the signed financial contract, respectively, into a pre-trained optical character recognition model, so as to output a text recognition result corresponding to each text segmentation screenshot;
and an auditing module, configured to perform text matching on the text recognition results corresponding to text segmentation screenshots with the same text category between the to-be-signed financial contract and the signed financial contract, so as to determine whether the to-be-signed financial contract and the signed financial contract satisfy the contract signing standard.
14. An electronic device, comprising: a processor, a memory and a bus, said memory storing machine readable instructions executable by said processor, said processor and said memory communicating over the bus when the electronic device is running, said processor executing said machine readable instructions to perform the steps of the method of auditing a financial contract according to any of claims 1 to 12.
15. A computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, which computer program, when being executed by a processor, performs the steps of the method of auditing financial contracts according to any one of claims 1 to 12.
CN202311000116.0A 2023-08-09 2023-08-09 Financial contract auditing method and device, electronic equipment and storage medium Pending CN117011877A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311000116.0A CN117011877A (en) 2023-08-09 2023-08-09 Financial contract auditing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311000116.0A CN117011877A (en) 2023-08-09 2023-08-09 Financial contract auditing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117011877A true CN117011877A (en) 2023-11-07

Family

ID=88563301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311000116.0A Pending CN117011877A (en) 2023-08-09 2023-08-09 Financial contract auditing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117011877A (en)

Similar Documents

Publication Publication Date Title
US11816165B2 (en) Identification of fields in documents with neural networks without templates
RU2721189C1 (en) Detecting sections of tables in documents by neural networks using global document context
RU2723293C1 (en) Identification of fields and tables in documents using neural networks using global document context
US20230315770A1 (en) Self-executing protocol generation from natural language text
CN111652232B (en) Bill identification method and device, electronic equipment and computer readable storage medium
CN108595544A (en) A kind of document picture classification method
CN112988963B (en) User intention prediction method, device, equipment and medium based on multi-flow nodes
US11574003B2 (en) Image search method, apparatus, and device
CN112069321A (en) Method, electronic device and storage medium for text hierarchical classification
US11232299B2 (en) Identification of blocks of associated words in documents with complex structures
CN113064973A (en) Text classification method, device, equipment and storage medium
CN110956166A (en) Bill marking method and device
CN110362476A (en) Verification method, device, computer equipment and the storage medium of data conversion tools
US11392798B2 (en) Automation rating for machine learning classification
CN113868419A (en) Text classification method, device, equipment and medium based on artificial intelligence
CN112560855A (en) Image information extraction method and device, electronic equipment and storage medium
CN116416632A (en) Automatic file archiving method based on artificial intelligence and related equipment
CN117011877A (en) Financial contract auditing method and device, electronic equipment and storage medium
CN114549177A (en) Insurance letter examination method, device, system and computer readable storage medium
CN114154480A (en) Information extraction method, device, equipment and storage medium
CN113255836A (en) Job data processing method and device, computer equipment and storage medium
TWI768744B (en) Reference document generation method and system
Tan et al. Information Extraction System for Cargo Invoices
US20230140546A1 (en) Randomizing character corrections in a machine learning classification system
Yao et al. Financial Original Voucher Classification and Verification System Based on Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination