CN113743327A - Document identification method, document checking method, device and equipment - Google Patents

Document identification method, document checking method, device and equipment

Info

Publication number
CN113743327A
CN113743327A (application number CN202111046477.XA)
Authority
CN
China
Prior art keywords
document
detected
image
document image
text box
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111046477.XA
Other languages
Chinese (zh)
Inventor
聂雪琴
齐蓉
张芳
张敏华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202111046477.XA
Publication of CN113743327A
Pending legal-status Critical Current

Abstract

The disclosure provides a document identification method, a document checking method, an apparatus, and a device, which can be applied to the field of artificial intelligence. The method comprises the following steps: acquiring a document image to be detected, wherein the document image to be detected comprises at least one effective area; extracting a first text box and a corresponding first field name from the at least one effective area to form a structured feature; in response to the structured feature, classifying the document image to be detected by using a preset document classifier so as to determine the document type corresponding to the document image to be detected; calling a document template of the same type as the determined document type; and acquiring the check data of the document image to be detected according to the document template and the document image to be detected.

Description

Document identification method, document checking method, device and equipment
Technical Field
The present disclosure relates to the fields of artificial intelligence and image recognition, is applicable to the financial field, and more particularly relates to a document identification method, a document checking method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
The documentary letter of credit is a widely used international trade payment method and the main collection method for export trade in China. Under an export letter of credit, the customer first submits the documents required by the credit, such as the commercial invoice, packing list, draft, insurance policy, and certificate of origin, to the bank through an electronic channel for document examination and verification, and later provides the corresponding paper documents to the bank when the documents are dispatched. At that point, the bank must re-examine the paper documents submitted by the customer and judge whether the electronic documents submitted online are consistent with the paper documents submitted offline. At present, this consistency check between electronic and paper documents mainly depends on manual processing by bank business personnel, which is prone to missed identifications, low in accuracy, slow, and inefficient, affecting both the quality and the efficiency of business processing.
Disclosure of Invention
According to a first aspect of the present disclosure, there is provided a document identification method comprising: acquiring a document image to be detected, wherein the document image to be detected comprises at least one effective area; extracting a first text box and a corresponding first field name from the at least one effective area to form a structured feature; in response to the structured feature, classifying the document image to be detected by using a preset document classifier so as to determine the document type corresponding to the document image to be detected; calling a document template of the same type as the determined document type; and acquiring the check data of the document image to be detected according to the document template and the document image to be detected.
According to an embodiment of the present disclosure, extracting the first text box and the corresponding first field name from the at least one effective area specifically includes: extracting the first text box by using a convolutional neural network; extracting a first field value from the first text box by using an OCR model; and extracting a first field name from the first field value, wherein the first field name comprises a specific character and/or a specific character combination.
According to an embodiment of the disclosure, calling the document template of the same type as the determined document type and extracting the check data of the document image to be detected according to the document template includes: matching a preset document template of the same type as the document image to be detected, wherein the document template comprises a second text box and a corresponding field type; updating the second text box onto the document image to be detected so as to determine the standard character acquisition position of the effective area; and identifying a second field value in the second text box and combining it with the field type corresponding to the second text box to obtain the check data.
According to an embodiment of the present disclosure, after identifying the second field value in the second text box, the method further includes: checking whether each word in the second field value exists in a check dictionary, wherein the check dictionary comprises general-field words and professional-field words; recording the ratio of the words found in the check dictionary to the total number of words; and determining that recognition is successful when the ratio is greater than a threshold value.
A second aspect of the present disclosure provides a document checking method, comprising: identifying a first document image to be detected of a first document by using the document identification method, so as to obtain first check data of the first document image to be detected; identifying a second document image to be detected of a second document by using the document identification method, so as to obtain second check data of the second document image to be detected; comparing the first check data with the second check data; and determining that the first document and the second document are consistent when the first check data and the second check data are consistent.
A third aspect of the present disclosure provides a document identification apparatus comprising: an image acquisition module, configured to acquire a document image to be detected, wherein the document image to be detected comprises at least one effective area; a feature extraction module, configured to extract a first text box and a corresponding first field name from the at least one effective area to form a structured feature; a document classification module, configured to classify the document image to be detected by using a preset document classifier in response to the structured feature, so as to determine the document type corresponding to the document image to be detected; a template calling module, configured to call a document template of the same type as the determined document type; and a data extraction module, configured to acquire the check data of the document image to be detected according to the document template and the document image to be detected.
A fourth aspect of the present disclosure provides a document checking apparatus comprising: a first identification module, configured to identify a first document image to be detected of a first document by using the document identification method described above, so as to obtain first check data of the first document image to be detected; a second identification module, configured to identify a second document image to be detected of a second document by using the document identification method described above, so as to obtain second check data of the second document image to be detected; a checking module, configured to compare the first check data with the second check data; and an output module, configured to determine that the first document and the second document are consistent when the first check data and the second check data are consistent.
A fifth aspect of the present disclosure provides an electronic device comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the above document identification and document checking methods.
A sixth aspect of the present disclosure provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above document identification and document checking methods.
A seventh aspect of the present disclosure provides a computer program product comprising a computer program that, when executed by a processor, implements the above document identification and document checking methods.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following description of embodiments of the disclosure, which proceeds with reference to the accompanying drawings, in which:
fig. 1 schematically shows an application scenario diagram of a document identification and document checking method and apparatus according to an embodiment of the present disclosure.
FIG. 2 schematically shows a flow diagram of a document identification method according to an embodiment of the disclosure.
Fig. 3 schematically shows a flow chart for extracting a first text box of an active area and a corresponding first field name.
Fig. 4 schematically shows a flowchart for acquiring collation data.
FIG. 5 schematically shows an updated document image.
Fig. 6 schematically shows a flow chart of a method of verifying the correct rate according to an embodiment of the present disclosure.
FIG. 7 schematically shows a flow diagram of a document reconciliation method according to an embodiment of the present disclosure.
FIG. 8 schematically shows a block diagram of a document identification apparatus according to an embodiment of the disclosure.
FIG. 9 schematically shows a block diagram of a document reconciliation apparatus according to an embodiment of the present disclosure.
FIG. 10 schematically illustrates a block diagram of an electronic device adapted to implement document identification, document reconciliation, in accordance with an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together).
Fig. 1 schematically shows an application scenario diagram of a document identification and document checking method and apparatus according to an embodiment of the present disclosure.
As shown in fig. 1, the system architecture 100 includes terminals (several are shown in the figure, such as terminals 101, 102, 103) and a server (such as server 104), which is communicatively connected to an application 105. The server 104 may be a server that performs document identification and document checking within a bank's document identification and checking platform, or a third-party server that performs document identification and checking tasks independently of such a platform; it should be noted that any other server or processor capable of performing document checking tasks also falls within the scope of the present application.
The server 104 acquires a document image to be detected from a terminal (such as terminal 101, 102, or 103), wherein the document image to be detected comprises at least one effective area. The server 104 extracts a first text box and a corresponding first field name from the effective area and forms a structured feature. In response to the structured feature, a document classifier preset in the server 104 classifies the document image to be detected and extracts the text content to obtain check data.
The server 104 identifies a first to-be-detected document image of a first document by using the document identification method to acquire first check data of the first to-be-detected document image, the server 104 identifies a second to-be-detected document image of a second document by using the document identification method to acquire second check data of the second to-be-detected document image, and the server 104 compares the first check data with the second check data; and when the first check data and the second check data are consistent, determining that the first sheet and the second sheet are consistent.
It should be noted that the document identification method, the document checking method and the device according to the embodiments of the present disclosure may be used in the financial field, and may also be used in any fields other than the financial field. The present disclosure will be described in detail below with reference to the drawings and specific embodiments. It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
In the technical scheme of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
A document identification method and a document check method according to an embodiment of the present disclosure will be described in detail below with reference to fig. 2 to 8 based on the scenario described in fig. 1.
Letter-of-credit documents come in a wide variety, including commercial invoices, packing lists, bills of lading, insurance policies, certificates of origin, and the like, and each type of document contains areas of valid information. Document images obtained through the two acquisition channels (electronically submitted documents and scanned paper documents) share characteristics such as flatness, lack of skew, and consistent image size, so each document image has effective areas of fixed size. Locating and capturing the effective areas in a document image therefore becomes important.
FIG. 2 schematically shows a flow diagram of a document identification method according to an embodiment of the disclosure.
As shown in fig. 2, the method of the embodiment includes operations S210 to S250, and the document identification method may be performed by the server 104.
In operation S210, acquiring a document image to be detected, where the document image to be detected includes at least one effective area;
extracting a first text box and a corresponding first field name of the at least one effective area to form a structured feature in operation S220;
in operation S230, in response to the structural feature, classifying the to-be-detected document image by using a preset document classifier to determine a document type corresponding to the to-be-detected document image;
in operation S240, a document template of the same type as the determined document type of the document image to be detected is called; and
in operation S250, verification data of the to-be-detected document image is acquired according to the document template and the to-be-detected document image.
According to an embodiment of the disclosure, preprocessing may further be performed before recognition of the document image to be detected, including operations such as graying, binarization, image noise reduction, skew correction, and image smoothing, so as to improve the reliability of feature extraction, image segmentation, matching, and recognition. Image quality directly influences the recognition effect; the main purposes of image preprocessing are to remove irrelevant content from the image, keep only the information of interest, enhance the detectability of the relevant information, and simplify the data to the maximum extent.
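The graying and binarization steps just mentioned can be sketched as follows. This is a toy illustration on nested lists; the luminosity weights and the fixed threshold are common defaults, not values specified by this disclosure, and a production system would use an image library such as OpenCV:

```python
def to_grayscale(pixels):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to grayscale
    using the common luminosity weights."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in pixels]

def binarize(gray, threshold=128):
    """Map every grayscale pixel to pure black (0) or white (255)."""
    return [[255 if v >= threshold else 0 for v in row] for row in gray]

# A 1x2 toy "image": one dark pixel, one light pixel.
rgb = [[(10, 10, 10), (240, 240, 240)]]
binary = binarize(to_grayscale(rgb))  # [[0, 255]]
```

In practice an adaptive threshold (e.g. Otsu's method) would replace the fixed value, since scanned documents vary in exposure.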
The document classifier according to embodiments of the present disclosure belongs to the category of supervised classification.
For document classifiers, there are supervised and unsupervised classifications. A supervised classifier obtains an optimal model by training on existing training samples, uses that model to map inputs to corresponding outputs, and judges the outputs, thereby achieving prediction and classification; a supervised classifier thus also has the ability to predict and classify unknown data. In supervised learning the data are labeled in advance, and the training samples contain both features and label information. The label information is obtained by direct labeling. For example, the label information may include a document type label, specifically: commercial invoice, packing list, bill of lading, insurance policy, certificate of origin, and the like. In the prior art, the feature information takes only the position information of the effective area as the feature vector to guide model training.
However, if only the position information of the text box is used as the model's feature vector, the feature vectors are too few, and the classification result becomes inaccurate when the generated first text box fluctuates to some extent.
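The disclosure relies on a trained supervised classifier; purely to illustrate how adding field names to the structured feature enriches the signal beyond box positions alone, here is a hypothetical, untrained sketch that scores each document type by the overlap between the extracted field names and the names expected by that type (all names and templates below are made up for illustration):

```python
def classify_document(features, templates):
    """Pick the document type whose expected field-name set best matches
    the field names in the extracted structured features.
    `features` is a list of (field_name, x, y, w, h) tuples;
    `templates` maps a document type to its expected field-name set."""
    names = {f[0] for f in features}

    def score(doc_type):
        expected = templates[doc_type]
        return len(names & expected) / max(len(expected), 1)

    return max(templates, key=score)

templates = {
    "bill_of_lading": {"B/L No", "Carrier", "Shipper", "Vessel"},
    "commercial_invoice": {"Invoice No", "Seller", "Buyer", "Amount"},
}
features = [("B/L No", 1227, 170, 360, 82), ("Carrier", 826, 378, 679, 145)]
doc_type = classify_document(features, templates)  # "bill_of_lading"
```

With positions alone, a shifted text box could match either type; the field names disambiguate even when box coordinates fluctuate.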
According to an embodiment of the disclosure, a first text box and a corresponding first field name of an effective area in a paper document image and/or an electronic document image are extracted to form a structured feature; the first text box comprises a first text box size and a first text box position, and the first field name can be regarded as a type description of the content in the first text box.
Since the embodiment of the disclosure is mainly described around the use side of the document classification model, and the training side of the document classification model is also based on the structural features, the establishing process of the document classifier is not repeated in the embodiment of the disclosure.
According to the embodiment of the disclosure, in response to the structural features, a preset document classifier classifies the to-be-detected document image to call a document template of the same type and extracts the check data of the to-be-detected document image according to the document template. The verification data comprises a Key-Value relation, namely the type of the effective area corresponds to the content of the effective area, so that the paper document image and the electronic document image can be conveniently verified later.
In the embodiment of the disclosure, the document template of the same type is called, the fixed area in the document template is mapped to the document image to be detected, and then character recognition is carried out on the document image to be detected. The method is beneficial to accelerating the image recognition speed and improving the image recognition efficiency under the condition of examining and verifying a large quantity of documents. Note that one document type corresponds to one preset document template. Of course, based on different used scenes, the method can also be applied to other specific scenes for content checking in the fixed area, and details are not repeated here.
Fig. 3 schematically shows a flow chart for extracting a first text box of an active area and a corresponding first field name.
As shown in fig. 3, the method includes operations S310 to S330.
Extracting the first text box according to a convolutional neural network in operation S310;
extracting a first field value in the first text box according to an OCR model in operation S320; and
in operation S330, a first field name including a specific character and/or a combination of specific characters in the first field value is extracted.
The convolutional neural network is inspired by the biological visual cognition mechanism. It has become one of the research hotspots in many scientific fields, particularly pattern classification: because the network can take the original image directly as input, avoiding complex image preprocessing, it is widely applied to image classification, object recognition, object detection, semantic segmentation, and the like.
According to an embodiment of the disclosure, for the preprocessed document image to be detected, the bounding box of the effective area in the document image is extracted through a CNN model and recorded as the first text box, which comprises the first text box size and the first text box position. It should be noted that the model for extracting the bounding box of the effective area may include convolutional neural network models based on YOLO (You Only Look Once), R-CNN (Region-CNN), Fast R-CNN, and the like. In practical application, the specific algorithm used may be adjusted according to the actual situation, which is not limited in the embodiments of the present disclosure.
According to an embodiment of the disclosure, after the text box of the effective area is obtained, the valid information in the text box is extracted by using a character detection model to obtain a first field value. The character detection model may be a general OCR model; specifically, an OCR engine such as Tesseract may be used. The extracted first field value may be a single character or a combination of characters such as letters, Chinese characters, and digits.
According to an embodiment of the present disclosure, the first field value generally contains many characters, and using the whole first field value as a feature vector is obviously inappropriate, so further feature extraction is required. In general, the field name of each effective area in a document is the first word or phrase on the first line of the field, so the first word or phrase on the first line of the first field value is taken as the first field name.
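A naive sketch of that heuristic: take the text before a colon on the first line, or else the first word. This is illustrative only; a multi-word field name such as "B/L No" would need a richer rule, for example matching the first line against a list of known field names:

```python
def extract_field_name(field_value):
    """Take the first word (or the text before a colon) on the first line
    of an OCR'd field value as the field name."""
    first_line = field_value.splitlines()[0].strip()
    if ":" in first_line:
        return first_line.split(":", 1)[0].strip()
    return first_line.split()[0]

extract_field_name("Carrier: HYUNDAI")        # "Carrier"
extract_field_name("Vessel\nMV EXAMPLE 123")  # "Vessel"
```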
Fig. 4 schematically shows a flowchart for acquiring collation data.
as shown in fig. 4, the method includes operations S410 to S430.
In operation S410, a preset document template of the same type as the document image to be detected is matched, where the document template includes: a second text box and a corresponding field type;
in operation S420, updating the second text box to the document image to be detected to determine a standard character collecting position of the effective area;
in operation S430, a second field value in the second text box is identified and the collation data is obtained in combination with a field type corresponding to the second text box.
According to an embodiment of the present disclosure, the structured data is formed by combining the first field name, the first text box position, and the first text box size, for example: first field name - first text box top-left corner coordinates - first text box width - first text box height. Of course, the position of the text box may also be determined from another corner of the text box, or in other manners, which is not repeated here.
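The structured-data format just described can be sketched as a small record type (the class and method names are illustrative, not from the disclosure; the coordinates come from the B/L No example later in the description):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StructuredFeature:
    """One structured feature: field name plus the text box's top-left
    corner, width, and height."""
    field_name: str
    top_left_x: int
    top_left_y: int
    width: int
    height: int

    def as_vector(self):
        # Numeric part of the feature, ready to concatenate with an
        # encoding of the field name for the classifier.
        return [self.top_left_x, self.top_left_y, self.width, self.height]

f = StructuredFeature("B/L No", 1227, 170, 360, 82)
```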
According to an embodiment of the disclosure, the document template includes other information besides the second text box and the corresponding field type. Referring to Table 1, taking the bill-of-lading document template in Table 1 as an example, the document template further includes a template number, the top-left corner position of each text box, the text box size, and a flag indicating whether to delete the first line.
TABLE 1 (reconstructed in part from the examples below; the full table appears as images in the original publication)

Template No. | Field type | Top-left X | Top-left Y | Width | Height | Delete first line
TPLBL0001    | B/L No     | 1227       | 170        | 360   | 82     | -
TPLBL0001    | Agent      | 826        | 378        | 679   | 145    | Y
For example, for the template numbered TPLBL0001, the corresponding document template includes field types for a plurality of effective areas: B/L No, Carrier, Signal, Shipper, Consignee, Notify Party, Also Notify, Agent, Vessel, Pre-Carriage By, Place of Receipt, Port of Discharge, Port of Loading, Place of Delivery, Final Destination, For Transshipment.
For example, the field type "B/L No" corresponds to a second text box with top-left X 1227, top-left Y 170, width 360, and height 82. As another example, the field type "Agent" corresponds to top-left X 826, top-left Y 378, width 679, height 145, and the delete-first-line flag "Y". The delete-first-line flag indicates that the field type itself appears on the first line, so after the content of the text box is recognized by OCR, the invalid first line is deleted.
According to an embodiment of the disclosure, the second text boxes and the corresponding field types are updated onto the document image to be detected, so that the effective areas at the standard positions are marked. Referring to fig. 5, taking the updated document image to be detected in fig. 5 as an example, the document image is divided into regions by the matched second text boxes, and each region corresponds to one field type. Meanwhile, regions of the document image that contain characters but are not covered by any second text box carry no valid information; because the second text boxes avoid such regions, redundant and meaningless character recognition is removed.
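Computationally, "updating the second text box onto the document image" amounts to restricting recognition to the template's standard positions. A minimal illustrative sketch (a real pipeline would crop pixel arrays and pass each cropped region to the OCR engine):

```python
def crop_box(image, box):
    """Cut the region of a template text box out of a page image.
    `image` is rows of pixel values; `box` is (x, y, width, height)."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

# 4x4 toy page; the template box covers the lower-right 2x2 corner.
page = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 1, 2],
        [0, 0, 3, 4]]
region = crop_box(page, (2, 2, 2, 2))  # [[1, 2], [3, 4]]
```

Only the cropped regions are sent to OCR, which is what skips the uncovered, meaningless areas.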
According to an embodiment of the present disclosure, after the recognition regions are divided, the OCR model is used again to recognize the valid information in the second text boxes of the updated document image to be detected so as to obtain the second field values. For example, recognizing the valid information in fig. 5 yields field types and corresponding field values such as "B/L No": "HDMU AEFZ 025637" and "Carrier": "HYUNDAI".
Preferably, after the second field Value is extracted via OCR, the Key-Value relationship may be established after checking the correctness of the identified second field Value.
Fig. 6 schematically shows a flow chart of a method of verifying the correct rate according to an embodiment of the present disclosure.
The method as shown in fig. 6 includes operations S610 to S630.
In operation S610, checking whether each word in the second field value exists in a checking dictionary including a general field word and a professional field word;
in operation S620, recording the ratio of all checked words to the total word amount;
in operation S630, it is determined that the ratio is greater than the threshold value, indicating that the recognition is successful.
According to an embodiment of the disclosure, the check dictionary comprises professional-field words related to the document image content and general-field words unrelated to it. A word in the recognized text that exists in the check dictionary is judged to be a valid word; the ratio of valid words to the total number of recognized words is calculated, and if the ratio is above the threshold, the character recognition of the document image is considered valid. It should be noted that, to keep the check meaningful, the number of words added during dictionary construction should not be too large: an over-large dictionary may still judge recognition to be valid when characters are wrong. An appropriate word capacity is therefore chosen for the check dictionary.
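A sketch of the dictionary check, under the assumptions (not stated in the disclosure) of a 0.8 threshold and case-insensitive matching:

```python
def recognition_is_valid(words, check_dict, threshold=0.8):
    """Return True when the share of recognized words found in the check
    dictionary (general-field plus professional-field words) exceeds the
    threshold."""
    if not words:
        return False
    hits = sum(1 for w in words if w.lower() in check_dict)
    return hits / len(words) > threshold

check_dict = {"carrier", "vessel", "hyundai", "agent", "port"}
recognition_is_valid(["Carrier", "HYUNDAI", "Vessel", "Agent", "Port"],
                     check_dict)  # True: 5/5 words are in the dictionary
recognition_is_valid(["C4rr1er", "HYUNDAI", "Vess3l", "Agent", "Port"],
                     check_dict)  # False: only 3/5, below the threshold
```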
After recognition succeeds, the second field values in the second text boxes of the document template image are extracted, and structured data with a one-to-one correspondence between field values and field names is established.
According to an embodiment of the disclosure, the character information checked against the data dictionary, the effective-area position information corresponding to the character information, and the name of the corresponding position together form the check data, and finally Key-Value structured check data is output.
FIG. 7 schematically shows a flow diagram of a document reconciliation method according to an embodiment of the present disclosure.
The method as shown in fig. 7 includes operations S710 to S740.
In operation S710, a first to-be-detected document image of a first document is identified using the document identification method described above, to obtain first check data of the first to-be-detected document image;
in operation S720, a second to-be-detected document image of a second document is identified using the document identification method, to obtain second check data of the second to-be-detected document image;
in operation S730, the first check data and the second check data are compared; and
in operation S740, when the first check data and the second check data are consistent, the first document and the second document are determined to be consistent.
According to the embodiment of the disclosure, after the second field values extracted from the paper document image and from the electronic document image are associated with their field types as Key-Value pairs, the two are compared to judge whether the Key-Value structured data of the paper document image is consistent with that of the electronic document image. If they are consistent, a result indicating consistency is returned; if not, the differing positions are marked to form a difference list for review by service personnel. The difference list comprises three dimensions: document type, field type, and second field value.
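The comparison step above can be sketched as a field-by-field diff of the two Key-Value structures; every name below is an illustrative stand-in, and the entries in the difference list carry the three dimensions the text names (document type, field type, second field value).

```python
# Sketch of the reconciliation step: compare paper and electronic Key-Value
# data and, where they differ, emit a difference entry for review.
def reconcile(doc_type, paper_data, electronic_data):
    differences = []
    for field in set(paper_data) | set(electronic_data):
        paper_value = paper_data.get(field)
        electronic_value = electronic_data.get(field)
        if paper_value != electronic_value:
            differences.append({
                "document_type": doc_type,
                "field_type": field,
                "values": (paper_value, electronic_value),
            })
    return differences  # an empty list means the two documents are consistent

diffs = reconcile(
    "remittance slip",
    {"payer": "ACME Ltd.", "amount": "1,000.00"},
    {"payer": "ACME Ltd.", "amount": "1,200.00"},
)
```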
According to the embodiment of the disclosure, the consistency of the electronic document and the paper document is identified and judged rapidly and accurately; if differences exist, the difference points are listed in detail, providing effective support for subsequent business handling and greatly improving the quality of that handling.
FIG. 8 schematically shows a block diagram of a document identification apparatus according to an embodiment of the disclosure.
As shown in fig. 8, the document identification apparatus 800 includes: an image acquisition module 810, a feature extraction module 820, a document classification module 830, a template retrieval module 840, and a data extraction module 850. Specifically:
The image acquisition module 810 is configured to acquire a document image to be detected, the document image to be detected comprising at least one effective area;
the feature extraction module 820 is configured to extract a first text box and a corresponding first field name of the at least one effective area to form a structured feature;
the document classification module 830 is configured to classify, in response to the structured feature, the document image to be detected using a preset document classifier, so as to determine the document type corresponding to the document image to be detected;
the template retrieval module 840 is configured to retrieve a document template of the same type as the determined document type of the document image to be detected; and
the data extraction module 850 is configured to acquire the check data of the document image to be detected according to the document template and the document image to be detected.
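The data flow through the five modules of Fig. 8 can be sketched as a simple pipeline. Every helper passed in below is a hypothetical stand-in for the corresponding module, not an API defined by the disclosure.

```python
# Sketch of wiring the five modules into one identification pipeline.
def identify_document(image, extract_regions, extract_features,
                      classify, templates, extract_check_data):
    regions = extract_regions(image)            # image acquisition module 810
    features = extract_features(regions)        # feature extraction module 820
    doc_type = classify(features)               # document classification module 830
    template = templates[doc_type]              # template retrieval module 840
    return extract_check_data(image, template)  # data extraction module 850

# Toy wiring showing the data flow only; real modules would use a CNN
# text-box detector, an OCR model, and a trained classifier.
result = identify_document(
    image="fake-image",
    extract_regions=lambda img: ["region-1"],
    extract_features=lambda regs: {"boxes": regs},
    classify=lambda feats: "invoice",
    templates={"invoice": "invoice-template"},
    extract_check_data=lambda img, tpl: {"template": tpl, "fields": {}},
)
```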
FIG. 9 schematically shows a block diagram of a document reconciliation apparatus according to an embodiment of the present disclosure.
As shown in fig. 9, the document collating apparatus 900 includes: a first identification module 910, a second identification module 920, a verification module 930, and an output module 940. Specifically:
The first identification module 910 is configured to identify a first to-be-detected document image of a first document using the document identification method described above, to obtain first check data of the first to-be-detected document image;
the second identification module 920 is configured to identify a second to-be-detected document image of a second document using the document identification method, to obtain second check data of the second to-be-detected document image;
the verification module 930 is configured to compare the first check data with the second check data; and
the output module 940 is configured to determine that the first document and the second document are consistent when the first check data and the second check data are consistent.
FIG. 10 schematically illustrates a block diagram of an electronic device adapted to implement the document identification and document reconciliation methods according to an embodiment of the present disclosure.
As shown in fig. 10, an electronic device 1000 according to an embodiment of the present disclosure includes a processor 1001, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage section 1008 into a random access memory (RAM) 1003. The processor 1001 may include a general-purpose microprocessor (e.g., a CPU), an instruction-set processor and/or a related chipset, and/or a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)), and the like. The processor 1001 may also include onboard memory for caching. The processor 1001 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the present disclosure.
In the RAM 1003, various programs and data necessary for the operation of the electronic device 1000 are stored. The processor 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. The processor 1001 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 1002 and/or the RAM 1003. Note that the programs may also be stored in one or more memories other than the ROM 1002 and the RAM 1003. The processor 1001 may also perform various operations of the method flow by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 1000 may also include an input/output (I/O) interface 1005, which is also connected to the bus 1004. The electronic device 1000 may also include one or more of the following components connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card or a modem. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as necessary, so that a computer program read therefrom can be installed into the storage section 1008 as needed.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium and may include, but is not limited to: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. According to embodiments of the present disclosure, the computer-readable storage medium may include the ROM 1002 and/or the RAM 1003 described above, and/or one or more memories other than the ROM 1002 and the RAM 1003.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method illustrated in the flowcharts. When the computer program product runs in a computer system, the program code causes the computer system to implement the methods provided by the embodiments of the present disclosure.
The computer program performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure when executed by the processor 1001. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted in the form of a signal on a network medium, distributed, downloaded and installed via the communication part 1009, and/or installed from the removable medium 1011. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for the computer programs provided by the embodiments may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, and the "C" language. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. Two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved, according to the embodiments of the disclosure. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, such combinations and/or sub-combinations may be made without departing from the spirit or teaching of the present disclosure, and all of them fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A document identification method, comprising:
acquiring a document image to be detected, wherein the document image to be detected comprises at least one effective area;
extracting a first text box and a corresponding first field name of the at least one effective area to form a structural feature;
in response to the structural features, classifying the to-be-detected document image by using a preset document classifier so as to determine a document type corresponding to the to-be-detected document image;
calling a document template of the same type as the determined document type of the document image to be detected; and
acquiring the check data of the document image to be detected according to the document template and the document image to be detected.
2. The method according to claim 1, wherein the extracting the first text box and the corresponding first field name of the at least one valid region specifically comprises:
extracting the first text box according to a convolutional neural network;
extracting a first field value in the first text box according to an OCR model; and
extracting a first field name in the first field value, wherein the first field name comprises a specific character and/or a specific character combination.
3. The method according to claim 1, wherein the calling the document template of the same type as the determined document type of the document image to be detected, and acquiring the check data of the document image to be detected according to the document template, specifically comprises:
matching a preset document template of the same type as the document image to be detected, wherein the document template comprises: a second text box and a corresponding field type;
updating the second text box to the document image to be detected so as to determine the standard character acquisition position of the effective area; and
identifying a second field value in the second text box and combining it with the field type corresponding to the second text box to obtain the check data.
4. The method of claim 3, wherein after identifying the second field value in the second text box, further comprising:
checking whether each word in the second field value exists in a checking dictionary, wherein the checking dictionary comprises a general field word and a professional field word;
recording the ratio of all checked words to the total word amount; and
when the ratio is determined to be greater than the threshold value, determining that the recognition is successful.
5. A document reconciliation method, comprising:
identifying a first to-be-detected sheet image of a first sheet using the sheet identification method of any of claims 1-4 to obtain first verification data of the first to-be-detected sheet image;
identifying a second to-be-detected document image of a second document using the document identification method of any of claims 1-4 to obtain second verification data for the second to-be-detected document image;
comparing the first collation data with the second collation data; and
when the first check data and the second check data are consistent, determining that the first document and the second document are consistent.
6. A document identification device, comprising:
an image acquisition module, configured to acquire a document image to be detected, wherein the document image to be detected comprises at least one effective area;
a feature extraction module, configured to extract a first text box and a corresponding first field name of the at least one effective area to form a structured feature;
a document classification module, configured to classify, in response to the structured feature, the document image to be detected using a preset document classifier, so as to determine the document type corresponding to the document image to be detected;
a template calling module, configured to call a document template of the same type as the determined document type of the document image to be detected; and
a data extraction module, configured to acquire the check data of the document image to be detected according to the document template and the document image to be detected.
7. A document reconciliation apparatus comprising:
a first identification module, configured to identify a first to-be-detected document image of a first document using the document identification method of claim 1, to obtain first check data of the first to-be-detected document image;
a second identification module, configured to identify a second to-be-detected document image of a second document using the document identification method of claim 1, to obtain second check data of the second to-be-detected document image;
a verification module, configured to compare the first check data with the second check data; and
an output module, configured to determine that the first document and the second document are consistent when the first check data and the second check data are consistent.
8. An electronic device, comprising:
one or more processors;
storage means for storing executable instructions that, when executed by the one or more processors, implement the method of any one of claims 1-5.
9. A computer readable storage medium having stored thereon instructions that, when executed, implement the method of any one of claims 1-5.
10. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-5.
CN202111046477.XA 2021-09-07 2021-09-07 Document identification method, document checking method, device and equipment Pending CN113743327A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111046477.XA CN113743327A (en) 2021-09-07 2021-09-07 Document identification method, document checking method, device and equipment


Publications (1)

Publication Number Publication Date
CN113743327A 2021-12-03

Family

ID=78736760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111046477.XA Pending CN113743327A (en) 2021-09-07 2021-09-07 Document identification method, document checking method, device and equipment

Country Status (1)

Country Link
CN (1) CN113743327A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914907A (en) * 2014-04-14 2014-07-09 陕西海基业高科技实业有限公司 Paper bill information identification and checking system and application method
WO2018010657A1 (en) * 2016-07-15 2018-01-18 北京市商汤科技开发有限公司 Structured text detection method and system, and computing device
CN111242034A (en) * 2020-01-14 2020-06-05 支付宝(杭州)信息技术有限公司 Document image processing method and device, processing equipment and client
CN111275102A (en) * 2020-01-19 2020-06-12 深圳壹账通智能科技有限公司 Multi-certificate type synchronous detection method and device, computer equipment and storage medium
CN111931664A (en) * 2020-08-12 2020-11-13 腾讯科技(深圳)有限公司 Mixed note image processing method and device, computer equipment and storage medium
CN113033534A (en) * 2021-03-10 2021-06-25 北京百度网讯科技有限公司 Method and device for establishing bill type identification model and identifying bill type


Similar Documents

Publication Publication Date Title
US11816165B2 (en) Identification of fields in documents with neural networks without templates
WO2020233270A1 (en) Bill analyzing method and analyzing apparatus, computer device and medium
CA3027038C (en) Document field detection and parsing
US11195006B2 (en) Multi-modal document feature extraction
US9626555B2 (en) Content-based document image classification
CN109117814B (en) Image processing method, image processing apparatus, electronic device, and medium
US9202146B2 (en) Duplicate check image resolution
US11176361B2 (en) Handwriting detector, extractor, and language classifier
CN112036295B (en) Bill image processing method and device, storage medium and electronic equipment
CN114724166A (en) Title extraction model generation method and device and electronic equipment
CN111368632A (en) Signature identification method and device
CN114971294A (en) Data acquisition method, device, equipment and storage medium
CN114140649A (en) Bill classification method, bill classification device, electronic apparatus, and storage medium
WO2022103564A1 (en) Fraud detection via automated handwriting clustering
CN113450075A (en) Work order processing method and device based on natural language technology
CN111784053A (en) Transaction risk detection method, device and readable storage medium
CN113743327A (en) Document identification method, document checking method, device and equipment
US20230035995A1 (en) Method, apparatus and storage medium for object attribute classification model training
CN114663899A (en) Financial bill processing method, device, equipment and medium
CN115294593A (en) Image information extraction method and device, computer equipment and storage medium
CN114443834A (en) Method and device for extracting license information and storage medium
CN113780116A (en) Invoice classification method and device, computer equipment and storage medium
CN112036465A (en) Image recognition method, device, equipment and storage medium
CN113888760B (en) Method, device, equipment and medium for monitoring violation information based on software application
US20210342901A1 (en) Systems and methods for machine-assisted document input

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination