CN112329708A - Bill identification method and device - Google Patents
- Publication number: CN112329708A (application number CN202011330551.6A)
- Authority: CN (China)
- Prior art keywords: key field, identification result, machine identification, result, sample
- Prior art date
- Legal status: Granted (status as listed by Google Patents; an assumption, not a legal conclusion)
Classifications
- G06V30/412: Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables
- G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06N20/20: Ensemble learning
- G06N3/045: Combinations of networks
- G06N3/08: Neural network learning methods
- G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
- G06V30/413: Classification of content, e.g. text, photographs or tables
Abstract
The present application relates to the fields of image recognition and natural language processing, and discloses a bill recognition method and apparatus. The method includes: acquiring a bill picture; acquiring a machine recognition result of a key field in the bill picture and feature data associated with the machine recognition result; obtaining a confidence of the machine recognition result of the key field based on the feature data associated with the machine recognition result and a binary classification model corresponding to the key field, where the binary classification models correspond one-to-one with the key fields; and finally judging the confidence of the machine recognition result of the key field, and in response to determining that the confidence meets a preset condition, determining the machine recognition result of the key field as the recognition result of the key field in the bill picture. Because the confidence is derived from the feature data associated with the machine recognition result, the accuracy of the confidence is improved; determining the bill recognition result according to this confidence therefore improves the accuracy of the bill recognition result.
Description
Technical Field
The present application relates to the field of computer technology, in particular to the fields of image recognition and natural language processing, and specifically to a bill recognition method and apparatus.
Background
With continuing advances in science and technology, more and more bills need to be audited. A bill is first machine-identified to obtain identification results for its key fields, such as the amount, date, customer name, and purpose, and the bill is then audited according to those machine identification results. However, many bill photographs are blurry, bill formats differ greatly from one another, and shooting angles vary widely, so the key fields in bills are difficult to recognize and the accuracy of their machine identification results is low. Therefore, to improve the accuracy of the bill identification result, the average recognition probability of a key field has been used as the confidence of that key field, and the accuracy of the key field's identification result has been improved by raising the confidence threshold.
However, as the confidence threshold is raised, the accuracy of the key-field identification results increases but their recall rate decreases, so that many correctly identified key fields are rejected.
Disclosure of Invention
Embodiments of the present application provide a bill identification method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present application provides a bill identification method, including: acquiring a bill picture, and acquiring a machine identification result of a key field in the bill picture and feature data associated with the machine identification result; obtaining a confidence of the machine identification result of the key field based on the feature data associated with the machine identification result and a binary classification model corresponding to the key field, where the binary classification models correspond one-to-one with the key fields; and judging the confidence of the machine identification result of the key field, and in response to determining that the confidence meets a preset condition, determining the machine identification result of the key field as the identification result of the key field in the bill picture.
In some embodiments, the binary classification model is obtained through the following steps: acquiring a sample bill picture set, where the sample bill picture set includes a training picture set; acquiring sample feature data associated with the machine identification result of a sample key field in the training picture set and a labeling result of that machine identification result, where the labeling result indicates whether the machine identification result is correct; and training on the sample feature data of the sample key field and the corresponding labeling results to obtain a binary classification model corresponding to the sample key field.
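The training step above can be sketched as follows. This is a minimal illustration only: the patent does not prescribe a model family, so a simple logistic-regression classifier trained with stochastic gradient descent stands in for any binary classifier, and it assumes (hypothetically) that each sample's feature data has already been flattened into a numeric vector.

```python
import math

def train_binary_classifier(features, labels, lr=0.5, epochs=1000):
    """Train a tiny logistic-regression binary classifier.

    features: list of equal-length numeric feature vectors, one per sample
              machine identification result
    labels:   1 if the labeled machine identification result was correct, else 0
    Returns (weights, bias) for use with predict_confidence().
    """
    n = len(features[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid: probability "correct"
            g = p - y                        # gradient of log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_confidence(model, x):
    """Confidence that a machine identification result is correct."""
    w, b = model
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))
```

In the scheme described here, one such model would be trained per key field (amount, date, customer name, purpose, and so on), each on that field's own labeled samples.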
In some embodiments, the binary classification model further has a model identifier corresponding to the key field, and obtaining the confidence of the machine identification result of the key field based on the feature data associated with the machine identification result and the binary classification model corresponding to the key field includes: acquiring the model identifier corresponding to the key field, and calling the binary classification model corresponding to that model identifier; and inputting the feature data associated with the machine identification result into the binary classification model to obtain the confidence of the machine identification result of the key field.
In some embodiments, the sample bill picture set further includes a verification picture set, and the method further includes: acquiring verification feature data of the machine identification result of a verification key field in the verification picture set and a labeling result of that machine identification result, where the verification key field is the same as the sample key field; and in response to obtaining the binary classification model corresponding to the sample key field, adjusting the parameters of the binary classification model based on the verification feature data and labeling results of the machine identification result of the verification key field to obtain an adjusted binary classification model.
In some embodiments, the sample bill picture set further includes a test picture set, and the method further includes: acquiring test feature data of the machine identification result of a test key field in the test picture set and a labeling result of that machine identification result, where the test key field is the same as the sample key field; and in response to obtaining the adjusted binary classification model, testing the adjusted binary classification model based on the test feature data and labeling results of the machine identification result of the test key field to obtain the accuracy and recall rate of the adjusted binary classification model.
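The metrics computed in this testing step can be sketched as below, treating label 1 as "the machine identification result was correct" and prediction 1 as "the adjusted model accepted the result". The patent's "accuracy" is computed here as precision, its usual pairing with recall; this is an interpretive assumption.

```python
def precision_recall(y_true, y_pred):
    """Precision and recall over binary labels and predictions (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

On the test picture set, y_pred would come from thresholding the adjusted model's confidences for the test key field.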
In some embodiments, the method further includes: in response to determining that the confidence of the machine identification result of the key field does not meet the preset condition, sending the bill picture to which that machine identification result belongs to a terminal so that a user can verify it; and in response to receiving the user verification result returned by the terminal, determining the identification result of the key field in the bill picture.
In a second aspect, an embodiment of the present application provides a bill identification apparatus, including: a first acquisition module configured to acquire a bill picture, and to acquire a machine identification result of a key field in the bill picture and feature data associated with the machine identification result; a classification module configured to obtain a confidence of the machine identification result of the key field based on the feature data associated with the machine identification result and a binary classification model corresponding to the key field, where the binary classification models correspond one-to-one with the key fields; and a determining module configured to judge the confidence of the machine identification result of the key field, and in response to determining that the confidence meets a preset condition, to determine the machine identification result of the key field as the identification result of the key field in the bill picture.
In some embodiments, the binary classification model is obtained by means of the following modules: a second acquisition module configured to acquire a sample bill picture set including a training picture set, and to acquire sample feature data associated with the machine identification result of a sample key field in the training picture set and a labeling result of that machine identification result, where the labeling result indicates whether the machine identification result is correct; and a training module configured to train on the sample feature data of the sample key field and the corresponding labeling results to obtain a binary classification model corresponding to the sample key field.
In some embodiments, the binary classification model further has a model identifier corresponding to the key field, and the classification module includes: an acquisition unit configured to acquire the model identifier corresponding to the key field and to call the binary classification model corresponding to that model identifier; and a classification unit configured to input the feature data associated with the machine identification result into the binary classification model to obtain the confidence of the machine identification result of the key field.
In some embodiments, the sample bill picture set further includes a verification picture set, and the apparatus further includes: a third acquisition module configured to acquire verification feature data of the machine identification result of a verification key field in the verification picture set and a labeling result of that machine identification result, where the verification key field is the same as the sample key field; and an adjusting module configured, in response to obtaining the binary classification model corresponding to the sample key field, to adjust the parameters of the binary classification model based on the verification feature data and labeling results to obtain an adjusted binary classification model.
In some embodiments, the sample bill picture set further includes a test picture set, and the apparatus further includes: a fourth acquisition module configured to acquire test feature data of the machine identification result of a test key field in the test picture set and a labeling result of that machine identification result, where the test key field is the same as the sample key field; and a testing module configured, in response to obtaining the adjusted binary classification model, to test the adjusted binary classification model based on the test feature data and labeling results to obtain the accuracy and recall rate of the adjusted binary classification model.
In some embodiments, the apparatus further includes: a sending module configured, in response to determining that the confidence of the machine identification result of the key field does not meet the preset condition, to send the bill picture to which that machine identification result belongs to a terminal so that a user can verify it; and the determining module is further configured, in response to receiving the user verification result returned by the terminal, to determine the identification result of the key field in the bill picture.
In a third aspect, an embodiment of the present application provides an electronic device, which includes one or more processors; a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement a method as in any embodiment of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method according to any one of the embodiments of the first aspect.
In the method of the present application, a bill picture is acquired; a machine identification result of a key field in the bill picture and feature data associated with that result are acquired; a confidence of the machine identification result of the key field is obtained based on the feature data and the binary classification model corresponding to the key field, where the binary classification models correspond one-to-one with the key fields; and finally the confidence is judged, and in response to determining that it meets a preset condition, the machine identification result of the key field is determined as the identification result of the key field in the bill picture. Because the confidence is derived from feature data associated with the machine identification result, including data beyond the bare recognition probability, the correlation between the confidence and each item of feature data is strengthened and the accuracy of the confidence is improved. Determining the bill identification result according to this confidence therefore improves the accuracy of the bill identification result. At the same time, unlike the approach of simply raising a confidence threshold, accuracy is ensured without continually sacrificing recall, so fewer correct bills are lost, improving both the accuracy and the completeness of the bill identification results.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a document identification method according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a bill identification method according to the present application;
FIG. 4 is a flow diagram for one embodiment of obtaining a two-class model according to the present application;
FIG. 5 is a flow diagram of another embodiment of obtaining a two-class model according to the present application;
FIG. 6 is a schematic view of one embodiment of a document identification device according to the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing a server according to embodiments of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the ticket identification method of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 104, 105, a network 106, and servers 101, 102, 103. The network 106 serves as a medium for providing communication links between the terminal devices 104, 105 and the servers 101, 102, 103. Network 106 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the servers 101, 102, 103 via the network 106 via the terminal devices 104, 105 to receive or transmit information or the like. The end devices 104, 105 may have installed thereon various applications such as data processing applications, instant messaging tools, social platform software, search-type applications, shopping-type applications, and the like.
The terminal devices 104, 105 may be hardware or software. When the terminal device is hardware, it may be various electronic devices having a display screen and supporting communication with the server, including but not limited to a smart phone, a tablet computer, a laptop portable computer, a desktop computer, and the like. When the terminal device is software, the terminal device can be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. And is not particularly limited herein.
The servers 101, 102, 103 may be servers that provide various services, such as background servers that receive requests sent by terminal devices with which communication connections are established. The background server can receive and analyze the request sent by the terminal device, and generate a processing result.
The servers 101, 102, and 103 may audit acquired bill pictures by performing image recognition and natural language processing on them to obtain the corresponding recognition results. That is, at least one bill picture may be acquired through the network 106 and recognized, for example by OCR, to obtain the machine recognition result of a key field in the bill picture and the feature data associated with that result. Based on the feature data and the binary classification model corresponding to the key field, a confidence of the machine recognition result of the key field is obtained and judged; if the confidence is determined to meet a preset condition, the machine recognition result of the key field is determined as the recognition result of the key field in the bill picture.
The server may be hardware or software. When the server is hardware, it may be various electronic devices that provide various services to the terminal device. When the server is software, it may be implemented as a plurality of software or software modules for providing various services to the terminal device, or may be implemented as a single software or software module for providing various services to the terminal device. And is not particularly limited herein.
Note that the ticket identification method provided by the embodiment of the present disclosure may be executed by the servers 101, 102, and 103. Accordingly, the ticket recognition apparatus may be provided in the server 101, 102, 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring to fig. 2, fig. 2 shows a schematic flow diagram 200 of an embodiment of a ticket identification method that can be applied to the present application. The bill identification method comprises the following steps:
Step 210, acquiring a bill picture, and acquiring a machine identification result of a key field in the bill picture and feature data associated with the machine identification result.
In this embodiment, a user may photograph a bill to obtain a bill picture and then upload the photographed bill picture through a terminal for auditing. An execution body (e.g., the servers 101, 102, 103 in FIG. 1) may acquire the bill picture uploaded by the user from the terminal through a network interface. The bill picture may include at least one key field to be audited, such as the amount, date, customer name, and purpose in the bill.
The execution body may obtain the machine identification result of a key field in the bill picture, together with the feature data associated with that result, by performing machine identification, key-field detection, data construction, and the like on the acquired bill picture. The feature data characterize data relevant to the correctness of the machine identification result of the key field and are used to determine its confidence. Besides the recognition probability of the machine identification result, the feature data may include the detection probability of the key field and construction data corresponding to the key field, where the construction data may be data jointly composed of the key field and other key fields associated with it.
That is, the execution body may perform machine recognition on the acquired bill picture to obtain the machine identification result of the key field and the recognition probability of that result. The execution body may also detect the key field in the bill picture to obtain the detection probability of the key field. Meanwhile, the execution body may construct the construction data corresponding to the key field from the key field and its associated key fields: for example, if the key field is an amount in the bill, its construction data may be an equation formed from the amounts in the bill. The execution body may then take the recognition probability of the machine identification result, the detection probability of the key field, and the construction data together as the feature data associated with the machine identification result of the key field.
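Assembling such feature data for an amount field can be sketched as below. The function name and the particular construction check (that recognized line-item amounts sum to the recognized total) are illustrative assumptions; the description only requires that construction data combine the key field with its associated key fields.

```python
def build_feature_vector(recognition_prob, detection_prob,
                         item_amounts, total_amount):
    """Feature data associated with the machine identification result of an
    amount field: its recognition probability, its detection probability, and
    one construction feature checking an equation among the bill's amounts."""
    amounts_consistent = 1.0 if abs(sum(item_amounts) - total_amount) < 0.01 else 0.0
    return [recognition_prob, detection_prob, amounts_consistent]
```

The resulting vector is what the key field's binary classification model would consume to produce a confidence.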
Step 220, obtaining the confidence of the machine identification result of the key field based on the feature data associated with the machine identification result and the binary classification model corresponding to the key field.
In this embodiment, after obtaining the machine identification result of the key field and its associated feature data, the execution body obtains, according to the key field, the binary classification model corresponding to it. The binary classification models correspond one-to-one with the key fields: different key fields correspond to different binary classification models, and each model is configured to output a confidence of the machine identification result of its key field from the feature data of that result. The execution body can thus obtain the confidence of the machine identification result of the key field from the feature data associated with that result and the corresponding binary classification model.
For example, if the key field is an amount, then after acquiring the feature data of the machine identification result of the amount, the execution body also acquires the binary classification model corresponding to the amount, and obtains the confidence of the machine identification result of the amount from the feature data and that model.
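The one-to-one mapping from key fields to binary classification models can be sketched as a registry keyed by the model identifier. The registry structure and the names below are assumptions for illustration, not part of the patent.

```python
# model identifier (here simply the key-field name) -> scoring function
MODEL_REGISTRY = {}

def register_model(model_id, model_fn):
    """model_fn maps a feature vector to a confidence in [0, 1]."""
    MODEL_REGISTRY[model_id] = model_fn

def field_confidence(model_id, feature_vector):
    """Call the binary classification model corresponding to the identifier."""
    return MODEL_REGISTRY[model_id](feature_vector)
```

Here register_model("amount", ...) and register_model("date", ...) would each wrap a classifier trained on that field's own labeled samples.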
Step 230, judging the confidence of the machine identification result of the key field, and in response to determining that the confidence meets a preset condition, determining the machine identification result of the key field as the identification result of the key field in the bill picture.
In this embodiment, after obtaining the confidence level of the machine identification result of the key field, the execution subject judges whether the confidence level meets a preset condition. The preset condition is used to screen confidence levels that meet the auditing requirement; for example, the condition may be that the confidence level of the key field is greater than or equal to a confidence threshold, where the threshold may be set according to actual needs and is not specifically limited in this application. When it is determined that the confidence level of the machine identification result of the key field meets the preset condition, the execution subject can determine that the machine identification result of the key field is correct, and then determines it as the identification result of the key field in the bill picture.
Optionally, if the bill picture includes a single key field, the execution subject may judge the confidence level of the machine identification result of that key field, and when the confidence level meets the preset condition, directly determine the machine identification result as the final identification result of the key field.
Optionally, if the bill picture includes a plurality of key fields, the execution subject may judge the confidence level of the machine identification result of each key field separately. When the confidence levels of the machine identification results of all the key fields meet the preset condition, the execution subject may directly determine the machine identification result of each key field as the final identification result of the key fields in the bill picture. When only the confidence levels of some of the key fields meet the preset condition, the execution subject determines that the machine identification results include incorrect results, and the current machine identification results cannot be determined as the identification results of the key fields in the bill picture.
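The single-field and multi-field decision rules above can be sketched as follows, assuming per-field confidence levels are collected in a dictionary (threshold value illustrative):

```python
CONFIDENCE_THRESHOLD = 0.9  # illustrative; set per auditing requirements

def all_fields_pass(field_confidences):
    """Accept the machine identification results only if EVERY key field's
    confidence level meets the preset condition; otherwise the bill picture
    cannot use the current machine identification results."""
    return all(c >= CONFIDENCE_THRESHOLD for c in field_confidences.values())
```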
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the bill identifying method according to the present embodiment.
In the application scenario of fig. 3, a user uploads a bill picture on the terminal 301, and the terminal 301 sends the bill picture to the server 302. The server 302 performs machine recognition on the bill picture and obtains the machine identification result of the "amount" in the bill picture together with the feature data of that result. The server 302 then obtains the confidence level of the machine identification result of the amount according to the feature data associated with the result and the binary classification model corresponding to the amount. Finally, the server 302 judges the confidence level of the machine identification result of the amount; if it determines that the confidence level meets the preset condition, it determines the machine identification result of the amount as the identification result of the amount in the bill picture and sends that identification result to the terminal 301.
The bill identification method provided by the embodiment of the disclosure acquires a bill picture, obtains the machine identification result of the key field in the bill picture and the feature data associated with that result, obtains the confidence level of the machine identification result of the key field based on the feature data associated with the machine identification result and the binary classification model corresponding to the key field (the binary classification models being in one-to-one correspondence with the key fields), and finally judges the confidence level of the machine identification result of the key field, determining the machine identification result of the key field as the identification result of the key field in the bill picture in response to determining that the confidence level meets the preset condition. In this way the confidence level of the machine identification result is related to the feature data of the machine identification result, which strengthens the association between the confidence level and each item of feature data and adds data other than the recognition probability, improving the accuracy of the confidence level of the machine identification result. Determining the bill identification result according to this confidence level improves the accuracy of the bill identification result. Meanwhile, on the basis of guaranteeing the accuracy of the machine identification result, the reduction in recall rate that accompanies ever-higher accuracy requirements is mitigated, so that fewer correct bills are lost and both the accuracy and the comprehensiveness of the bill identification result are improved.
With further reference to FIG. 4, a flow 400 of one embodiment of obtaining a classification model is shown. The process 400 may include the following steps:
In this step, the execution subject may acquire a sample bill picture set in advance, where the sample bill picture set may include a plurality of sample bill pictures, and the execution subject may group the plurality of sample bill pictures in the acquired sample bill picture set to obtain a training picture set, where the training picture set is used for training the binary classification model.
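The grouping of the sample bill picture set into a training picture set (and, as in the later flows, verification and test picture sets) can be sketched as below; the split fractions are illustrative assumptions:

```python
def split_sample_set(pictures, train_frac=0.6, val_frac=0.2):
    """Group the sample bill picture set into training, verification, and
    test picture sets; the remaining fraction goes to the test set."""
    n = len(pictures)
    i = int(n * train_frac)
    j = int(n * (train_frac + val_frac))
    return pictures[:i], pictures[i:j], pictures[j:]
```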
In this step, the execution subject performs machine recognition, key field detection, data construction, and the like on each sample bill picture in the training picture set to obtain sample feature data of the machine identification result of the sample key field in the sample bill picture. The machine identification result of the sample key field in each sample bill picture is then labeled to obtain a labeling result indicating whether the machine identification result of the sample key field is correct. The labeling result may be expressed in numerical form: for example, if the machine identification result of the sample key field is correct, the labeling result is 1; if it is wrong, the labeling result is 0.
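The 1/0 labeling rule just described can be written directly; the function name is illustrative:

```python
def label_result(machine_result, ground_truth):
    """Labeling result in numerical form: 1 if the machine identification
    of the sample key field is correct, 0 if it is wrong."""
    return 1 if machine_result == ground_truth else 0
```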
Step 430: training based on the sample feature data of the sample key field and the corresponding labeling result to obtain the binary classification model corresponding to the sample key field.
In this step, the execution subject may take the sample key fields belonging to the same type across the sample bill pictures as one training data set, where each training data set includes the sample feature data and labeling results of the machine identification results of sample key fields of that type. The execution subject may then train an initial neural network on the sample feature data and labeling results of each training data set, obtaining the binary classification model corresponding to that type of sample key field.
As an example, the execution subject may take the sample feature data and labeling results of the machine identification results of the amount in each sample bill picture as one training data set, and then train the initial neural network on that training data set, obtaining the binary classification model corresponding to the amount.
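The per-field training step can be sketched with a minimal logistic-regression model standing in for the initial neural network mentioned above; the toy data, function names, and hyperparameters are all illustrative assumptions:

```python
import math

def train_binary_model(samples, labels, lr=0.5, epochs=500):
    """Train one binary classification model for a single key-field type
    (e.g. 'amount') from (feature vector, 0/1 labeling result) pairs.
    Returns a function mapping a feature vector to a confidence level."""
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid output
            g = p - y                        # gradient of log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g

    def confidence(x):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))

    return confidence

# toy training data for the 'amount' field:
# [recognition prob, detection prob, equation check], labeling result
amount_model = train_binary_model(
    [[0.95, 0.90, 1.0], [0.40, 0.30, 0.0],
     [0.90, 0.85, 1.0], [0.30, 0.50, 0.0]],
    [1, 0, 1, 0])
```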
In this implementation, a different binary classification model is obtained for each key field by using the feature data of the machine identification results of that key field as training samples. This adds data other than the recognition probability and takes the correlation among the feature data into account, improving the accuracy of the binary classification model corresponding to the key field and thereby the accuracy of the confidence level of the machine identification result determined with that model.
As an optional implementation, since the binary classification models correspond to the key fields one to one, each binary classification model may further include a model identifier and a model name corresponding to its key field, with the model identifiers and model names likewise in one-to-one correspondence with the key fields; for example, the binary classification models may include an amount model, a date model, and so on. Step 220, obtaining the confidence level of the machine identification result of the key field based on the feature data associated with the machine identification result and the binary classification model corresponding to the key field, may then be implemented by the following steps:
First, the model identifier corresponding to the key field is obtained, and the binary classification model corresponding to the model identifier is called based on that identifier.
In this step, the execution subject may store each obtained binary classification model in association with its model identifier and model name. Then, after acquiring the feature data of the machine identification result of the key field, the execution subject obtains the model identifier and model name corresponding to the key field based on the key field, and calls the binary classification model corresponding to the model identifier. As an example, after acquiring the feature data of the machine recognition result of the amount, the execution subject further acquires the model identifier of the binary classification model corresponding to the amount and then calls the amount model according to that identifier.
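The association and retrieval just described can be sketched as a small registry; the identifier string and the toy averaging model are assumptions for illustration:

```python
MODEL_REGISTRY = {}  # model identifier -> (model name, model object)

def register_model(model_id, model_name, model):
    """Store a binary classification model in association with its
    model identifier and model name."""
    MODEL_REGISTRY[model_id] = (model_name, model)

def call_model(model_id):
    """Call the binary classification model corresponding to a model
    identifier, as in the first step of this implementation."""
    model_name, model = MODEL_REGISTRY[model_id]
    return model

# illustrative registration of an 'amount' model
register_model("amount-v1", "amount model",
               lambda feats: sum(feats) / len(feats))
```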
Second, the feature data associated with the machine identification result is input into the binary classification model to obtain the confidence level of the machine identification result of the key field.
In this step, after obtaining the binary classification model corresponding to the key field, the execution subject inputs the feature data associated with the machine identification result into the model; that is, the recognition probability and detection probability corresponding to the key field, the constructed data corresponding to the key field, and the like may be input into the binary classification model to obtain the confidence level of the machine identification result of the key field.
In this implementation, a model identifier and a model name are set for each binary classification model, and the model is called according to them. This improves the efficiency of model use and the classification accuracy of the binary classification models, and thereby the accuracy of the determined confidence level of the machine identification result.
With further reference to FIG. 5, a flow 500 of another embodiment of obtaining a classification model is shown. The process 500 may include the following steps:
In this step, the execution subject may group the plurality of sample bill pictures in the acquired sample bill picture set to obtain a verification picture set, where the verification picture set is used for parameter adjustment of the obtained binary classification model. The execution subject performs machine recognition, key field detection, data construction, and the like on each sample bill picture in the verification picture set to obtain verification feature data of the machine identification result of the verification key field in the sample bill picture, where the verification key field is of the same type as the sample key field. The machine identification result of the verification key field in each sample bill picture is then labeled to obtain a labeling result indicating whether the machine identification result of the verification key field is correct; the labeling result may be expressed in numerical form, for example, 1 if the machine identification result of the verification key field is correct and 0 if it is wrong.
Step 520: in response to obtaining the binary classification model corresponding to the sample key field, performing parameter adjustment on the binary classification model based on the verification feature data and labeling result of the machine identification result of the verification key field, to obtain an adjusted binary classification model.
In this step, after obtaining the binary classification model corresponding to the sample key field, the execution subject performs secondary training on the model according to the verification feature data and labeling result of the machine identification result of the verification key field to obtain the adjusted binary classification model, so that the parameters of the model are adjusted and the results output by the adjusted model are more accurate.
In this step, the execution subject may group the plurality of sample bill pictures in the acquired sample bill picture set to obtain a test picture set, where the test picture set is used for accuracy testing of the obtained binary classification model. The execution subject performs machine recognition, key field detection, data construction, and the like on each sample bill picture in the test picture set to obtain test feature data of the machine identification result of the test key field in the sample bill picture, where the test key field is of the same type as the sample key field. The machine identification result of the test key field in each sample bill picture is then labeled to obtain a labeling result indicating whether the machine identification result of the test key field is correct; the labeling result may be expressed in numerical form, for example, 1 if the machine identification result of the test key field is correct and 0 if it is wrong.
Step 540: in response to obtaining the adjusted binary classification model, testing the adjusted binary classification model based on the test feature data and labeling result of the machine identification result of the test key field, to obtain the accuracy and recall rate of the adjusted binary classification model.
In this step, after obtaining the binary classification model adjusted using the verification picture set, the execution subject tests the adjusted model according to the test feature data and labeling result of the machine identification result of the test key field, obtaining the accuracy and recall rate of the adjusted binary classification model.
The execution subject may calculate the accuracy of the machine identification result of the key field, based on the confidence levels obtained from the adjusted binary classification model, by the accuracy formula: accuracy = (number of bills actually correct among the bills whose confidence is at or above the confidence threshold) / (number of bills whose confidence is at or above the confidence threshold).
The execution subject may also calculate the recall rate of the machine identification result of the key field, based on the confidence levels obtained from the adjusted binary classification model, by the recall formula: recall = (number of bills actually correct among the bills whose confidence is at or above the confidence threshold) / (total number of bills).
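The two formulas above can be implemented directly; the toy confidence values and correctness flags in the example are illustrative:

```python
def accuracy_and_recall(confidences, correct_flags, threshold):
    """Accuracy = correct bills among those at/above the confidence
    threshold, divided by the number of bills at/above the threshold;
    recall = the same numerator divided by the total number of bills,
    matching the two formulas in the text."""
    above = [ok for c, ok in zip(confidences, correct_flags) if c >= threshold]
    accuracy = sum(above) / len(above) if above else 0.0
    recall = sum(above) / len(confidences)
    return accuracy, recall
```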
In this embodiment, parameter adjustment of the binary classification model through the verification picture set improves the classification accuracy of the model and thereby the accuracy of the determined confidence level of the machine identification result; determining the accuracy and recall rate of the adjusted binary classification model through the test picture set makes it possible to determine the degree to which the binary classification model affects the bill identification result.
As an optional implementation, the bill identification method may further include the following steps: in response to determining that the confidence level of the machine identification result of the key field does not meet the preset condition, sending the bill picture to which the machine identification result of the key field belongs to the terminal so that a user can verify the bill picture; and, in response to receiving the user verification result returned by the terminal, determining the identification result of the key field in the bill picture.
Specifically, after obtaining the confidence level of the machine identification result of the key field, the execution subject judges whether the confidence level meets the preset condition, where the preset condition is used to screen confidence levels that meet the auditing requirement; for example, the condition may be that the confidence level of the key field is greater than or equal to a confidence threshold, which may be set according to actual needs and is not specifically limited in this application. When it is determined that the confidence level of the machine identification result of the key field does not meet the preset condition, the execution subject can determine that the machine identification result of the key field may be wrong, and sends the bill picture to which the machine identification result belongs to the terminal. The terminal presents the received bill picture to the user so that the user can verify and audit it, and then sends the verification result submitted by the user to the execution subject. The execution subject receives the user verification result returned by the terminal and determines the identification result of the key field in the bill picture according to it.
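The screening logic described in this implementation can be sketched as follows, with the terminal interaction reduced to a callback; all names are illustrative:

```python
def route_result(machine_result, confidence, threshold, request_user_verification):
    """If the confidence level meets the preset condition, accept the
    machine identification result; otherwise hand the bill picture over
    for user verification via the callback (a stand-in for sending the
    picture to the terminal and receiving the verification result)."""
    if confidence >= threshold:
        return machine_result
    return request_user_verification()
```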
In this implementation, bill pictures with identification errors are screened out by setting the confidence threshold, so that they enter manual auditing, which improves the accuracy and efficiency of bill auditing.
With further reference to fig. 6, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of a bill identifying apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 6, the bill identifying apparatus 600 of the present embodiment includes: a first obtaining module 610, a classifying module 620 and a determining module 630.
The first obtaining module 610 is configured to obtain a bill picture, and obtain a machine identification result of a key field in the bill picture and feature data associated with the machine identification result;
the classification module 620 is configured to obtain the confidence level of the machine recognition result of the key field based on the feature data associated with the machine recognition result and the binary classification model corresponding to the key field, where the binary classification models correspond to the key fields one to one;
the determining module 630 is configured to judge a confidence of the machine identification result of the key field, and determine the machine identification result of the key field as the identification result of the key field in the bill picture in response to determining that the confidence of the machine identification result of the key field meets a preset condition.
In some optional manners of this embodiment, the binary classification model is implemented based on the following modules: a second acquisition module configured to acquire a sample bill picture set, the sample bill picture set including a training picture set, and to obtain sample feature data associated with the machine identification result of the sample key field in the training picture set and a labeling result of the machine identification result of the sample key field, wherein the labeling result is used to indicate whether the machine identification result is correct; and a training module configured to train based on the sample feature data of the sample key field and the corresponding labeling result to obtain the binary classification model corresponding to the sample key field.
In some optional manners of this embodiment, the binary classification model further includes a model identifier corresponding to the key field; and the classification module includes: an obtaining unit configured to obtain the model identifier corresponding to the key field and call the binary classification model corresponding to the model identifier based on that identifier; and a classification unit configured to input the feature data associated with the machine identification result into the binary classification model to obtain the confidence level of the machine identification result of the key field.
In some optional manners of this embodiment, the sample bill picture set further includes a verification picture set; and the apparatus further includes: a third acquisition module configured to acquire verification feature data of the machine identification result of the verification key field in the verification picture set and a labeling result of the machine identification result of the verification key field, wherein the verification key field is of the same type as the sample key field; and an adjusting module configured to, in response to obtaining the binary classification model corresponding to the sample key field, perform parameter adjustment on the binary classification model based on the verification feature data and labeling result of the machine identification result of the verification key field to obtain an adjusted binary classification model.
In some optional manners of this embodiment, the sample bill picture set further includes a test picture set; and the apparatus further includes: a fourth acquisition module configured to acquire test feature data of the machine identification result of the test key field in the test picture set and a labeling result of the machine identification result of the test key field, wherein the test key field is of the same type as the sample key field; and a testing module configured to, in response to obtaining the adjusted binary classification model, test the adjusted binary classification model based on the test feature data and labeling result of the machine identification result of the test key field to obtain the accuracy and recall rate of the adjusted binary classification model.
In some optional manners of this embodiment, the apparatus further includes: a sending module configured to, in response to determining that the confidence level of the machine identification result of the key field does not meet the preset condition, send the bill picture to which the machine identification result of the key field belongs to the terminal so that a user can verify the bill picture; and the determining module is further configured to determine the identification result of the key field in the bill picture in response to receiving the user verification result returned by the terminal.
The bill identification apparatus provided by the embodiment of the disclosure acquires a bill picture, obtains the machine identification result of the key field in the bill picture and the feature data associated with that result, obtains the confidence level of the machine identification result of the key field based on the feature data associated with the machine identification result and the binary classification model corresponding to the key field (the binary classification models being in one-to-one correspondence with the key fields), and finally judges the confidence level of the machine identification result of the key field, determining the machine identification result of the key field as the identification result of the key field in the bill picture in response to determining that the confidence level meets the preset condition. In this way the confidence level of the machine identification result is related to the feature data of the machine identification result, which strengthens the association between the confidence level and each item of feature data and adds data other than the recognition probability, improving the accuracy of the confidence level of the machine identification result. Determining the bill identification result according to this confidence level improves the accuracy of the bill identification result. Meanwhile, on the basis of guaranteeing the accuracy of the machine identification result, the reduction in recall rate that accompanies ever-higher accuracy requirements is mitigated, so that fewer correct bills are lost and both the accuracy and the comprehensiveness of the bill identification result are improved.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device according to the bill identifying method of the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to execute the bill identification method provided by the present application. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the bill identification method provided herein.
The memory 702, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the ticket recognition method in the embodiments of the present application (e.g., the first acquiring module 610, the classifying module 620, and the determining module 630 shown in fig. 6). The processor 701 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 702, that is, implements the ticket recognition method in the above-described method embodiment.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for data push, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 702 may optionally include memory located remotely from processor 701, which may be connected to a data-pushing electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the ticket recognition method may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the data-pushed electronic apparatus, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output devices 704 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the present application, a bill picture is acquired; the machine identification result of a key field in the bill picture and the feature data associated with that result are obtained; and the confidence of the machine identification result of the key field is obtained based on the associated feature data and the binary classification model corresponding to the key field, the binary classification models being in one-to-one correspondence with the key fields. Finally, the confidence is evaluated, and in response to determining that the confidence meets a preset condition, the machine identification result of the key field is determined as the identification result of the key field in the bill picture. Because the confidence is derived from feature data associated with the machine identification result, rather than from the recognition probability alone, the correlation between the confidence and the recognition result is strengthened, and the accuracy of the confidence is improved. Determining the bill identification result according to this confidence therefore improves the accuracy of the bill identification result. At the same time, while the accuracy of the identification result is guaranteed and continuously improved, the drop in recall that would otherwise accompany stricter acceptance is avoided, reducing the loss of correctly identified bills and improving both the accuracy and the comprehensiveness of the bill identification result.
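The decision flow summarized above can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the field names, feature values, model weights, and the 0.9 threshold are all assumptions, and the per-field binary classification models are stood in for by simple logistic models.

```python
import math

# Hypothetical "preset condition" on the confidence of a machine result.
CONFIDENCE_THRESHOLD = 0.9

# One binary classification model per key field (one-to-one correspondence),
# here represented as (weights, bias) pairs of a logistic model. The field
# names and parameter values are illustrative only.
FIELD_MODELS = {
    "invoice_no": ([2.0, 1.5], -1.0),
    "amount":     ([1.8, 2.2], -1.2),
}

def confidence(field, features):
    """Score the feature data associated with one machine identification result."""
    weights, bias = FIELD_MODELS[field]
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # probability that the result is correct

def decide(field, machine_result, features):
    """Accept the machine result when confident; otherwise defer to a user."""
    c = confidence(field, features)
    if c >= CONFIDENCE_THRESHOLD:
        return machine_result, c   # determined as the field's identification result
    return None, c                 # bill picture routed to a terminal for review
```

With strong feature values such as `[2.0, 2.0]` the confidence exceeds the threshold and the machine result is accepted automatically; with weak features the result is withheld and the bill picture would be sent for manual verification.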
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; this is not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (14)
1. A bill identification method, comprising:
acquiring a bill picture, and acquiring a machine identification result of a key field in the bill picture and feature data associated with the machine identification result;
obtaining a confidence of the machine identification result of the key field based on the feature data associated with the machine identification result and a binary classification model corresponding to the key field, wherein binary classification models are in one-to-one correspondence with key fields;
and determining the machine identification result of the key field as the identification result of the key field in the bill picture in response to determining that the confidence of the machine identification result of the key field meets a preset condition.
2. The method of claim 1, wherein the binary classification model is obtained by:
acquiring a sample bill picture set, wherein the sample bill picture set comprises a training picture set;
obtaining sample feature data associated with a machine identification result of a sample key field in the training picture set and a labeling result of the machine identification result of the sample key field, wherein the labeling result of the machine identification result of the sample key field indicates whether the machine identification result is correct;
and training based on the sample feature data of the sample key field and the corresponding labeling result to obtain a binary classification model corresponding to the sample key field.
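A minimal sketch of the training step in claim 2. The patent does not specify the classifier, so logistic regression is assumed here; the learning rate, epoch count, and toy feature data are likewise illustrative.

```python
import math

def train_binary_classifier(samples, labels, lr=0.5, epochs=300):
    """Fit a tiny logistic-regression model mapping the sample feature data of
    one key field to the probability that its machine result is correct.

    labels: 1 where the labeling result marks the machine result correct, else 0.
    """
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss with respect to the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, x):
    """Confidence that the machine result behind feature vector x is correct."""
    w, b = model
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
```

Trained on feature vectors of results labeled correct (high values) and incorrect (low values), the model then scores new results accordingly; one such model would be trained per sample key field.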
3. The method of claim 1 or 2, wherein the binary classification model further has a model identifier corresponding to the key field; and obtaining the confidence of the machine identification result of the key field based on the feature data associated with the machine identification result and the binary classification model corresponding to the key field comprises:
obtaining the model identifier corresponding to the key field, and calling the binary classification model corresponding to the model identifier;
and inputting the feature data associated with the machine identification result into the binary classification model to obtain the confidence of the machine identification result of the key field.
4. The method of claim 2, wherein the sample bill picture set further comprises a verification picture set; and the method further comprises:
obtaining verification feature data of a machine identification result of a verification key field in the verification picture set and a labeling result of the machine identification result of the verification key field, wherein the verification key field is the same as the sample key field;
and in response to obtaining the binary classification model corresponding to the sample key field, performing parameter adjustment on the binary classification model based on the verification feature data and the labeling result of the machine identification result of the verification key field to obtain an adjusted binary classification model.
5. The method of claim 4, wherein the sample bill picture set further comprises a test picture set; and the method further comprises:
obtaining test feature data of a machine identification result of a test key field in the test picture set and a labeling result of the machine identification result of the test key field, wherein the test key field is the same as the sample key field;
and in response to obtaining the adjusted binary classification model, testing the adjusted binary classification model based on the test feature data and the labeling result of the machine identification result of the test key field to obtain the accuracy and recall rate of the adjusted binary classification model.
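The accuracy and recall measurement of claim 5 can be illustrated as below. This is a sketch under assumptions the claim does not state: "accuracy" is read as precision, the acceptance threshold is hypothetical, and a true positive is taken to mean "accepted by the model and labeled correct".

```python
def precision_recall(confidences, labels, threshold=0.9):
    """Evaluate an adjusted binary classification model on the test picture set.

    confidences: model outputs for the machine results of the test key field
    labels: 1 where the labeling result marks the machine result correct, else 0
    A machine result counts as accepted when its confidence meets the threshold.
    """
    tp = sum(1 for c, y in zip(confidences, labels) if c >= threshold and y == 1)
    fp = sum(1 for c, y in zip(confidences, labels) if c >= threshold and y == 0)
    fn = sum(1 for c, y in zip(confidences, labels) if c < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Raising the threshold trades recall for precision; the test-set numbers make that trade-off explicit before the model is deployed.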
6. The method of claim 1, wherein the method further comprises:
in response to determining that the confidence of the machine identification result of the key field does not meet the preset condition, sending the bill picture to which the machine identification result of the key field belongs to a terminal for a user to verify the bill picture;
and determining the identification result of the key field in the bill picture in response to receiving a user verification result returned by the terminal.
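Claim 6's fallback to user verification might be wired up as follows; a hypothetical sketch in which `ask_user` stands in for sending the bill picture to a terminal and awaiting the returned verification result.

```python
def resolve_field(field, machine_result, conf, ask_user, threshold=0.9):
    """Return the final identification result for one key field.

    Accept the machine result when the confidence meets the preset condition;
    otherwise defer to the user verification result returned by the terminal.
    """
    if conf >= threshold:
        return machine_result
    return ask_user(field)  # e.g. display the bill picture for manual entry

# Stand-in for the terminal round trip: the user corrects the field value.
def fake_terminal(field):
    return {"amount": "128.00"}[field]
```

With a high confidence the machine result passes through untouched; with a low one, the user's answer replaces it, so the final output is always a verified value.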
7. A bill identification apparatus, comprising:
a first acquisition module configured to acquire a bill picture, and acquire a machine identification result of a key field in the bill picture and feature data associated with the machine identification result;
a classification module configured to obtain a confidence of the machine identification result of the key field based on the feature data associated with the machine identification result and a binary classification model corresponding to the key field, wherein binary classification models are in one-to-one correspondence with key fields;
and a determining module configured to evaluate the confidence of the machine identification result of the key field, and determine the machine identification result of the key field as the identification result of the key field in the bill picture in response to determining that the confidence of the machine identification result of the key field meets a preset condition.
8. The apparatus of claim 7, wherein the binary classification model is obtained by:
a second acquisition module configured to acquire a sample bill picture set, the sample bill picture set comprising a training picture set, and obtain sample feature data associated with a machine identification result of a sample key field in the training picture set and a labeling result of the machine identification result of the sample key field, wherein the labeling result of the machine identification result of the sample key field indicates whether the machine identification result is correct;
and a training module configured to perform training based on the sample feature data of the sample key field and the corresponding labeling result to obtain a binary classification model corresponding to the sample key field.
9. The apparatus of claim 7 or 8, wherein the binary classification model further has a model identifier corresponding to the key field; and the classification module comprises:
an obtaining unit configured to obtain the model identifier corresponding to the key field, and call the binary classification model corresponding to the model identifier;
and a classification unit configured to input the feature data associated with the machine identification result into the binary classification model to obtain the confidence of the machine identification result of the key field.
10. The apparatus of claim 8, wherein the sample bill picture set further comprises a verification picture set; and the apparatus further comprises:
a third acquisition module configured to obtain verification feature data of a machine identification result of a verification key field in the verification picture set and a labeling result of the machine identification result of the verification key field, wherein the verification key field is the same as the sample key field;
and an adjusting module configured to, in response to obtaining the binary classification model corresponding to the sample key field, perform parameter adjustment on the binary classification model based on the verification feature data and the labeling result of the machine identification result of the verification key field to obtain an adjusted binary classification model.
11. The apparatus of claim 8, wherein the sample bill picture set further comprises a test picture set; and the apparatus further comprises:
a fourth acquisition module configured to obtain test feature data of a machine identification result of a test key field in the test picture set and a labeling result of the machine identification result of the test key field, wherein the test key field is the same as the sample key field;
and a testing module configured to, in response to obtaining the adjusted binary classification model, test the adjusted binary classification model based on the test feature data and the labeling result of the machine identification result of the test key field to obtain the accuracy and recall rate of the adjusted binary classification model.
12. The apparatus of claim 7, wherein the apparatus further comprises:
a sending module configured to, in response to determining that the confidence of the machine identification result of the key field does not meet the preset condition, send the bill picture to which the machine identification result of the key field belongs to a terminal for a user to verify the bill picture;
wherein the determining module is further configured to determine the identification result of the key field in the bill picture in response to receiving a user verification result returned by the terminal.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, the instructions, when executed by the at least one processor, enabling the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011330551.6A CN112329708B (en) | 2020-11-24 | 2020-11-24 | Bill identification method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112329708A true CN112329708A (en) | 2021-02-05 |
CN112329708B CN112329708B (en) | 2024-08-06 |
Family
ID=74322374
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011330551.6A Active CN112329708B (en) | 2020-11-24 | 2020-11-24 | Bill identification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112329708B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019057311A (en) * | 2018-11-28 | 2019-04-11 | 株式会社東芝 | Ledger sheet information recognition device and ledger sheet information recognition method |
CN109800761A (en) * | 2019-01-25 | 2019-05-24 | 厦门商集网络科技有限责任公司 | Method and terminal based on deep learning model creation paper document structural data |
WO2019174130A1 (en) * | 2018-03-14 | 2019-09-19 | 平安科技(深圳)有限公司 | Bill recognition method, server, and computer readable storage medium |
US20200126545A1 (en) * | 2018-10-17 | 2020-04-23 | Fmr Llc | Automated Execution of Computer Software Based Upon Determined Empathy of a Communication Participant |
WO2020155763A1 (en) * | 2019-01-28 | 2020-08-06 | 平安科技(深圳)有限公司 | Ocr recognition method and electronic device thereof |
CN111597958A (en) * | 2020-05-12 | 2020-08-28 | 西安网算数据科技有限公司 | Highly automated bill classification method and system |
CN111626279A (en) * | 2019-10-15 | 2020-09-04 | 西安网算数据科技有限公司 | Negative sample labeling training method and highly-automated bill identification method |
Non-Patent Citations (2)
Title |
---|
WU Jianhui; ZHANG Guoyun; YANG Kuntao: "Confidence-based complementary ensemble of multiple classifiers for handwritten digit recognition", Computer Engineering and Applications, no. 30, 21 October 2007 (2007-10-21) * |
XU Yamei; LU Chaoyang; LI Jing; YAO Chao: "Multi-information fusion path optimization method for handwritten Uyghur character segmentation", Journal of Xi'an Jiaotong University, no. 08, 21 May 2013 (2013-05-21) * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112990035A (en) * | 2021-03-23 | 2021-06-18 | 北京百度网讯科技有限公司 | Text recognition method, device, equipment and storage medium |
CN112990035B (en) * | 2021-03-23 | 2023-10-31 | 北京百度网讯科技有限公司 | Text recognition method, device, equipment and storage medium |
CN117688424A (en) * | 2023-11-24 | 2024-03-12 | 华南师范大学 | Method, system, device and medium for classifying teaching data generated by retrieval enhancement |
Also Published As
Publication number | Publication date |
---|---|
CN112329708B (en) | 2024-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112509690B (en) | Method, apparatus, device and storage medium for controlling quality | |
US20230119593A1 (en) | Method and apparatus for training facial feature extraction model, method and apparatus for extracting facial features, device, and storage medium | |
CN111598164B (en) | Method, device, electronic equipment and storage medium for identifying attribute of target object | |
CN111209977A (en) | Method, apparatus, device and medium for training and using classification model | |
CN111611990B (en) | Method and device for identifying tables in images | |
CN112507090B (en) | Method, apparatus, device and storage medium for outputting information | |
CN111753744B (en) | Method, apparatus, device and readable storage medium for bill image classification | |
CN111460384B (en) | Policy evaluation method, device and equipment | |
CN114428677B (en) | Task processing method, processing device, electronic equipment and storage medium | |
CN111460292B (en) | Model evaluation method, device, equipment and medium | |
US20220027854A1 (en) | Data processing method and apparatus, electronic device and storage medium | |
CN112329708B (en) | Bill identification method and device | |
CN113627361B (en) | Training method and device for face recognition model and computer program product | |
CN114861886A (en) | Quantification method and device of neural network model | |
CN112380392A (en) | Method, apparatus, electronic device and readable storage medium for classifying video | |
CN110852780A (en) | Data analysis method, device, equipment and computer storage medium | |
CN111782785B (en) | Automatic question and answer method, device, equipment and storage medium | |
CN111783427B (en) | Method, device, equipment and storage medium for training model and outputting information | |
CN111241225B (en) | Method, device, equipment and storage medium for judging change of resident area | |
CN113379059A (en) | Model training method for quantum data classification and quantum data classification method | |
CN111967304A (en) | Method and device for acquiring article information based on edge calculation and settlement table | |
CN112115334B (en) | Method, device, equipment and storage medium for distinguishing network community hot content | |
CN110889392B (en) | Method and device for processing face image | |
CN111709480A (en) | Method and device for identifying image category | |
CN111552829A (en) | Method and apparatus for analyzing image material |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |