CN112069900A - Bill character recognition method and system based on convolutional neural network - Google Patents

Bill character recognition method and system based on convolutional neural network

Info

Publication number
CN112069900A
CN112069900A CN202010780781.6A
Authority
CN
China
Prior art keywords
bill
neural network
convolutional neural
characters
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010780781.6A
Other languages
Chinese (zh)
Inventor
徐江
梁昊
张金龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changshu Institute of Technology
Original Assignee
Changshu Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changshu Institute of Technology filed Critical Changshu Institute of Technology
Priority to CN202010780781.6A priority Critical patent/CN112069900A/en
Publication of CN112069900A publication Critical patent/CN112069900A/en
Withdrawn legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G06V30/413 Classification of content, e.g. text, photographs or tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Character Input (AREA)
  • Image Analysis (AREA)
  • Character Discrimination (AREA)

Abstract

The invention discloses a bill character recognition method and system based on a convolutional neural network. Step 1, image preprocessing: for the RGB image of a trade list, the OpenCV library available with Python is used to convert the three channels into a single channel through image grayscaling, and a binarization technique then sets the gray value of every pixel of the picture matrix to 0 or 255. Step 2, field location and character segmentation: the coordinate position of each key field of the bill is determined and cropped; every form has a fixed template, so once the position of a fixed identifier is found on the picture, the coordinates of the key fields are obtained through relative positioning. Step 3, training with a convolutional neural network. Using a convolutional neural network, the method implements a key-field recognition system for Siemens trade invoices, effectively eases the cumbersome handling of traditional paper bills, and can automatically extract and recognize the key fields of a bill.

Description

Bill character recognition method and system based on convolutional neural network
Technical Field
The invention relates to the field of neural networks, in particular to a method and a system for identifying bill characters based on a convolutional neural network.
Background
OCR (Optical Character Recognition) technology is the basis of bill character recognition: it converts the optical characters in a picture into editable text. The concept of OCR was first proposed by a German scientist in the 1930s, and the earliest OCR character recognition software could recognize 120 English letters in one second. China began OCR research later, studying the recognition of digits, English letters and symbols only in the 1970s.
Over the years, as OCR research has advanced, many domestic companies have begun studying character recognition for fields such as taxation and finance, and several professional companies have successively developed targeted, fast products. Tencent Cloud OCR uses deep learning to locate characters automatically and convert picture text into editable text; Baidu OCR likewise uses a deep learning network to achieve similar functionality, and both deliver considerable recognition quality. The Hanwang automatic bank bill recognition system can automatically extract key information and efficiently recognize handwritten or printed digits. A rapid ticket scanning and recognition system developed by Shangtong Science and Technology offers strong extensibility, combines practical application with the TH-OCR technology, and can even recognize forms.
Countries such as the United States, Japan and Russia have also achieved good results in OCR development, for example Russia's ABBYY software, the CENPARMI center at Concordia University in Canada, and laboratories at MIT. ABBYY's technology reportedly improves recognition speed by up to 40 percent for Asian characters and 25 percent for Western letters, supports the recognition of 179 languages, and holds a world-leading position.
Neural network technology is an algorithmic model, loosely modeled on the behavior of biological neural networks, that lets a computer learn by "observing" data. It processes information by adjusting the connection weights between nodes. Many kinds of neural networks have been studied, such as the Deep Neural Network (DNN), the Convolutional Neural Network (CNN), and the Deep Belief Network (DBN), and they are applied in different fields: image processing, speech recognition, and so on.
The convolutional neural network has good properties: it can identify the important features of a picture and has greatly improved the performance of machine vision. This work focuses on convolutional neural networks, so the classic networks are briefly enumerated:
(1)LeNet5
LeNet5 is one of the earliest convolutional neural networks and a classic model for handwriting recognition; it is compact, with only 7 layers. It extracts features through operations such as convolution and pooling, which reduces the amount of computation, and finally classifies with a fully connected layer, providing a blueprint for later large-scale neural networks.
(2)AlexNet
AlexNet appeared in 2012. Building on the ideas of LeNet, it stacks 5 convolutional layers and 3 fully connected layers, successfully uses ReLU as the activation function, and trains in parallel on multiple GPUs, improving efficiency. It also introduced Dropout, which randomly ignores neurons, and data augmentation to prevent the model from overfitting during training.
(3) GoogLeNet
GoogLeNet is a neural network developed by Google; it is similar in spirit to AlexNet but smaller in parameter count and deeper. It creatively uses 1x1 convolution kernels to change channel dimensionality and convolves with kernels of several scales in parallel, again reducing parameters and computation. In addition, auxiliary softmax classifiers are added to mitigate problems such as vanishing gradients.
A: Research on deep learning-based character recognition technology [J]. Automation Technology and Application, 2018.
This work improves the LeNet-5 network and completes character recognition after training on ten thousand samples of the MNIST database for 80 epochs.
B: He Xilin. Research on handwritten character recognition based on deep learning and its implementation [D]. Zhongshan University, 2015.
For the handwritten character recognition problem, convolutional neural networks of different depths are constructed with deep learning to study digit recognition; a network structure for recognition is finally obtained, and a character recognition system is implemented based on it.
The models trained in A and B are not well targeted, and their applicability to Siemens invoices is limited. In this system, a neural network is built with deep learning and trained specifically on Siemens invoices, symbol classification is added, and the resulting model finally realizes a bill character recognition system.
Disclosure of Invention
1. Objects of the invention
To solve the problem of automatically acquiring and recognizing the key fields of bills, the invention provides a bill character recognition method and system based on a convolutional neural network.
2. The technical scheme adopted by the invention
The invention discloses a bill character recognition method based on a convolutional neural network, which comprises the following steps:
step 1, image preprocessing
For the RGB (red, green, blue) image of a trade list, the OpenCV library available with Python is used to convert the three channels into a single channel through image grayscaling; a binarization technique then sets the gray value of each pixel of the picture matrix to 0 or 255;
Step 2, field location and character segmentation: determine the coordinate position of each key field of the bill and crop it;
every form has a fixed template, so once the position of a fixed identifier is found on the picture, the coordinates of the key fields are obtained through relative positioning;
Step 2.1, first crop the image and compute a relative factor r; after obtaining the minimum bounding rectangle location maxLoc of the template, compute the coordinates of the template's upper-left and lower-right corners from the relative factor, with the formulas:
X0,Y0=(int(maxLoc[0]*r),int(maxLoc[1]*r));
X1,Y1=(int((maxLoc[0]+tW)*r),int((maxLoc[1]+tH)*r));
Step 2.2, after the template coordinates are extracted, crop the required fields according to their relative positions; using the keyword as the identifier of the different key fields, the calculation formulas are:
ref=reff[a[1]+85:b[1]+100,a[0]+375:b[0]+535]
ref=reff[a[1]+655:b[1]+650,a[0]+583:b[0]+500]
ref=reff[a[1]+680:b[1]+675,a[0]+583:b[0]+500]
Step 2.3, the character segmentation part uses an OpenCV library function to obtain a list containing, for each single character, the upper-left corner coordinates (x, y), the width w and the height h, from which the coordinates of the single character are computed and the crop is completed;
Step 3, training with a convolutional neural network
When using the model, after a picture is input the model returns a label through prediction, and the corresponding classification symbol can be returned according to the label set during training; the picture is treated as a matrix, and an approximation of the initial features is obtained after multiple layers of convolution and nonlinear activation functions;
the prediction model mainly stores pictures and their corresponding labels; the label size is adjusted to the number of classes, with N classes currently set, so each label is a list of N elements. When a "0" character picture is input, the first bit of the label becomes 1 and the rest are 0; when a "1" character picture is input, the second bit becomes 1 and the rest are 0; and so on.
Furthermore, the Siemens logo present on every bill is selected as the template for template matching.
Furthermore, resize and binarization operations are performed on the source data set, modifying the originally smooth-edged data set images into a style better matching invoice characters; additionally, characters on the existing tickets are automatically cropped to serve as a data set, and symbol classification is added.
Furthermore, the semantic segmentation adopts a fuzzy-rule-based method, using the skeleton information of the characters to describe their topological structure.
Furthermore, the semantic segmentation uses the contour curve of the character, finding division points along the contour curve, combined with a projection method.
Furthermore, the semantic segmentation generates and selects among different segmentation methods based on recognition: the characters are cut to produce different cutting schemes, and the cut segments are finally classified and recognized to select the scheme with the highest confidence.
Further, in step 3 the size of the label is adjusted according to the number of classes; 19 classes are currently set, so the label is [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], a list containing 19 elements.
The invention further discloses a bill character recognition system based on a convolutional neural network, comprising a memory and a processor; the memory stores the above bill character recognition method and the processor executes it.
3. Advantageous effects adopted by the present invention
Using a convolutional neural network, the method implements a key-field recognition system for Siemens trade invoices, effectively eases the cumbersome handling of traditional paper bills, and can automatically extract and recognize the key fields of a bill.
Drawings
FIG. 1 is an overall structure diagram;
FIG. 2 is a diagram of key field locations;
FIG. 3 is a schematic diagram of detecting pixel points with projection cutting;
FIG. 4 is a schematic diagram of the neural network architecture;
FIG. 5 is the Siemens logo template;
FIG. 6 is a schematic diagram of acquiring the template position;
FIG. 7 illustrates relative positioning;
FIG. 8 is a schematic diagram of data set classification;
FIG. 9 is a schematic diagram of symbol classification;
FIG. 10 is a schematic diagram of the prediction model;
FIG. 11 shows recognition of the invoice number;
FIG. 12 shows recognition of the net weight;
FIG. 13 shows recognition of the gross weight;
FIG. 14 shows recognition of a single character.
Detailed Description
The technical solutions in the examples of the present invention are clearly and completely described below with reference to the drawings in the examples of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without inventive step, are within the scope of the present invention.
The present invention will be described in further detail with reference to the accompanying drawings.
In order to automatically acquire the key fields of a ticket, the coordinate positions of these parts are first determined and cropped, as shown in FIGS. 1-2.
1. Field location
The locating part obtains the template coordinates by template matching and crops the key fields by relative positioning. The template matching method computes the overall similarity between characters to match them, and can be implemented with a simple template matching algorithm.
2. Character segmentation
As shown in FIG. 3, character segmentation converts the extracted image data into data usable for feature extraction; the quality of the segmentation directly affects the subsequent recognition, making it an important part of bill character recognition technology. Many methods for segmenting digit strings currently exist, including:
(1) Fuzzy-rule-based methods, which use the skeleton information of the character to describe its topological structure.
(2) Contour-feature-based methods, which analyze the contour curve of the character and find division points along it, often combined with a projection method.
(3) Recognition-based methods, which generate and select among different segmentation schemes: the characters are cut to produce different cutting schemes, and the cut segments are finally classified and recognized so that the scheme with the highest confidence is applied.
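The projection method mentioned in (2) and illustrated in FIG. 3 can be sketched in plain NumPy. This is an illustrative implementation, not the patent's own code: it sums the ink pixels in each column of a binarized image and cuts the string wherever the vertical projection drops to zero.

```python
import numpy as np

def projection_segments(binary_img):
    """Return (start, end) column ranges of characters in a binary image.

    binary_img: 2D array where ink pixels are 1 and background is 0.
    A column whose projection (sum of ink pixels) is zero separates characters.
    """
    projection = binary_img.sum(axis=0)  # vertical projection profile
    segments, start = [], None
    for col, count in enumerate(projection):
        if count > 0 and start is None:
            start = col                    # entering a character region
        elif count == 0 and start is not None:
            segments.append((start, col))  # leaving a character region
            start = None
    if start is not None:                  # character touches the right edge
        segments.append((start, binary_img.shape[1]))
    return segments

# Two "characters" separated by one blank column.
img = np.array([[1, 1, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 0, 1]])
print(projection_segments(img))  # [(0, 2), (3, 4)]
```

Real invoice images would first pass through the grayscale and binarization of step 1; touching characters would need one of the other two methods.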
3. Training neural network model
Referring to FIG. 4, the present invention trains a prediction model with a convolutional neural network. Image processing in computer vision often runs into the problem of excessive data volume: for example, a three-channel picture of size 64 x 64 already contains 64 x 64 x 3 = 12288 values, and pictures from higher-resolution devices such as cameras are larger still, making direct computation difficult. To address this, the computer scientist Yann LeCun and colleagues proposed the LeNet5 network in 1998, one of the earliest convolutional neural networks, which adds convolution and pooling operations to the structure of the conventional neural network to imitate how the human brain perceives and to learn the features of an image. When people perceive the outside world, nerves transmit images to the brain; feature information is concentrated layer by layer through the neurons, and the condensed feature information finally reaches the brain for understanding.
A convolutional neural network is a layered structure comprising an input layer, convolutional layers, pooling layers, an output layer, and so on. Put simply, the picture is treated as a matrix, and after multiple layers of convolution and nonlinear activation functions, an approximation of the initial features is obtained.
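As a toy illustration of this principle (the kernel, sizes and single channel are assumptions for demonstration, not the network the patent trains), a picture-as-matrix can be pushed through one convolution, a ReLU nonlinearity and a 2x2 max pooling:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)  # nonlinear activation

def maxpool2x2(x):
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A 6x6 "picture as a matrix" through conv -> ReLU -> pool.
img = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[-1.0, -1.0],          # responds to top-to-bottom
                   [1.0, 1.0]])           # brightness increase
features = maxpool2x2(relu(conv2d(img, kernel)))
print(features.shape)  # (2, 2)
```

Each stage shrinks the data while keeping the responses to the kernel, which is exactly the "feature concentration" described above.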
Example 1
1. Image pre-processing
Although the scanned trade list looks like a black-and-white picture, it is actually an RGB image containing the information of three channels: red (R), green (G) and blue (B). Processing all three color channels involves a huge amount of computation. Therefore, image grayscaling is introduced first, using the OpenCV library available with Python, to turn the three channels into a single channel. A binarization technique is then applied to set the gray value of each pixel in the picture matrix to 0 (black) or 255 (white). The contrast of the picture thus becomes more pronounced: only black and white remain.
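A minimal sketch of this preprocessing (pure NumPy, so it runs without OpenCV; with OpenCV installed the same result comes from cv2.cvtColor and cv2.threshold; the threshold value 127 is an illustrative assumption):

```python
import numpy as np

def preprocess(rgb):
    """Grayscale an H x W x 3 RGB image, then binarize every pixel to 0 or 255.

    Equivalent OpenCV calls: cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) followed by
    cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY).
    """
    # Standard luminance weights for the R, G, B channels.
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    binary = np.where(gray > 127, 255, 0).astype(np.uint8)
    return binary

# A 1x2 image: one near-white pixel and one near-black pixel.
img = np.array([[[250, 250, 250], [10, 10, 10]]], dtype=np.uint8)
print(preprocess(img))  # the white pixel becomes 255, the black one 0
```

In practice OpenCV can also pick the threshold automatically (e.g. Otsu's method) rather than fixing it at 127.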
2. Field location and character segmentation
According to the analysis, every form has a fixed template, that is, the relative position of each part is fixed; if the position of some fixed mark can be found on the picture, the coordinates of the key fields can be obtained through relative positioning. Therefore, the Siemens logo, present on every bill, is selected as the template for template matching.
As shown in FIGS. 5-7, before template matching the image is first cropped and a relative factor r is calculated; after the minimum bounding rectangle location maxLoc of the template is obtained, the coordinates of the template's upper-left and lower-right corners are calculated from the relative factor, with the formulas:
X0,Y0=(int(maxLoc[0]*r),int(maxLoc[1]*r))
X1,Y1=(int((maxLoc[0]+tW)*r),int((maxLoc[1]+tH)*r))
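These two formulas translate directly into Python. The helper name and the sample numbers below are illustrative; maxLoc is the matched upper-left corner on the resized search image, (tW, tH) the template size, and r the relative scaling factor:

```python
def template_corners(maxLoc, tW, tH, r):
    """Scale the matched template corners back to the original image.

    maxLoc: (x, y) of the best match on the downscaled search image,
    tW, tH: template width and height, r: ratio original / resized.
    """
    x0, y0 = int(maxLoc[0] * r), int(maxLoc[1] * r)
    x1, y1 = int((maxLoc[0] + tW) * r), int((maxLoc[1] + tH) * r)
    return (x0, y0), (x1, y1)

# A match at (40, 25) with a 30x10 template and a relative factor of 2.0.
print(template_corners((40, 25), 30, 10, 2.0))  # ((80, 50), (140, 70))
```

With OpenCV, maxLoc would come from cv2.minMaxLoc applied to the result of cv2.matchTemplate.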
after the template coordinates are extracted, the required fields can be intercepted according to the relative positions. Taking the key as the identifier of different key fields, the calculation formula is as follows:
ref=reff[a[1]+85:b[1]+100,a[0]+375:b[0]+535]
ref=reff[a[1]+655:b[1]+650,a[0]+583:b[0]+500]
ref=reff[a[1]+680:b[1]+675,a[0]+583:b[0]+500]
and the character segmentation part adopts an opencv library function to obtain a list which comprises the coordinates (x, y) of the upper left corner of a single character, the width w and the height h, and the coordinates of the single character are calculated and intercepted.
3. Building network model
(1) Collecting a data set
On one hand, the project modifies an open data set from the Internet, performing resize and binarization operations so that the originally smooth-edged data set images better match invoice characters. On the other hand, characters on existing invoices are automatically cropped to serve as a data set, and two symbol classifications are additionally created, as shown in FIGS. 8-9.
(3) Training model
As shown in FIG. 10, the prediction model mainly stores pictures and their corresponding labels. The label size is adjusted to the number of classes; 19 classes are currently set, so each label is the 19-element list [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]. When a "0" character picture is input, the first bit of the label becomes 1 and the rest remain 0; and so on.
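The label construction can be sketched as a one-hot encoding. The patent does not enumerate the 19 classes, so the class list below (ten digits plus nine assumed letter and symbol classes) is purely hypothetical:

```python
# Hypothetical class set: 10 digits plus 9 assumed letter/symbol classes = 19.
CLASSES = list("0123456789") + list("NGWKTO.,:")

def one_hot(char, classes=CLASSES):
    """Return the 19-element label list with a 1 at the character's index."""
    label = [0] * len(classes)
    label[classes.index(char)] = 1
    return label

print(one_hot("0"))  # first bit 1, the rest 0
print(one_hot("1"))  # second bit 1, the rest 0
```

During training each cropped character picture would be paired with such a label, and prediction returns the index of the 1 bit, which maps back to the classification symbol.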
As shown in FIGS. 11-14, when using the model, after a picture is input the model returns a label through prediction, and the corresponding classification symbol can be returned according to the label set during training.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A bill character recognition method based on a convolutional neural network is characterized in that:
step 1, image preprocessing
For the RGB (red, green, blue) image of a trade list, the OpenCV library available with Python is used to convert the three channels into a single channel through image grayscaling; a binarization technique then sets the gray value of each pixel of the picture matrix to 0 or 255;
Step 2, field location and character segmentation: determine the coordinate position of each key field of the bill and crop it;
every form has a fixed template, so once the position of a fixed identifier is found on the picture, the coordinates of the key fields are obtained through relative positioning;
Step 2.1, first crop the image and compute a relative factor r; after obtaining the minimum bounding rectangle location maxLoc of the template, compute the coordinates of the template's upper-left and lower-right corners from the relative factor, with the formulas:
X0,Y0=(int(maxLoc[0]*r),int(maxLoc[1]*r));
X1,Y1=(int((maxLoc[0]+tW)*r),int((maxLoc[1]+tH)*r));
Step 2.2, after the template coordinates are extracted, crop the required fields according to their relative positions; using the keyword as the identifier of the different key fields, the calculation formulas are:
ref=reff[a[1]+85:b[1]+100,a[0]+375:b[0]+535]
ref=reff[a[1]+655:b[1]+650,a[0]+583:b[0]+500]
ref=reff[a[1]+680:b[1]+675,a[0]+583:b[0]+500]
Step 2.3, the character segmentation part uses an OpenCV library function to obtain a list containing, for each single character, the upper-left corner coordinates (x, y), the width w and the height h, from which the coordinates of the single character are computed and the crop is completed;
Step 3, training with a convolutional neural network
When using the model, after a picture is input the model returns a label through prediction, and the corresponding classification symbol can be returned according to the label set during training; the picture is treated as a matrix, and an approximation of the initial features is obtained after multiple layers of convolution and nonlinear activation functions;
the prediction model mainly stores pictures and their corresponding labels; the label size is adjusted to the number of classes, with N classes currently set, so each label is a list of N elements; when a "0" character picture is input, the first bit of the label becomes 1 and the rest are 0; when a "1" character picture is input, the second bit becomes 1 and the rest are 0; and so on.
2. The method for recognizing bill characters based on a convolutional neural network as claimed in claim 1, wherein: the Siemens logo present on every bill is selected as the template for template matching.
3. The method for recognizing bill characters based on a convolutional neural network as claimed in claim 2, wherein: resize and binarization operations are performed on the source data set, modifying the originally smooth-edged data set images into a style better matching invoice characters; additionally, characters on the existing tickets are automatically cropped to serve as a data set, and symbol classification is added.
4. The method for recognizing bill characters based on a convolutional neural network as claimed in claim 1, wherein: the semantic segmentation adopts a fuzzy-rule-based method, using the skeleton information of the characters to describe their topological structure.
5. The method for recognizing bill characters based on a convolutional neural network as claimed in claim 1, wherein: the semantic segmentation uses the contour curve of the character, finding division points along the contour curve, combined with a projection method.
6. The method for recognizing bill characters based on a convolutional neural network as claimed in claim 1, wherein: the semantic segmentation generates and selects among different segmentation methods based on recognition: the characters are cut to produce different cutting schemes, and the cut segments are finally classified and recognized to select the scheme with the highest confidence.
7. The method for recognizing bill characters based on a convolutional neural network as claimed in claim 1, wherein in step 3 the size of the label is adjusted according to the number of classes; 19 classes are currently set, so the label is [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], a list containing 19 elements.
8. A bill character recognition system based on a convolutional neural network, characterized in that: it comprises a memory and a processor, the memory storing and the processor executing the bill character recognition method according to any one of claims 1 to 7.
CN202010780781.6A 2020-08-06 2020-08-06 Bill character recognition method and system based on convolutional neural network Withdrawn CN112069900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010780781.6A CN112069900A (en) 2020-08-06 2020-08-06 Bill character recognition method and system based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010780781.6A CN112069900A (en) 2020-08-06 2020-08-06 Bill character recognition method and system based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN112069900A true CN112069900A (en) 2020-12-11

Family

ID=73657192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010780781.6A Withdrawn CN112069900A (en) 2020-08-06 2020-08-06 Bill character recognition method and system based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN112069900A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699861A (en) * 2021-03-24 2021-04-23 杭州学谷智能科技有限公司 Natural scene bill correction method based on neural network hotspot graph
CN112818835A (en) * 2021-01-29 2021-05-18 南京大学 Method for rapidly identifying and analyzing two-dimensional material by using machine learning method
CN112949471A (en) * 2021-02-27 2021-06-11 浪潮云信息技术股份公司 Domestic CPU-based electronic official document identification reproduction method and system
CN113326809A (en) * 2021-06-30 2021-08-31 重庆大学 Off-line signature identification method and system based on three-channel neural network
CN113449706A (en) * 2021-08-31 2021-09-28 四川野马科技有限公司 Bill document identification and archiving method and system based on artificial intelligence
CN113780121A (en) * 2021-08-30 2021-12-10 国网上海市电力公司 Power system operation instruction ticket automatic identification application method based on artificial intelligence
CN112651353B (en) * 2020-12-30 2024-04-16 南京红松信息技术有限公司 Target calculation positioning and identifying method based on custom label


Similar Documents

Publication Publication Date Title
CN109948510B (en) Document image instance segmentation method and device
CN112069900A (en) Bill character recognition method and system based on convolutional neural network
Khan et al. Urdu optical character recognition systems: Present contributions and future directions
Balci et al. Handwritten text recognition using deep learning
CN111652332B (en) Deep learning handwritten Chinese character recognition method and system based on two classifications
CN113537227B (en) Structured text recognition method and system
CN111523622B (en) Method for simulating handwriting by mechanical arm based on characteristic image self-learning
CN110674777A (en) Optical character recognition method in patent text scene
Akhand et al. Convolutional Neural Network based Handwritten Bengali and Bengali-English Mixed Numeral Recognition.
CN111666937A (en) Method and system for recognizing text in image
Rao et al. Exploring deep learning techniques for kannada handwritten character recognition: A boon for digitization
Sethy et al. Off-line Odia handwritten numeral recognition using neural network: a comparative analysis
Suresh et al. Telugu Optical Character Recognition Using Deep Learning
CN113673528B (en) Text processing method, text processing device, electronic equipment and readable storage medium
Dipu et al. Bangla optical character recognition (ocr) using deep learning based image classification algorithms
Lin et al. Radical-based extract and recognition networks for Oracle character recognition
Singh et al. A comprehensive survey on Bangla handwritten numeral recognition
Joshi et al. Combination of multiple image features along with KNN classifier for classification of Marathi Barakhadi
Nath et al. Improving various offline techniques used for handwritten character recognition: a review
Nguyen-Trong An End-to-End Method to Extract Information from Vietnamese ID Card Images
Krithiga et al. Ancient character recognition: a comprehensive review
Hijam et al. Convolutional neural network based Meitei Mayek handwritten character recognition
Cui et al. Chinese calligraphy recognition system based on convolutional neural network
Manzoor et al. A Novel System for Multi-Linguistic Text Identification and Recognition in Natural Scenes using Deep Learning
Kaur et al. Urdu ligature recognition techniques-A review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201211