CN113657162A - Bill OCR recognition method based on deep learning - Google Patents
- Publication number
- CN113657162A (application number CN202110799703.5A)
- Authority
- CN
- China
- Prior art keywords
- bill
- character
- model
- classification model
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention provides a bill OCR recognition method based on deep learning, belonging to the technical field of optical character recognition and comprising the following steps: step S10, collecting a large number of bill images of different types; step S20, labeling and augmenting each bill image to obtain a training data set; step S30, creating a bill classification model, a character detection model, a character direction classification model and a character recognition model based on deep learning; step S40, training the four models respectively using the training data set; and step S50, intelligently recognizing the bill to be recognized with the four trained models. The invention greatly improves both the precision and the efficiency of bill character recognition.
Description
Technical Field
The invention relates to the technical field of optical character recognition, in particular to a bill OCR recognition method based on deep learning.
Background
Traditionally, characters were entered into electronic equipment mainly by manual reading and typing, but this approach cannot meet the demand for massive character input: it is inefficient, and labor costs are high. With the development and popularization of informatization, OCR (Optical Character Recognition) technology emerged. OCR refers to the process in which an electronic device (such as a scanner or digital camera) examines characters printed on paper, determines their shapes by detecting differences in texture features, and then translates the shapes into computer characters with a character recognition method. It mainly comprises OCR detection technology and OCR recognition technology.
OCR detection mainly exploits the difference between characters and the background region of an image to locate characters or text lines. Character-based OCR detection algorithms first detect the region of each single character and then merge these regions into text-line regions according to the positional relationship between characters; the classical CTPN detection algorithm is one example. However, because the gaps between many characters are small, it is difficult to detect character boundaries accurately, characters must be combined using context, and the pipeline is complex. Text-line-based OCR detection algorithms instead identify whole text-line regions with an object detection or semantic segmentation algorithm.
OCR recognition identifies the character or text-line information in the detected image blocks. Traditional methods rely mainly on template matching, comparing the gray-level differences between the characters in different templates and the image to be recognized to obtain the most similar character.
Because traditional OCR technology suffers from low recognition precision, complex processing pipelines and low detection speed, it cannot be applied to recognizing bills that come in many types and in large volume. How to provide a deep-learning-based bill OCR recognition method that improves the precision and efficiency of bill character recognition has therefore become an urgent problem.
Disclosure of Invention
The invention aims to provide a bill OCR recognition method based on deep learning, so that the precision and the efficiency of bill character recognition are improved.
The invention is realized by the following steps: a bill OCR recognition method based on deep learning comprises the following steps:
step S10, collecting a large number of bill images of different types;
step S20, labeling and expanding each bill image to obtain a training data set;
step S30, creating a bill classification model, a plurality of character detection models, a character direction classification model and a character recognition model based on deep learning;
step S40, training a bill classification model, a character detection model, a character direction classification model and a character recognition model respectively by using the training data set;
and S50, intelligently recognizing the bill to be recognized by utilizing the trained bill classification model, the trained character detection model, the trained character direction classification model and the trained character recognition model.
Further, the step S10 is specifically:
under the environment of different illumination intensity and background color, a large number of different types of bill images are collected.
Further, the step S20 is specifically:
and marking the bill type, the character area, the character direction and the character content of each collected bill image through an image detection marking tool, and carrying out sample expansion operation of random deflection, translation or scaling on each marked bill image to obtain a training data set.
Further, in the step S30, the bill classification model is used to classify bills; it uses a VGG16 network as the backbone feature extraction network and a cross entropy function as the loss function;
the character detection model is used to detect and crop text regions; it is a DBNet network that adopts a ResNet network as the backbone feature extraction network and uses deformable convolution to extract features, so that the receptive field changes with the features;
the character direction classification model is used to identify the arrangement direction of characters; it uses an RCNN (recurrent convolutional neural network) as the backbone feature extraction network and a binary cross entropy function as the loss function;
the character recognition model is used to recognize characters; bottom-layer image features are extracted through a CNN network, the context representation of a text line is extracted through an RNN network, and a CTC function is used as the loss function.
Further, the step S40 specifically includes:
step S41, dividing the training data set into a training set and a verification set according to a preset proportion;
step S42, respectively training a bill classification model, a character detection model, a character direction classification model and a character recognition model for preset times by utilizing the training set;
step S43, the bill classification model, the character detection model, the character direction classification model and the character recognition model are verified respectively by utilizing the verification set, and the step S50 is carried out if the verification is passed; if the verification is not passed, the training data set is expanded, and the process proceeds to step S41.
Further, the step S50 specifically includes:
step S51, respectively creating a bill template for each bill type;
step S52, classifying the bill to be recognized with the trained bill classification model and inputting it into the corresponding character detection model;
step S53, the character detection model identifies and crops the text regions in the bill to be recognized to obtain text pictures, which are input into the character direction classification model;
step S54, the character direction classification model identifies the character arrangement direction of each text picture, corrects the picture so that the characters are arranged horizontally, and inputs the corrected text picture into the character recognition model;
step S55, the character recognition model recognizes the characters in the text picture and automatically fills them into the corresponding positions of the bill template;
and step S56, storing and displaying the filled bill template, completing the intelligent recognition of the bill to be recognized.
Further, in step S51, the bill template at least includes a field name, a reference field name, and a field value filling position.
The invention has the advantages that:
1. A large number of bill images of different types are collected, labeled and augmented to obtain a training data set, and the training data set is used to train the bill classification model, character detection model, character direction classification model and character recognition model created based on deep learning. The trained models are then used in sequence to perform bill classification, text region detection and cropping, character direction correction and character recognition, completing the intelligent recognition of the bill. Because each model is trained on an expanded sample size, the recognition pipeline both classifies and corrects the bill, and the deep-learning models can extract high-level semantic features and exploit the contextual semantic information of text lines to overcome effects such as blurred shooting, the precision of bill character recognition is greatly improved.
2. The bill to be recognized is recognized automatically by the bill classification model, character detection model, character direction classification model and character recognition model, and the recognized content is automatically filled into the corresponding bill template. Compared with manual entry and review of bill information, this greatly improves the efficiency of bill character recognition.
Drawings
The invention will be further described below through embodiments with reference to the accompanying drawings.
FIG. 1 is a flow chart of a bill OCR recognition method based on deep learning.
FIG. 2 is a schematic block diagram of the circuit of a deep learning based bill OCR recognition system of the present invention.
Description of reference numerals:
100-a bill OCR recognition system based on deep learning, 1-camera, 2-industrial personal computer, 3-display.
Detailed Description
The general idea of the technical scheme in the embodiments of the present application is as follows: collect a large number of bill images of different types, label and augment them to obtain a training data set, and use the training data set to train the bill classification model, character detection model, character direction classification model and character recognition model created based on deep learning. Then use the trained models in sequence to classify the bill, detect and crop the text regions, correct the character arrangement direction and recognize the characters, completing the intelligent recognition of the bill, and automatically fill the recognized content into the corresponding bill template, thereby improving the precision and efficiency of bill character recognition.
Referring to FIG. 1 and FIG. 2, the present invention uses a deep learning based bill OCR recognition system 100, which includes a camera 1, an industrial personal computer 2, a display 3, a workbench (not shown) and a support (not shown). One end of the industrial personal computer 2 is connected to the camera 1, and the other end is connected to the display 3; the camera 1 is mounted above the workbench through the support.
The camera 1 is used to photograph the bill to be recognized and transmit the image to the industrial personal computer 2; it supports a wide range of variable resolutions, with a width of 1000 to 2000 pixels and a height of 1000 to 8000 pixels. The industrial personal computer 2 recognizes the bill to be recognized; the display 3 shows the recognition result of the industrial personal computer 2; and the workbench holds the bill to be recognized for convenient shooting by the camera 1.
The invention discloses a better embodiment of a bill OCR recognition method based on deep learning, which comprises the following steps:
step S10, collecting a large number of bill images of different types;
step S20, labeling and expanding each bill image to obtain a training data set;
step S30, creating a bill classification model, a plurality of character detection models, a character direction classification model and a character recognition model based on deep learning;
step S40, training a bill classification model, a character detection model, a character direction classification model and a character recognition model respectively by using the training data set;
and S50, intelligently recognizing the bill to be recognized by utilizing the trained bill classification model, the trained character detection model, the trained character direction classification model and the trained character recognition model.
With this recognition method, the bill to be recognized can still be recognized well even when it is bent, deformed or skewed.
The step S10 specifically includes:
Under environments with different illumination intensities and background colors, collect a large number of bill images of different types to increase the diversity of the images. In a specific implementation, cameras of different specifications can be used to collect the bill images.
The step S20 specifically includes:
and marking the bill type, the character area, the character direction and the character content of each collected bill image through an image detection marking tool, and carrying out sample expansion operation of random deflection, translation or scaling on each marked bill image to obtain a training data set. In specific implementation, text lines with different formats and sizes can be added into background patterns of different bill images to expand the sample size. The bill types comprise value-added tax invoices, train tickets, bus tickets and the like. The image detection annotation tool is preferably labelme.
In the step S30, the bill classification model is used to classify bills; it uses a VGG16 network as the backbone feature extraction network and a cross entropy function as the loss function;
the character detection model is used to detect and crop text regions; it is a DBNet (Differentiable Binarization) network that adopts a ResNet network as the backbone feature extraction network and uses deformable convolution to extract features, so that the receptive field changes with the features, making the model suitable for detecting text regions of different shapes, such as distorted or deformed text;
the character direction classification model is used to identify the arrangement direction of characters; it uses an RCNN (recurrent convolutional neural network) as the backbone feature extraction network and a binary cross entropy function as the loss function;
the character recognition model is used to recognize characters; bottom-layer image features are extracted through a CNN network, the context representation of a text line is extracted through an RNN network, and a CTC function is used as the loss function, i.e., the character recognition model adopts a CRNN-CTC network.
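The readout of a CTC-trained recognition model can be illustrated with greedy decoding, the standard procedure of collapsing consecutive repeated predictions and then dropping blanks. This is a generic sketch, not code from the patent; index 0 of `charset` is assumed to be the CTC blank symbol.

```python
def ctc_greedy_decode(frame_ids, charset, blank=0):
    """Greedy CTC decoding: given the per-time-step argmax class ids from
    the recognition network, collapse consecutive repeats, then drop
    blanks, yielding the recognized text string."""
    out, prev = [], None
    for idx in frame_ids:
        if idx != prev and idx != blank:
            out.append(charset[idx])
        prev = idx
    return "".join(out)
```

For example, with `charset = ["-", "h", "e", "l", "o"]`, the frame sequence `[1, 1, 0, 2, 2, 3, 0, 3, 4]` decodes to `"hello"`: the blank between the two `3`s is what allows the doubled letter to survive collapsing.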
The step S40 specifically includes:
step S41, dividing the training data set into a training set and a verification set according to a preset proportion;
step S42, respectively training a bill classification model, a character detection model, a character direction classification model and a character recognition model for preset times by utilizing the training set;
step S43, the bill classification model, the character detection model, the character direction classification model and the character recognition model are verified respectively by utilizing the verification set, and the step S50 is carried out if the verification is passed; if the verification is not passed, the training data set is expanded, and the process proceeds to step S41.
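Steps S41 to S43 amount to a split-train-verify loop. The following is a minimal control-flow sketch, with the training, verification and data-expansion routines passed in as placeholder callables; all names are illustrative assumptions, not the patent's API.

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Step S41: shuffle the labeled samples and divide them into a
    training set and a verification set by a preset proportion."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

def train_until_verified(models, train_fn, verify_fn, expand_fn, dataset, max_rounds=5):
    """Steps S41-S43 as control flow: train each model for the preset
    number of iterations (inside train_fn), verify on the held-out set,
    and expand the data and retrain if any model fails verification."""
    for _ in range(max_rounds):
        train_set, val_set = split_dataset(dataset)
        for m in models:
            train_fn(m, train_set)
        if all(verify_fn(m, val_set) for m in models):
            return True
        dataset = expand_fn(dataset)  # verification failed: enlarge the data
    return False
```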
The step S50 specifically includes:
step S51, respectively creating a bill template for each bill type;
step S52, classifying the bill to be recognized with the trained bill classification model and inputting it into the corresponding character detection model;
step S53, the character detection model identifies and crops the text regions in the bill to be recognized to obtain text pictures, which are input into the character direction classification model;
step S54, the character direction classification model identifies the character arrangement direction of each text picture, corrects the picture so that the characters are arranged horizontally, and inputs the corrected text picture into the character recognition model; that is, it judges whether the text picture is inverted, flipped or tilted, and corrects it if so;
step S55, the character recognition model recognizes the characters in the text picture and automatically fills them into the corresponding positions of the bill template;
and step S56, storing and displaying the filled bill template, completing the intelligent recognition of the bill to be recognized.
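The inference flow of steps S51 to S56 can be sketched as a chain of the four trained models. Every callable here is an illustrative stand-in rather than the patent's interface; in particular, `recognize` is assumed, for brevity, to return an already-matched (field, value) pair rather than raw text.

```python
def recognize_bill(image, classify, detectors, classify_direction, rotate, recognize, templates):
    """End-to-end flow of step S50: classify the bill, route it to the
    detection model for its type, fix text direction, recognize, and fill
    the matching template. All parameter names are hypothetical."""
    bill_type = classify(image)                    # S52: bill classification
    template = dict(templates[bill_type])          # S51: copy the bill template
    for region in detectors[bill_type](image):     # S53: detect and crop text regions
        direction = classify_direction(region)     # S54: identify arrangement direction
        upright = rotate(region, direction)        #      correct to horizontal
        field, value = recognize(upright)          # S55: character recognition
        if field in template:
            template[field] = value                #      fill the template slot
    return template                                # S56: stored/displayed result
```

Note that `detectors` is a mapping from bill type to detection model, reflecting the plurality of character detection models created in step S30.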
In step S51, the bill template at least includes a field name, a reference field name, and a field value filling position. Because the field name is manually specified and does not appear verbatim in the OCR recognition result, the field value filling position cannot be matched directly; the reference field name is therefore set to perform the matching.
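One plausible way to match a recognized label against the template's reference field names is fuzzy string matching, which tolerates small OCR errors in the printed label. The patent does not specify the matching algorithm, so the use of `difflib` here is purely an assumption.

```python
import difflib

def match_field(ocr_line, reference_names, cutoff=0.6):
    """Match one recognized text line against the template's reference
    field names; return the best-matching field name, or None if nothing
    is similar enough. Fuzzy matching absorbs small recognition errors."""
    hits = difflib.get_close_matches(ocr_line, reference_names, n=1, cutoff=cutoff)
    return hits[0] if hits else None
```

For example, an OCR result of "Invoice Numbr" would still match a reference field name "Invoice Number", while unrelated text falls below the cutoff and is ignored.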
In summary, the invention has the advantages that:
1. A large number of bill images of different types are collected, labeled and augmented to obtain a training data set, and the training data set is used to train the bill classification model, character detection model, character direction classification model and character recognition model created based on deep learning. The trained models are then used in sequence to perform bill classification, text region detection and cropping, character direction correction and character recognition, completing the intelligent recognition of the bill. Because each model is trained on an expanded sample size, the recognition pipeline both classifies and corrects the bill, and the deep-learning models can extract high-level semantic features and exploit the contextual semantic information of text lines to overcome effects such as blurred shooting, the precision of bill character recognition is greatly improved.
2. The bill to be recognized is recognized automatically by the bill classification model, character detection model, character direction classification model and character recognition model, and the recognized content is automatically filled into the corresponding bill template. Compared with manual entry and review of bill information, this greatly improves the efficiency of bill character recognition.
Although specific embodiments of the invention have been described above, it will be understood by those skilled in the art that the specific embodiments described are illustrative only and are not limiting upon the scope of the invention, and that equivalent modifications and variations can be made by those skilled in the art without departing from the spirit of the invention, which is to be limited only by the appended claims.
Claims (7)
1. A bill OCR recognition method based on deep learning is characterized in that: the method comprises the following steps:
step S10, collecting a large number of bill images of different types;
step S20, labeling and expanding each bill image to obtain a training data set;
step S30, creating a bill classification model, a plurality of character detection models, a character direction classification model and a character recognition model based on deep learning;
step S40, training a bill classification model, a character detection model, a character direction classification model and a character recognition model respectively by using the training data set;
and S50, intelligently recognizing the bill to be recognized by utilizing the trained bill classification model, the trained character detection model, the trained character direction classification model and the trained character recognition model.
2. A deep learning based bill OCR recognition method as claimed in claim 1, wherein: the step S10 specifically includes:
under the environment of different illumination intensity and background color, a large number of different types of bill images are collected.
3. A deep learning based bill OCR recognition method as claimed in claim 1, wherein: the step S20 specifically includes:
and marking the bill type, the character area, the character direction and the character content of each collected bill image through an image detection marking tool, and carrying out sample expansion operation of random deflection, translation or scaling on each marked bill image to obtain a training data set.
4. A deep learning based bill OCR recognition method as claimed in claim 1, wherein: in the step S30, the bill classification model is used to classify bills, and uses a VGG16 network as a backbone feature extraction network and a cross entropy function as a loss function;
the character detection model is used for detecting and cropping a text region, is a DBNet network, adopts a ResNet network as a backbone feature extraction network, and adopts deformable convolution to extract features, so that the receptive field changes with the features;
the character direction classification model is used for identifying the arrangement direction of characters, uses an RCNN as a backbone feature extraction network, and uses a binary cross entropy function as a loss function;
the character recognition model is used for recognizing characters; bottom-layer image features are extracted through a CNN network, context representations of text lines are extracted through an RNN network, and a CTC function is used as a loss function.
5. A deep learning based bill OCR recognition method as claimed in claim 1, wherein: the step S40 specifically includes:
step S41, dividing the training data set into a training set and a verification set according to a preset proportion;
step S42, respectively training a bill classification model, a character detection model, a character direction classification model and a character recognition model for preset times by utilizing the training set;
step S43, the bill classification model, the character detection model, the character direction classification model and the character recognition model are verified respectively by utilizing the verification set, and the step S50 is carried out if the verification is passed; if the verification is not passed, the training data set is expanded, and the process proceeds to step S41.
6. A deep learning based bill OCR recognition method as claimed in claim 1, wherein: the step S50 specifically includes:
step S51, respectively creating a bill template for each bill type;
step S52, classifying the bill to be recognized with the trained bill classification model and inputting it into the corresponding character detection model;
step S53, the character detection model identifies and crops the text regions in the bill to be recognized to obtain text pictures, which are input into the character direction classification model;
step S54, the character direction classification model identifies the character arrangement direction of each text picture, corrects the picture so that the characters are arranged horizontally, and inputs the corrected text picture into the character recognition model;
step S55, the character recognition model recognizes the characters in the text picture and automatically fills them into the corresponding positions of the bill template;
and step S56, storing and displaying the filled bill template, completing the intelligent recognition of the bill to be recognized.
7. The deep learning based bill OCR recognition method as claimed in claim 6, wherein: in step S51, the ticket template at least includes a field name, a reference field name, and a field value filling position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110799703.5A CN113657162A (en) | 2021-07-15 | 2021-07-15 | Bill OCR recognition method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110799703.5A CN113657162A (en) | 2021-07-15 | 2021-07-15 | Bill OCR recognition method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113657162A true CN113657162A (en) | 2021-11-16 |
Family
ID=78477398
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110799703.5A Pending CN113657162A (en) | 2021-07-15 | 2021-07-15 | Bill OCR recognition method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113657162A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116151532A (en) * | 2022-09-29 | 2023-05-23 | 河北数微信息技术有限公司 | Self-service government service handling method and device, computer equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109977957A (en) * | 2019-03-04 | 2019-07-05 | 苏宁易购集团股份有限公司 | A kind of invoice recognition methods and system based on deep learning |
CN111178345A (en) * | 2019-05-20 | 2020-05-19 | 京东方科技集团股份有限公司 | Bill analysis method, bill analysis device, computer equipment and medium |
CN112115934A (en) * | 2020-09-16 | 2020-12-22 | 四川长虹电器股份有限公司 | Bill image text detection method based on deep learning example segmentation |
WO2021126229A1 (en) * | 2019-12-20 | 2021-06-24 | Jumio Corporation | Machine learning for data extraction |
CN113033543A (en) * | 2021-04-27 | 2021-06-25 | 中国平安人寿保险股份有限公司 | Curved text recognition method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111325203B (en) | American license plate recognition method and system based on image correction | |
CN107798321B (en) | Test paper analysis method and computing device | |
JP4928310B2 (en) | License plate recognition device, control method thereof, computer program | |
CN105046196B (en) | Front truck information of vehicles structuring output method based on concatenated convolutional neutral net | |
Gebhardt et al. | Document authentication using printing technique features and unsupervised anomaly detection | |
CN110807454B (en) | Text positioning method, device, equipment and storage medium based on image segmentation | |
CN111626292B (en) | Text recognition method of building indication mark based on deep learning technology | |
CN110598566A (en) | Image processing method, device, terminal and computer readable storage medium | |
CN114092938B (en) | Image recognition processing method and device, electronic equipment and storage medium | |
CN111414905B (en) | Text detection method, text detection device, electronic equipment and storage medium | |
CN113901952A (en) | Print form and handwritten form separated character recognition method based on deep learning | |
CN113158895A (en) | Bill identification method and device, electronic equipment and storage medium | |
CN111783541A (en) | Text recognition method and device | |
CN115810197A (en) | Multi-mode electric power form recognition method and device | |
CN109508714B (en) | Low-cost multi-channel real-time digital instrument panel visual identification method and system | |
CN113657162A (en) | Bill OCR recognition method based on deep learning | |
CN114463770A (en) | Intelligent question-cutting method for general test paper questions | |
KR102562170B1 (en) | Method for providing deep learning based paper book digitizing service | |
CN111832497B (en) | Text detection post-processing method based on geometric features | |
CN113569677A (en) | Paper test report generation method based on scanning piece | |
CN110766001B (en) | Bank card number positioning and end-to-end identification method based on CNN and RNN | |
CN108052936B (en) | Automatic inclination correction method and system for Braille image | |
CN112861861B (en) | Method and device for recognizing nixie tube text and electronic equipment | |
CN108133205B (en) | Method and device for copying text content in image | |
CN108062548B (en) | Braille square self-adaptive positioning method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||