CN112347994A - Invoice image target detection and angle detection method based on deep learning - Google Patents
- Publication number
- CN112347994A (application CN202011379828.4A)
- Authority
- CN
- China
- Prior art keywords
- invoice
- training
- model
- data
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/41—Analysis of document content
- G06V30/413—Classification of content, e.g. text, photographs or tables
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/153—Segmentation of character regions using recognition of characters or words
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of image processing and discloses an invoice image target detection and angle detection method based on deep learning, which addresses the complicated flow and low efficiency of early-stage invoice segmentation, classification and angle detection in an invoice recognition task. The method trains a model on actually annotated invoice data and completes invoice segmentation, classification and angle detection with a single trained model. Model training comprises: constructing a deep network model and setting the model parameters; loading the training data and validation data; taking a batch (batch-size) of invoice pictures from the training set, training the model, and updating the model parameters with SGD; and, after training, verifying the model accuracy on the validation set and saving the trained model. The invention is suitable for invoice image recognition.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an invoice image target detection and angle detection method based on deep learning.
Background
In recent years, AI technology has developed rapidly and its application fields have grown ever wider: robotics, speech recognition, image recognition, computer vision, autonomous driving, and so on. Within image recognition, deep-learning-based OCR is widely adopted in industry because of advantages such as high recognition accuracy and speed. With the development of big data and cloud computing, OCR has also become a powerful tool in the field of financial reimbursement for extracting the contents of various bills for electronic management and big-data mining. During financial reimbursement, the bills to be reimbursed are typically pasted onto A4 paper. Traditionally, professional financial staff scan the A4 sheets and then manually enter the bill contents; this is tedious, time-consuming, labour-intensive, and limits data accuracy. To free front-line financial staff, OCR technology was introduced into financial reimbursement. The general flow for this scenario is: 1) segment the invoices on the A4 bill sheet; 2) classify the segmented invoice images; 3) rotation-correct the segmented images of each category; 4) detect text in the corrected images; 5) run OCR on the detected text images; 6) generate structured data from the recognition results. The overall flow remains fairly complicated. To simplify it, the invention optimises the image preprocessing stage: using deep learning, the first three steps are fused into a single network, a model is trained on actually annotated data, and invoice segmentation, classification and angle detection are performed with one trained model.
This greatly simplifies the overall invoice recognition flow in the field of financial reimbursement; fusing three models into one also reduces the difficulty of model deployment and maintenance, saves computing power, and improves the preprocessing efficiency of bill recognition.
Disclosure of Invention
The technical problem to be solved by the invention is the complicated flow and low efficiency of early-stage invoice segmentation, classification and angle detection in an invoice recognition task.
To solve this problem, the invention adopts the following technical scheme: train a model on actually annotated invoice data, and complete invoice segmentation, classification and angle detection with a single trained model. The actual invoice annotation data comprise: the coordinates of the rectangular box containing each invoice region on the invoice image, the invoice category, and the rotation angle of the invoice sub-region.
Further, the model training in the invention comprises the following steps:
S1: constructing a deep network model and setting the model parameters, which comprise network parameters and training parameters;
S2: loading the training data and validation data;
S3: judging whether the current epoch is less than the total number of epochs; if so, go to step S4, otherwise end training;
S4: taking a batch (batch-size) of invoice pictures from the training set, training the model, and updating the model parameters with SGD;
S5: judging whether all pictures in the training set have been used; if so, go to step S6, otherwise go to step S4;
S6: verifying the model accuracy on the validation set, saving the trained model, and going to step S3.
Further, before model training, the processing of the actual invoice annotation data comprises label-format conversion and data division; during label-format conversion, the annotation data are converted into the COCO data format, and an angle field is added for each invoice target in the final converted COCO annotation file.
Further, the invention divides the data with the following strategy: classify the training data by the invoice categories in the annotated invoice images; if an invoice image contains invoices of several categories, count the number of invoices of each category in the image; if the counts are equal, add the image to each corresponding category's bill set, otherwise add it to the set of the category with the larger count.
Furthermore, the deep network model is built on Faster R-CNN; during construction a branch is added to the ROIHead part of Faster R-CNN, parallel to the coordinate-box regression branch and the invoice classification branch, to directly regress and predict the rotation angle of the target region;
when setting the network parameters, the pixel mean and standard deviation std of the training image data are computed for the RPNHead module of Faster R-CNN, and the training data are normalised with the obtained image pixel statistics; the number of classification categories of the box_head and mask_head modules of Faster R-CNN is modified and set to the number of actually annotated invoice categories.
The invention has the following beneficial effects: the invention optimises the image preprocessing stage; using deep learning, segmentation, classification and angle detection are fused into one network, a model is trained on actually annotated data, and invoice segmentation, classification and angle detection are performed with a single trained model. This greatly simplifies the overall invoice recognition flow in financial reimbursement; fusing three models into one also reduces the difficulty of model deployment and maintenance, saves computing power, and improves the preprocessing efficiency of bill recognition.
Drawings
FIG. 1 is the improved Faster R-CNN-based invoice target detection and angle detection network structure;
FIG. 2 is a network model training process.
Detailed Description
Aiming at the complicated preprocessing flow and low development and maintenance efficiency of traditional invoice recognition, the invention discloses an invoice image target detection and angle detection method based on deep learning, which improves the processing efficiency of the whole bill recognition flow by means of deep learning. To this end, the invention provides the following specific technical scheme:
step 1: training data manual labeling
Step 1.1: annotation data preparation. Collect A4 scanned images of invoices actually pasted for reimbursement; the required number depends on the bill types, and in principle the A4 scans should contain no fewer than 200 annotated boxes per bill type. Define the category name of each bill type.
Step 1.2: training data annotation. The specific annotation contents are: 1) the coordinates of the rectangular box containing each invoice region on the A4 sheet; 2) the invoice category; 3) the rotation angle of the invoice sub-region. The angle value is computed from the coordinates of two points marked on the image: if the target bill image contains a straight line, mark two points on that line; otherwise, mark two points on the straight line along the underline of a long text line in the image. The two marked points are recorded as a start point and an end point respectively. The annotation tool is VIA.
Step 1.3: compute the rotation angle of the target invoice in the annotated image.
Take the rectangular-box list and the marked-point list of an annotated image and establish the correspondence between boxes and points; in principle, each invoice on the A4 sheet corresponds to one rectangular box and two marked points for computing its rotation angle, with start-point coordinates (x1, y1) and end-point coordinates (x2, y2). With these basic data, the rotation angle of the invoice is computed according to the following rule.
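The concrete angle rule is not reproduced in this text. A minimal sketch, assuming the angle is simply the inclination of the marked line from start point to end point in image coordinates (y axis pointing down) — the function name and the `atan2` formulation are illustrative assumptions, not taken from the patent:

```python
import math

def invoice_angle(x1, y1, x2, y2):
    """Rotation angle in degrees of the marked line from (x1, y1) to
    (x2, y2), in image coordinates with the y axis pointing down.
    Assumed rule -- the patent does not reproduce the exact formula."""
    return math.degrees(math.atan2(y2 - y1, x2 - x1))
```

A horizontal left-to-right line yields 0 degrees; a line pointing straight down yields 90.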
Step1.4 training data label format conversion and partitioning.
Step 1.4.1: convert the annotation data into the COCO data format, and add an angle field `angle` for each invoice target in the final converted COCO annotation file; its value is the Step 1.3 result.
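Adding the extra field to an existing COCO label file can be sketched as follows; the function name and the id-to-angle mapping are illustrative assumptions, but the COCO `annotations` list and per-annotation `id` key are standard:

```python
import json

def add_angle_field(coco_file, angles, out_file):
    """Add an `angle` field to every invoice annotation of a COCO-format
    label file. `angles` maps annotation id -> Step 1.3 result.
    (Function and argument names are illustrative, not from the patent.)"""
    with open(coco_file, encoding="utf-8") as f:
        coco = json.load(f)
    for ann in coco["annotations"]:
        ann["angle"] = angles[ann["id"]]
    with open(out_file, "w", encoding="utf-8") as f:
        json.dump(coco, f, ensure_ascii=False)
```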
Step 1.4.2: because the types and number of invoices pasted on one A4 sheet are uncertain, a simple random split cannot guarantee the model's generalisation across invoice types, so data division adopts the following strategy: classify the training data by the invoice categories in the annotated A4 images; if an A4 image contains invoices of several categories, count the number of invoices of each category on it; if the counts are equal, add the image to each corresponding category's bill set, otherwise add it to the set of the category with the larger count.
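The assignment rule above can be sketched as follows; treating a tie as membership in every tied category's set is one reading of the text, and the function name is an assumption:

```python
from collections import Counter

def page_categories(invoice_categories):
    """Decide which bill set(s) an A4 page joins: the most frequent
    invoice category on the page wins; on a tie the page joins each
    tied category's set (one reading of the patent's tie rule)."""
    counts = Counter(invoice_categories)
    top = max(counts.values())
    return sorted(c for c, n in counts.items() if n == top)
```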
Step 1.4.3: count all annotated A4 images and divide the total data set into a training set and a validation set at a ratio of 4:1, with the A4 images of each bill type evenly distributed. For example, if 2,000 A4 images are annotated across 10 invoice types, the number of each bill type in the training and validation sets should be as close to 160:40 as possible. The resulting sets are recorded as trainDataset and validateDataset respectively.
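The per-category 4:1 division can be sketched as a stratified split; the seeded shuffle and the function name are assumptions, the ratio comes from the text:

```python
import random

def stratified_split(pages_by_category, train_ratio=0.8, seed=0):
    """Split each bill category's A4 pages 4:1 so every type is evenly
    represented in trainDataset and validateDataset (seeded shuffle is
    an illustrative assumption)."""
    rng = random.Random(seed)
    train, val = [], []
    for category in sorted(pages_by_category):
        pages = list(pages_by_category[category])
        rng.shuffle(pages)
        cut = round(len(pages) * train_ratio)
        train.extend(pages[:cut])
        val.extend(pages[cut:])
    return train, val
```

With 200 pages per category this gives exactly the 160:40 split of the example.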
Step2 network structure and training parameter setting
Step2.1 network architecture
The network structure of the invention is an improvement of the Faster R-CNN network. The backbone for basic feature extraction is ResNet-101 and the RPN module is unchanged; a branch is added to the ROIHead part, parallel to the coordinate-box regression branch and the invoice classification branch, to directly regress and predict the rotation angle of the target region. The loss function of this branch is the mean squared error, with the ground-truth and predicted angles normalised during training. The overall loss function combines the Faster R-CNN network loss and the angle regression branch loss. The detailed structure is shown in FIG. 1.
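The combined loss can be sketched numerically as below. The patent only states that angles are normalised and that the losses are combined; dividing by 180 and an equal weighting factor are illustrative assumptions:

```python
import numpy as np

def angle_regression_loss(pred_deg, true_deg, scale=180.0):
    """MSE between predicted and ground-truth rotation angles after
    normalisation (dividing by `scale` is an assumed scheme)."""
    p = np.asarray(pred_deg, dtype=float) / scale
    t = np.asarray(true_deg, dtype=float) / scale
    return float(np.mean((p - t) ** 2))

def total_loss(frcnn_loss, pred_deg, true_deg, angle_weight=1.0):
    """Overall loss: Faster R-CNN loss plus the weighted angle branch
    loss (the weight is an assumption; the patent gives no value)."""
    return frcnn_loss + angle_weight * angle_regression_loss(pred_deg, true_deg)
```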
Step2.2 network parameter settings
In the experiment the corresponding network parameters need to be modified. The main modifications are: 1) compute the pixel mean and standard deviation std of the training image data for the RPNHead module, and normalise the training data with the obtained image pixel statistics; 2) modify the number of classification categories of the box_head and mask_head modules, setting it to the number of actually annotated invoice categories.
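Computing the per-channel statistics and normalising with them can be sketched as follows (function names are illustrative; the mean/std standardisation itself is standard practice):

```python
import numpy as np

def channel_mean_std(images):
    """Per-channel pixel mean and std over the training images
    (each image an H x W x 3 array)."""
    pixels = np.concatenate([im.reshape(-1, 3).astype(np.float64) for im in images])
    return pixels.mean(axis=0), pixels.std(axis=0)

def normalize(image, mean, std):
    """Standardise one image with the training-set statistics."""
    return (image.astype(np.float64) - mean) / std
```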
Step2.3 training parameter settings
The experiment uses an SGD optimizer with momentum 0.9; the single-GPU learning rate is set to 0.001 and the weight decay coefficient to 0.0001. The batch size on a single GPU (16 GB) is 8, and the number of training epochs is 50.
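A single parameter update with these settings looks like the following (a scalar, PyTorch-style momentum update with the L2 penalty folded into the gradient; the exact update convention used in the experiment is an assumption):

```python
def sgd_momentum_step(w, grad, velocity, lr=0.001, momentum=0.9, weight_decay=0.0001):
    """One SGD update with momentum and L2 weight decay, matching the
    training parameters above; scalar form for clarity."""
    g = grad + weight_decay * w          # L2 penalty folded into the gradient
    velocity = momentum * velocity + g   # momentum buffer
    return w - lr * velocity, velocity
```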
Step 3: model training
The network structure of FIG. 1 is trained with trainDataset and validated with validateDataset, following the procedure of FIG. 2. The training steps are as follows:
Step 3.1: construct the network model and set the model parameters; initialise them according to Steps 2.2-2.3;
Step 3.2: load the training data and validation data;
Step 3.3: judge whether the current epoch is less than the total number of epochs; if so, go to Step 3.4, otherwise finish training;
Step 3.4: take a batch (batch-size) of invoice pictures from the training set, train the model, and update the model parameters with SGD;
Step 3.5: judge whether all pictures in the training set have been used; if so, go to Step 3.6, otherwise go to Step 3.4;
Step 3.6: verify the model accuracy on the validation set, save the trained model, and go to Step 3.3.
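The Step 3 procedure amounts to the following skeleton, where `model` is a placeholder object exposing hypothetical `sgd_step`, `validate` and `save` hooks (these names stand in for the actual framework calls and are not from the patent):

```python
def train(model, train_batches, val_batches, epochs=50):
    """Skeleton of Steps 3.3-3.6: for each epoch, update on every batch
    with SGD, then validate and checkpoint the model."""
    accuracies = []
    for epoch in range(epochs):              # Step 3.3: epoch < total epochs?
        for batch in train_batches:          # Steps 3.4-3.5: all batches used
            model.sgd_step(batch)
        accuracies.append(model.validate(val_batches))  # Step 3.6
        model.save(epoch)
    return accuracies
```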
step 4: model deployment and prediction
Perform invoice target detection and angle detection on an A4 image with the trained model to obtain the position, category and angle information of each invoice. Crop each invoice region from the A4 image and rotation-correct it with the detected rotation angle to obtain a corrected invoice image. Finally, send the corrected invoice image and its category information to the invoice recognition module to obtain the structured recognition data of each invoice category.
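The crop-and-correct step can be sketched as follows. The nearest-neighbour rotation is a minimal stand-in for a library routine such as `cv2.warpAffine`, with the output canvas kept at the input size for simplicity; correcting an invoice detected at angle a means rotating by -a (the sign convention is an assumption):

```python
import numpy as np

def crop_invoice(page, box):
    """Cut one detected invoice (box = x0, y0, x1, y1) out of the A4 page."""
    x0, y0, x1, y1 = box
    return page[y0:y1, x0:x1]

def rotate_image(img, angle_deg):
    """Rotate an image about its centre by angle_deg (image coordinates,
    y down) via nearest-neighbour inverse mapping."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.radians(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: find the source pixel for every output pixel
    sx = np.cos(-a) * (xs - cx) - np.sin(-a) * (ys - cy) + cx
    sy = np.sin(-a) * (xs - cx) + np.cos(-a) * (ys - cy) + cy
    sxi, syi = np.rint(sx).astype(int), np.rint(sy).astype(int)
    out = np.zeros_like(img)
    ok = (sxi >= 0) & (sxi < w) & (syi >= 0) & (syi < h)
    out[ok] = img[syi[ok], sxi[ok]]
    return out
```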
Claims (5)
1. An invoice image target detection and angle detection method based on deep learning, characterised in that a model is trained on actually annotated invoice data and invoice segmentation, classification and angle detection are completed with a single trained model; wherein the actual invoice annotation data comprise: the coordinates of the rectangular box containing each invoice region on the invoice image, the invoice category, and the rotation angle of the invoice sub-region.
2. The deep-learning-based invoice image target detection and angle detection method as claimed in claim 1, characterised in that the model training comprises the following steps:
S1: constructing a deep network model and setting the model parameters, which comprise network parameters and training parameters;
S2: loading the training data and validation data;
S3: judging whether the current epoch is less than the total number of epochs; if so, go to step S4, otherwise end training;
S4: taking a batch (batch-size) of invoice pictures from the training set, training the model, and updating the model parameters with SGD;
S5: judging whether all pictures in the training set have been used; if so, go to step S6, otherwise go to step S4;
S6: verifying the model accuracy on the validation set, saving the trained model, and going to step S3.
3. The deep-learning-based invoice image target detection and angle detection method as claimed in claim 2, characterised in that before model training the processing of the actual invoice annotation data comprises label-format conversion and data division; during label-format conversion the annotation data are converted into the COCO data format, and an angle field is added for each invoice target in the final converted COCO annotation file.
4. The deep-learning-based invoice image target detection and angle detection method as claimed in claim 3, characterised in that during data division the training data are classified by the invoice categories in the annotated invoice images; if an invoice image contains invoices of several categories, the number of invoices of each category in the image is counted; if the counts are equal, the image is added to each corresponding category's invoice set, otherwise it is added to the set of the category with the larger count.
5. The deep-learning-based invoice image target detection and angle detection method as claimed in claim 2, characterised in that the deep network model is built on Faster R-CNN; during construction a branch is added to the ROIHead part of Faster R-CNN, parallel to the coordinate-box regression branch and the invoice classification branch, to directly regress and predict the rotation angle of the target region; the loss function of this branch is the mean squared error, the ground-truth and predicted angles are normalised during training, and the overall loss function combines the Faster R-CNN network loss and the angle regression branch loss;
when setting the network parameters, the pixel mean and standard deviation std of the training image data are computed for the RPNHead module of Faster R-CNN, and the training data are normalised with the obtained image pixel statistics; the number of classification categories of the box_head and mask_head modules of Faster R-CNN is modified and set to the number of actually annotated invoice categories.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011379828.4A CN112347994B (en) | 2020-11-30 | 2020-11-30 | Invoice image target detection and angle detection method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112347994A true CN112347994A (en) | 2021-02-09 |
CN112347994B CN112347994B (en) | 2022-04-22 |
Family
ID=74366196
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011379828.4A Active CN112347994B (en) | 2020-11-30 | 2020-11-30 | Invoice image target detection and angle detection method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112347994B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115239700A (en) * | 2022-08-22 | 2022-10-25 | 北京医准智能科技有限公司 | Spine Cobb angle measurement method, device, equipment and storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170091950A1 (en) * | 2015-09-30 | 2017-03-30 | Fotonation Limited | Method and system for tracking an object |
CN107169488A (en) * | 2017-05-03 | 2017-09-15 | 四川长虹电器股份有限公司 | A kind of correction system and antidote of bill scan image |
CN108921166A (en) * | 2018-06-22 | 2018-11-30 | 深源恒际科技有限公司 | Medical bill class text detection recognition method and system based on deep neural network |
KR20190007290A (en) * | 2017-07-12 | 2019-01-22 | 동국대학교 산학협력단 | Device and method for multi-national banknote classification based on convolutional neural network |
CN109886257A (en) * | 2019-01-30 | 2019-06-14 | 四川长虹电器股份有限公司 | Using the method for deep learning correction invoice picture segmentation result in a kind of OCR system |
CN109977957A (en) * | 2019-03-04 | 2019-07-05 | 苏宁易购集团股份有限公司 | A kind of invoice recognition methods and system based on deep learning |
CN110674815A (en) * | 2019-09-29 | 2020-01-10 | 四川长虹电器股份有限公司 | Invoice image distortion correction method based on deep learning key point detection |
CN110689658A (en) * | 2019-10-08 | 2020-01-14 | 北京邮电大学 | Taxi bill identification method and system based on deep learning |
CN110991230A (en) * | 2019-10-25 | 2020-04-10 | 湖北富瑞尔科技有限公司 | Method and system for detecting ships by remote sensing images in any direction based on rotating candidate frame |
CN111626279A (en) * | 2019-10-15 | 2020-09-04 | 西安网算数据科技有限公司 | Negative sample labeling training method and highly-automated bill identification method |
CN111784587A (en) * | 2020-06-30 | 2020-10-16 | 杭州师范大学 | Invoice photo position correction method based on deep learning network |
Non-Patent Citations (4)
Title |
---|
YUANYUAN ZHOU et al., "Rotational Objects Recognition and Angle Estimation via Kernel-Mapping CNN", IEEE Access *
NIU Xiaoming et al., "A Survey of Image and Text Recognition Technology", Chinese Journal of Stereology and Image Analysis *
HU Li, "Research on Image Analysis Technology for High-Speed Railway Catenary Insulators Based on Deep Learning", China Master's Theses Full-text Database, Engineering Science and Technology II *
ZHENG Zubing et al., "An Intelligent Medical Bill Recognition Method with a Dual-Network Model", Computer Engineering and Applications *
Also Published As
Publication number | Publication date |
---|---|
CN112347994B (en) | 2022-04-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3699819A1 (en) | Apparatus and method for training classification model and apparatus for performing classification by using classification model | |
CN107067044B (en) | Financial reimbursement complete ticket intelligent auditing system | |
CN109409252A (en) | A kind of traffic multi-target detection method based on modified SSD network | |
CN109241894A (en) | A kind of specific aim ticket contents identifying system and method based on form locating and deep learning | |
CN101576956B (en) | On-line character detection method based on machine vision and system thereof | |
CN108764302B (en) | Bill image classification method based on color features and bag-of-words features | |
CN107704512A (en) | Financial product based on social data recommends method, electronic installation and medium | |
CN105261109A (en) | Identification method of prefix letter of banknote | |
CN110516664A (en) | Bill identification method and device, electronic equipment and storage medium | |
CN107784321A (en) | Numeral paints this method for quickly identifying, system and computer-readable recording medium | |
CN111767908B (en) | Character detection method, device, detection equipment and storage medium | |
CN112347994B (en) | Invoice image target detection and angle detection method based on deep learning | |
CN111881958A (en) | License plate classification recognition method, device, equipment and storage medium | |
CN112115934A (en) | Bill image text detection method based on deep learning example segmentation | |
CN114241469A (en) | Information identification method and device for electricity meter rotation process | |
CN112184679A (en) | YOLOv 3-based wine bottle flaw automatic detection method | |
CN113469005A (en) | Recognition method of bank receipt, related device and storage medium | |
CN116524297B (en) | Weak supervision learning training method based on expert feedback | |
CN109993171B (en) | License plate character segmentation method based on multiple templates and multiple proportions | |
CN111814801A (en) | Method for extracting labeled strings in mechanical diagram | |
Peng et al. | Real-time traffic sign text detection based on deep learning | |
CN116612479A (en) | Lightweight bill OCR (optical character recognition) method and system | |
Calefati et al. | Reading meter numbers in the wild | |
CN115424280A (en) | Handwritten digit detection method based on improved Faster-RCNN | |
CN115249319A (en) | Method for detecting sun dark stripes in full-sun-surface image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||