CN112686238A - Deep learning-based shipping bill identification method - Google Patents
- Publication number
- CN112686238A (application CN202011517623.8A)
- Authority
- CN
- China
- Prior art keywords
- shipping
- shipping bill
- bill
- picture
- recognition model
- Prior art date
- Legal status
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides a deep learning-based shipping bill identification method, belonging to the technical field of shipping bill identification and comprising the following steps: step S10, acquiring and labeling shipping bill pictures to generate a labeled data set; step S20, creating a shipping bill type recognition model and a character positioning recognition model, and training the two models with the labeled data set; step S30, identifying the shipping bill pictures to be identified with the trained models to generate electronic shipping bills, associating each electronic shipping bill with its shipping bill picture, and performing assisted correction on each unassociated shipping bill picture; and step S40, labeling each unassociated shipping bill picture, adding it to the labeled data set, and further training and optimizing the two models. The invention greatly improves the efficiency and accuracy of shipping bill identification.
Description
Technical Field
The invention relates to the technical field of shipping bill identification, in particular to a shipping bill identification method based on deep learning.
Background
With the development of information technology, more and more scenarios use computers or mobile devices for information collection, recording, and processing; information technology can greatly reduce labor costs and improve work efficiency.
Digitizing shipping bills likewise improves work efficiency, but for various country-specific reasons, shipping bills at certain stages remain undigitized, so orders cannot be processed automatically by matching software at stages such as booking, confirmation, allocation, and delivery.
For shipping bills that have not been digitized, the traditional approach is manual entry of content including company information, order information, and address information. However, because the number of shipping bills is large, many data-entry staff are required, efficiency is low, manual accuracy cannot be guaranteed, and information from many shipping bills is often lost through misjudgment in the system.
Therefore, providing a deep learning-based shipping bill identification method that improves the efficiency and accuracy of shipping bill identification has become an urgent problem to solve.
Disclosure of Invention
The invention aims to provide a shipping bill identification method based on deep learning, so that the efficiency and the accuracy of shipping bill identification are improved.
The invention is realized by the following steps: a shipping bill identification method based on deep learning comprises the following steps:
Step S10, acquiring a large number of shipping bill pictures, labeling each shipping bill picture, and generating a labeled data set;
Step S20, creating a shipping bill type recognition model and a character positioning recognition model, and training each model with the labeled data set;
Step S30, identifying the shipping bill pictures to be identified with the trained shipping bill type recognition model and character positioning recognition model to generate electronic shipping bills, associating each electronic shipping bill with its shipping bill picture, and performing assisted correction on each unassociated shipping bill picture to be identified;
Step S40, labeling each unassociated shipping bill picture to be identified, adding it to the labeled data set, and training and optimizing the shipping bill type recognition model and the character positioning recognition model.
Further, the step S10 specifically includes:
Step S11, acquiring a large number of shipping bill pictures, and preprocessing each shipping bill picture to generate preprocessed pictures;
Step S12, extracting feature vectors from the preprocessed pictures using the SIFT algorithm;
Step S13, classifying the preprocessed pictures based on the feature vectors using the DBSCAN clustering algorithm;
Step S14, manually labeling the regions of interest of the preprocessed pictures in each category to generate the labeled data set.
Further, in the step S11, the preprocessing each waybill picture specifically includes:
and carrying out gray level conversion and preprocessing with uniform size on each shipping list picture.
Further, in step S14, the region of interest at least includes a company LOGO region, an order number region, and an address information region.
Further, in the step S20, the shipping bill type recognition model adopts the YOLO framework, and the shipping bill type recognition model includes a darkNet19 network module and an average pooling module; the activation function of the darkNet19 network module adopts a tanh function.
Further, in the step S20, the character positioning recognition model includes a VGG19 network, a convolution conversion layer, a bidirectional LSTM network, and a CNN + CTC-loss network;
the VGG19 network inputs the extracted feature image into a convolution conversion layer, the convolution conversion layer adjusts the size of the feature image, then inputs the feature image into a bidirectional LMST network for context information learning, positions the characters, and then inputs the characters into a CNN + CTCloss network for identifying the positioned characters.
Further, in the step S20, the training of the shipping bill type recognition model and the character positioning recognition model by using the labeled data set specifically includes:
Setting a success-rate threshold, and dividing the labeled data set into a training set and a validation set at a preset ratio; training the shipping bill type recognition model and the character positioning recognition model with the training set, validating the trained models with the validation set, and judging whether the recognition success rate exceeds the threshold; if so, training is complete; if not, expanding the sample size of the training set and continuing training.
Further, in the step S30, the performing auxiliary correction on each unassociated waybill picture to be identified specifically includes:
and respectively calculating the similarity between each unrelated shipping note picture to be identified and the electronic shipping note by using an edit distance algorithm, and selecting the electronic shipping note with the highest similarity to carry out artificial auxiliary correction.
Further, the step S40 is specifically:
and marking the areas with identification errors in the unassociated shipping list pictures to be identified, adding the marked data sets, training and optimizing a shipping list type identification model and a character positioning identification model by using the expanded marked data sets, and automatically replacing the shipping list type identification model and the character positioning identification model with higher identification success rate.
The invention has the advantages that:
1. A labeled data set is generated by labeling a large number of shipping bill pictures; the shipping bill type recognition model and the character positioning recognition model are trained with this data set and then used to identify the shipping bill pictures to be identified. Compared with the traditional approach of identifying shipping bills by manual visual inspection, the efficiency and accuracy of shipping bill identification are greatly improved.
2. Feature vectors of the preprocessed pictures are extracted with the SIFT algorithm, the pictures are classified based on these vectors with the DBSCAN clustering algorithm, and the regions of interest of each category are then labeled manually. This removes the workload of manual classification and greatly improves the efficiency of labeling shipping bill pictures.
3. Replacing the traditional ReLU activation function of the darkNet19 network module with the tanh function better preserves negative-valued features and greatly improves the generalization ability of the shipping bill type recognition model; replacing the traditional max pooling module with an average pooling module avoids losing information from surrounding features, preserves the integrity of the feature information, better connects the context of the shipping bill picture, and thereby improves the recognition accuracy of the shipping bill type recognition model.
4. Combining the VGG19 network with the bidirectional LSTM network allows the character positioning recognition model to train better on the context of the features, further improving its recognition accuracy.
5. The similarity between each unassociated shipping bill picture to be identified and the electronic shipping bills is calculated with an edit distance algorithm, and the electronic shipping bill with the highest similarity is selected for manual assisted correction; compared with manual global search and judgment, this greatly reduces the workload of error correction.
6. Labeling each unassociated shipping bill picture to be identified expands the labeled data set, which is then used to train and optimize the shipping bill type recognition model and the character positioning recognition model, further improving their recognition accuracy and their adaptability to new scenarios.
Drawings
The invention will be further described below with reference to examples and the accompanying drawings.
FIG. 1 is a flow chart of a method for identifying a shipping bill based on deep learning according to the present invention.
Detailed Description
Referring to fig. 1, a preferred embodiment of a shipping bill identification method based on deep learning according to the present invention includes the following steps:
Step S10, acquiring a large number of shipping bill pictures, labeling each shipping bill picture, and generating a labeled data set;
Step S20, creating a shipping bill type recognition model and a character positioning recognition model, and training each model with the labeled data set;
Step S30, identifying the shipping bill pictures to be identified with the trained shipping bill type recognition model and character positioning recognition model to generate electronic shipping bills, associating each electronic shipping bill with its shipping bill picture, and performing assisted correction on each unassociated shipping bill picture to be identified;
Step S40, labeling each unassociated shipping bill picture to be identified, adding it to the labeled data set, and training and optimizing the shipping bill type recognition model and the character positioning recognition model.
Traditional optical character recognition cannot restrict recognition to the regions of interest and its accuracy is low; the present method overcomes these defects of optical character recognition.
The step S10 specifically includes:
Step S11, acquiring a large number of shipping bill pictures, and preprocessing each shipping bill picture to generate preprocessed pictures;
Step S12, extracting feature vectors from the preprocessed pictures using the SIFT algorithm;
Step S13, classifying the preprocessed pictures based on the feature vectors using the DBSCAN clustering algorithm;
Step S14, manually labeling the regions of interest of the preprocessed pictures in each category to generate the labeled data set.
Deep-learning-based image recognition requires a large amount of labeled data as training material in the early stage. Manually collecting template images company by company would be time-consuming and labor-intensive, and because neither the number of companies nor the number of template types per company can be determined in advance, manual collection cannot meet the demand for training material. Therefore, feature vectors of the preprocessed pictures are extracted with the SIFT algorithm, and the preprocessed pictures are classified based on these vectors with the DBSCAN clustering algorithm.
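By way of illustration only, the clustering idea behind steps S12 and S13 can be sketched in Python. The sketch substitutes short synthetic 2-D vectors for real SIFT descriptors (extracting those would typically use an OpenCV call such as `cv2.SIFT_create`, not shown here) and implements a minimal DBSCAN by hand; it is not the patented implementation:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1              # provisionally noise
            continue
        labels[i] = cluster             # i is a core point of a new cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster     # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:      # j is also a core point: expand
                seeds.extend(jn)
        cluster += 1
    return labels

# Two synthetic "feature vector" groups standing in for SIFT descriptors
vectors = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
           (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
labels = dbscan(vectors, eps=0.5, min_pts=2)
```

Density-based clustering is a natural fit here because, as the paragraph above notes, the number of companies and template types is unknown in advance, and DBSCAN does not require the number of clusters up front.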
In the step S11, the preprocessing of each waybill picture specifically includes:
and carrying out gray level conversion and preprocessing with uniform size on each shipping list picture.
In step S14, the region of interest at least includes a company LOGO region, an order number region, and an address information region.
In step S20, the shipping bill type recognition model adopts YOLO framework, and the shipping bill type recognition model includes a darkNet19 network module and an average pooling module; the activation function of the darkNet19 network module adopts a tanh function.
Before a shipping bill is identified, it is necessary to determine which shipping company it belongs to and which of that company's bill types it is; therefore, the shipping bill type recognition model must locate the LOGO region and identify the LOGO.
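Locating a LOGO region is a detection problem, and detectors in the YOLO family score a predicted box against a labeled box by intersection-over-union (IoU). The helper below is a generic illustration of that overlap measure, not code from the patent:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes, the overlap
    measure YOLO-style detectors use to match a predicted LOGO region
    against its ground-truth annotation."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # zero if boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 5, 15, 15))     # partial overlap
```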
Because the original YOLO framework is prone to misidentification when recognizing company LOGOs, it is improved here: replacing the traditional ReLU activation function of the darkNet19 network module with the tanh function better preserves negative-valued features and greatly improves the generalization ability of the shipping bill type recognition model, and replacing the traditional max pooling module with an average pooling module avoids losing information from surrounding features, preserves the integrity of the feature information, better connects the context of the shipping bill picture, and thereby improves the recognition accuracy of the shipping bill type recognition model.
The formula for the tanh function is as follows:
tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x));
the formula for the RELU function is as follows:
f(x)=max(0,x);
As the two formulas show, the tanh function retains negative feature values in graded form, which improves the model's ability to learn from them; the ReLU function passes through only values greater than 0, so all feature values less than 0 become 0, causing a small loss of the feature information the model can learn.
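The contrast between the two activation functions can be checked numerically; the small script below is purely illustrative:

```python
import math

def tanh(x):
    """tanh(x) = (e^x - e^-x) / (e^x + e^-x), odd and bounded in (-1, 1)."""
    return math.tanh(x)

def relu(x):
    """ReLU passes positive values and zeroes out everything below 0."""
    return max(0.0, x)

# Negative inputs: tanh keeps a graded negative response, ReLU discards it
acts = [(x, tanh(x), relu(x)) for x in (-2.0, -0.5, 0.0, 0.5, 2.0)]
```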
The formula for the average pooling module is as follows:
val = (1 / (J - n + 1)) · Σ_{i=n}^{J} f_i;
wherein val represents the feature value after the average pooling operation, n and J denote the start and end subscripts of the features processed in each pooling window, and f_i are the corresponding feature values.
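The difference between average pooling and max pooling described above can be illustrated on a 1-D feature sequence; this sketch is illustrative and not the patented module:

```python
def average_pool(features, window):
    """Average pooling over non-overlapping windows: each output is the
    mean of its window, so every feature value contributes."""
    return [sum(features[i:i + window]) / window
            for i in range(0, len(features) - window + 1, window)]

def max_pool(features, window):
    """Max pooling keeps only the largest value per window and discards
    the information carried by the smaller neighbours."""
    return [max(features[i:i + window])
            for i in range(0, len(features) - window + 1, window)]

feats = [1.0, 3.0, -2.0, 6.0]
avg = average_pool(feats, 2)    # retains contributions from every value
mx = max_pool(feats, 2)         # drops 1.0 and -2.0 entirely
```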
In the step S20, the character positioning recognition model includes a VGG19 network, a convolution conversion layer, a bidirectional LSTM network, and a CNN + CTC-loss network;
the VGG19 network inputs the extracted feature image into a convolution conversion layer, the convolution conversion layer adjusts the size of the feature image, then inputs the feature image into a bidirectional LMST network for context information learning, positions the characters, and then inputs the characters into a CNN + CTCloss network for identifying the positioned characters. Through setting up the convolution conversion layer for the character positioning identification model can learn more information in the training set, promotes the character positioning effect.
Since a shipping bill picture contains a large amount of text information, such as cargo weight, size, place of shipment, and order generation date, each piece of text information must be located and areas of different text information must be distinguished. For example, the delivery area of a shipping bill picture must not place date information inside its information frame; each independent piece of information needs its own bounding frame to facilitate subsequent recognition. Considering that the image contains context information (for example, during area positioning the high-frequency word "VESSEL" appears near the voyage number, and similarly "PORT OF LOADING" appears near the place of shipment), the character positioning recognition model is set up to locate and recognize the characters.
The formulas for the bidirectional LSTM network are as follows:
Forget gate: f_t = σ(W_f · [h_{t-1}, x_t] + b_f);
Memory gate: i_t = σ(W_i · [h_{t-1}, x_t] + b_i);
Temporary cell state: C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C);
Cell state: C_t = f_t * C_{t-1} + i_t * C̃_t;
wherein h_{t-1} represents the hidden state at the previous time, x_t represents the input at the current time, f_t represents the output value of the forget gate, i_t represents the output value of the memory gate, C̃_t represents the temporary cell state, C_{t-1} represents the cell state at the previous time, and C_t represents the cell state at the current time.
The role of the bidirectional LSTM network in the character positioning recognition model is illustrated by the following example:
assume that during the course of the shipping order picture-based training, if there is an "order date: 10/2020 "," flight information: NORDPUMA "," commercial weight: 2100KG ' and other information, the characteristics extracted by the VGG19 network are respectively ' A1 ', ' A2 ', ' A3 '; the bidirectional LMST network respectively takes the vectors of "a 1", "a 2" and "A3" as input and then the features of the vectors after forward calculation are "B1", "B2" and "B3", and simultaneously takes the vectors of "A3", "a 2" and "a 1" as input and then the features of the vectors after backward calculation are "C1", "C2" and "C3". The bi-directional LMST network is ultimately characterized by { [ "B1", "C1" ], [ "B2", "C2" ], [ "B3", "C3" ] }. The method has the advantages that the position relation before and after the shipping bill picture and the content represented by the position relation can be well learned in the shipping bill picture, and the identification accuracy can be well improved.
In the step S20, the training of the shipping bill type recognition model and the character positioning recognition model by using the labeled data set specifically includes:
Setting a success-rate threshold, and dividing the labeled data set into a training set and a validation set at a preset ratio; training the shipping bill type recognition model and the character positioning recognition model with the training set, validating the trained models with the validation set, and judging whether the recognition success rate exceeds the threshold; if so, training is complete; if not, expanding the sample size of the training set and continuing training.
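The split-validate-expand training loop can be sketched as follows; the `evaluate` and `expand` callables are hypothetical stand-ins for the real train-and-validate step and for collecting additional labeled samples:

```python
import random

def split_dataset(samples, train_fraction=0.8, seed=0):
    """Shuffle the labeled data set and split it into training and
    validation sets at a preset ratio."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

def train_until_threshold(train_set, val_set, evaluate, threshold=0.9,
                          max_rounds=5, expand=None):
    """Train and validate; keep expanding the training set until the
    validation success rate exceeds the threshold (as in step S20)."""
    for _ in range(max_rounds):
        rate = evaluate(train_set, val_set)    # hypothetical train+validate step
        if rate > threshold:
            return rate
        if expand:
            train_set = train_set + expand()   # enlarge the sample size
    return rate

data = list(range(100))
train, val = split_dataset(data, 0.8)

# Toy evaluate: success rate grows with training-set size (hypothetical)
rate = train_until_threshold(
    train, val,
    evaluate=lambda tr, va: len(tr) / (len(tr) + 20),
    threshold=0.9,
    expand=lambda: list(range(100, 200)))
```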
In the step S30, the performing auxiliary correction on each unassociated waybill picture to be identified specifically includes:
and respectively calculating the similarity between each unrelated shipping note picture to be identified and the electronic shipping note by using an edit distance algorithm, and selecting the electronic shipping note with the highest similarity to carry out artificial auxiliary correction.
During the acquisition of shipping bill pictures, good lighting conditions and image sharpness cannot always be guaranteed, so some images are severely distorted or poorly lit and not all of their content can be identified successfully. For example, an actual order number of "SHWW002637" may be identified as "SHWW002631" because of these problems. Traditionally, unidentified shipping bill pictures must be checked manually one by one, and unmatched information must be associated by manual entry; this workload is large, cannot meet the demands of real scenarios, and cannot guarantee timeliness in urgent cases. Therefore, similarity is calculated with an edit distance algorithm and manual assisted correction is then performed.
The formula for the edit distance algorithm is as follows:
lev_{a,b}(i, j) = max(i, j), if min(i, j) = 0; otherwise
lev_{a,b}(i, j) = min( lev_{a,b}(i-1, j) + 1, lev_{a,b}(i, j-1) + 1, lev_{a,b}(i-1, j-1) + 1(a_i ≠ b_j) );
where i and j represent the subscripts of the strings a and b, respectively, and 1(a_i ≠ b_j) equals 1 when the characters differ and 0 when they match.
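The edit distance (Levenshtein distance) used for assisted correction can be computed with the standard dynamic-programming form of this recurrence; the order-number strings reuse the example above, while `best_match` is an illustrative helper not named in the patent:

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i                                # delete all of a[:i]
    for j in range(len(b) + 1):
        dp[0][j] = j                                # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(a)][len(b)]

def best_match(ocr_text, candidates):
    """Pick the electronic shipping bill most similar to the OCR result."""
    return min(candidates, key=lambda c: edit_distance(ocr_text, c))

# Misread order number from the example: '7' recognised as '1'
match = best_match("SHWW002631", ["SHWW002637", "ABCD999999"])
```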
The step S40 specifically includes:
and marking the areas with identification errors in the unassociated shipping list pictures to be identified, adding the marked data sets, training and optimizing a shipping list type identification model and a character positioning identification model by using the expanded marked data sets, and automatically replacing the shipping list type identification model and the character positioning identification model with higher identification success rate.
In summary, the invention has the advantages that:
1. A labeled data set is generated by labeling a large number of shipping bill pictures; the shipping bill type recognition model and the character positioning recognition model are trained with this data set and then used to identify the shipping bill pictures to be identified. Compared with the traditional approach of identifying shipping bills by manual visual inspection, the efficiency and accuracy of shipping bill identification are greatly improved.
2. Feature vectors of the preprocessed pictures are extracted with the SIFT algorithm, the pictures are classified based on these vectors with the DBSCAN clustering algorithm, and the regions of interest of each category are then labeled manually. This removes the workload of manual classification and greatly improves the efficiency of labeling shipping bill pictures.
3. Replacing the traditional ReLU activation function of the darkNet19 network module with the tanh function better preserves negative-valued features and greatly improves the generalization ability of the shipping bill type recognition model; replacing the traditional max pooling module with an average pooling module avoids losing information from surrounding features, preserves the integrity of the feature information, better connects the context of the shipping bill picture, and thereby improves the recognition accuracy of the shipping bill type recognition model.
4. Combining the VGG19 network with the bidirectional LSTM network allows the character positioning recognition model to train better on the context of the features, further improving its recognition accuracy.
5. The similarity between each unassociated shipping bill picture to be identified and the electronic shipping bills is calculated with an edit distance algorithm, and the electronic shipping bill with the highest similarity is selected for manual assisted correction; compared with manual global search and judgment, this greatly reduces the workload of error correction.
6. Labeling each unassociated shipping bill picture to be identified expands the labeled data set, which is then used to train and optimize the shipping bill type recognition model and the character positioning recognition model, further improving their recognition accuracy and their adaptability to new scenarios.
Although specific embodiments of the invention have been described above, it will be understood by those skilled in the art that the specific embodiments described are illustrative only and are not limiting upon the scope of the invention, and that equivalent modifications and variations can be made by those skilled in the art without departing from the spirit of the invention, which is to be limited only by the appended claims.
Claims (9)
1. A shipping bill identification method based on deep learning is characterized in that: the method comprises the following steps:
Step S10, acquiring a large number of shipping bill pictures, labeling each shipping bill picture, and generating a labeled data set;
Step S20, creating a shipping bill type recognition model and a character positioning recognition model, and training each model with the labeled data set;
Step S30, identifying the shipping bill pictures to be identified with the trained shipping bill type recognition model and character positioning recognition model to generate electronic shipping bills, associating each electronic shipping bill with its shipping bill picture, and performing assisted correction on each unassociated shipping bill picture to be identified;
Step S40, labeling each unassociated shipping bill picture to be identified, adding it to the labeled data set, and training and optimizing the shipping bill type recognition model and the character positioning recognition model.
2. The deep learning-based shipping bill identification method of claim 1, wherein: the step S10 specifically includes:
Step S11, acquiring a large number of shipping bill pictures, and preprocessing each shipping bill picture to generate preprocessed pictures;
Step S12, extracting feature vectors from the preprocessed pictures using the SIFT algorithm;
Step S13, classifying the preprocessed pictures based on the feature vectors using the DBSCAN clustering algorithm;
Step S14, manually labeling the regions of interest of the preprocessed pictures in each category to generate the labeled data set.
3. The deep learning-based shipping bill identification method of claim 2, wherein: in the step S11, the preprocessing of each waybill picture specifically includes:
and carrying out gray level conversion and preprocessing with uniform size on each shipping list picture.
4. The deep learning-based shipping bill identification method of claim 2, wherein: in step S14, the region of interest at least includes a company LOGO region, an order number region, and an address information region.
5. The deep learning-based shipping bill identification method of claim 1, wherein: in step S20, the shipping bill type recognition model adopts YOLO framework, and the shipping bill type recognition model includes a darkNet19 network module and an average pooling module; the activation function of the darkNet19 network module adopts a tanh function.
6. The deep learning-based shipping bill identification method of claim 1, wherein: in the step S20, the character positioning recognition model includes a VGG19 network, a convolution conversion layer, a bidirectional LSTM network, and a CNN + CTC-loss network;
the VGG19 network inputs the extracted feature image into the convolution conversion layer, which adjusts the size of the feature image; the resized features are then input into the bidirectional LSTM network for context learning and character positioning, and finally into the CNN + CTC-loss network, which recognizes the positioned characters.
7. The deep learning-based shipping bill identification method of claim 1, wherein: in the step S20, the training of the shipping bill type recognition model and the character positioning recognition model by using the labeled data set specifically includes:
Setting a success-rate threshold, and dividing the labeled data set into a training set and a validation set at a preset ratio; training the shipping bill type recognition model and the character positioning recognition model with the training set, validating the trained models with the validation set, and judging whether the recognition success rate exceeds the threshold; if so, training is complete; if not, expanding the sample size of the training set and continuing training.
8. The deep learning-based shipping bill identification method of claim 1, wherein: in the step S30, the performing auxiliary correction on each unassociated waybill picture to be identified specifically includes:
and respectively calculating the similarity between each unrelated shipping note picture to be identified and the electronic shipping note by using an edit distance algorithm, and selecting the electronic shipping note with the highest similarity to carry out artificial auxiliary correction.
9. The deep learning-based shipping bill identification method of claim 1, wherein: the step S40 specifically includes:
and marking the areas with identification errors in the unassociated shipping list pictures to be identified, adding the marked data sets, training and optimizing a shipping list type identification model and a character positioning identification model by using the expanded marked data sets, and automatically replacing the shipping list type identification model and the character positioning identification model with higher identification success rate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011517623.8A CN112686238B (en) | 2020-12-21 | 2020-12-21 | Deep learning-based shipping bill identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112686238A true CN112686238A (en) | 2021-04-20 |
CN112686238B CN112686238B (en) | 2023-07-21 |
Family ID: 75449692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011517623.8A Active CN112686238B (en) | 2020-12-21 | 2020-12-21 | Deep learning-based shipping bill identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112686238B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110009057A (en) * | 2019-04-16 | 2019-07-12 | 四川大学 | Graphical verification code recognition method based on deep learning |
CN110472581A (en) * | 2019-08-16 | 2019-11-19 | 电子科技大学 | Cell image analysis method based on deep learning |
CN111178345A (en) * | 2019-05-20 | 2020-05-19 | 京东方科技集团股份有限公司 | Bill analysis method, bill analysis device, computer equipment and medium |
2020-12-21: application CN202011517623.8A filed (granted as CN112686238B, status: Active)
Non-Patent Citations (1)
Title |
---|
游贤: "Research on click-based Chinese character verification code recognition using YOLO V2", China Master's Theses Full-text Database, Information Science and Technology, no. 2020, pages 18-51 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109902622B (en) | Character detection and identification method for boarding check information verification | |
CN112052852B (en) | Character recognition method of handwriting meteorological archive data based on deep learning | |
CN112651289B (en) | Value-added tax common invoice intelligent recognition and verification system and method thereof | |
CN109934255B (en) | Model fusion method suitable for classification and identification of delivered objects of beverage bottle recycling machine | |
WO2020038138A1 (en) | Sample labeling method and device, and damage category identification method and device | |
CN110796131A (en) | Chinese character writing evaluation system | |
CN113780087B (en) | Postal package text detection method and equipment based on deep learning | |
CN113011144A (en) | Form information acquisition method and device and server | |
CN111881958A (en) | License plate classification recognition method, device, equipment and storage medium | |
CN112464925A (en) | Mobile terminal account opening data bank information automatic extraction method based on machine learning | |
CN111652117B (en) | Method and medium for segmenting multiple document images | |
CN110796210A (en) | Method and device for identifying label information | |
CN111340032A (en) | Character recognition method based on application scene in financial field | |
CN114972880A (en) | Label identification method and device, electronic equipment and storage medium | |
CN114463767A (en) | Credit card identification method, device, computer equipment and storage medium | |
CN111914706B (en) | Method and device for detecting and controlling quality of text detection output result | |
CN117437647A (en) | Oracle character detection method based on deep learning and computer vision | |
CN111414917A (en) | Identification method of low-pixel-density text | |
CN112686238B (en) | Deep learning-based shipping bill identification method | |
CN116363655A (en) | Financial bill identification method and system | |
CN112232288A (en) | Satellite map target identification method and system based on deep learning | |
CN111950550A (en) | Vehicle frame number identification system based on deep convolutional neural network | |
CN114637849B (en) | Legal relation cognition method and system based on artificial intelligence | |
CN112950749B (en) | Handwriting picture generation method based on generation countermeasure network | |
CN116229493B (en) | Cross-modal picture text named entity recognition method and system and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||