CN112686238B - Deep learning-based shipping bill identification method - Google Patents

Deep learning-based shipping bill identification method

Info

Publication number
CN112686238B
CN112686238B (application CN202011517623.8A)
Authority
CN
China
Prior art keywords
shipping bill
shipping
model
picture
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011517623.8A
Other languages
Chinese (zh)
Other versions
CN112686238A (en)
Inventor
冯广辉
王雷
朱坚
陆向东
Current Assignee
Fujia Newland Software Engineering Co., Ltd.
Original Assignee
Fujia Newland Software Engineering Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Fujia Newland Software Engineering Co., Ltd.
Priority to CN202011517623.8A
Publication of CN112686238A
Application granted
Publication of CN112686238B
Legal status: Active

Links

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The invention provides a deep learning-based shipping bill identification method, belonging to the technical field of shipping bill identification and comprising the following steps: Step S10, acquiring shipping bill pictures and labeling them to generate a labeled data set; Step S20, creating a shipping bill type recognition model and a text positioning recognition model, and training the two recognition models respectively with the labeled data set; Step S30, recognizing the shipping bill pictures to be identified with the trained shipping bill type recognition model and text positioning recognition model, generating electronic shipping bills, associating each electronic shipping bill with a shipping bill picture to be identified, and performing assisted correction on each unassociated shipping bill picture to be identified; Step S40, labeling the unassociated shipping bill pictures to be identified, adding them to the labeled data set, and retraining and optimizing the shipping bill type recognition model and the text positioning recognition model. The invention has the advantage of greatly improving the efficiency and accuracy of shipping bill identification.

Description

Deep learning-based shipping bill identification method
Technical Field
The invention relates to the technical field of shipping bill identification, in particular to a shipping bill identification method based on deep learning.
Background
With the development of information technology, more and more scenarios use computers or mobile devices for information recording and processing, which can save labor cost and improve working efficiency to a great extent.
Digitizing the shipping bill improves working efficiency, but for country-specific reasons the shipping bill remains non-digitized at some stages, so matching software cannot automatically process an order through the booking, confirmation, distribution, delivery, and similar stages.
Non-digitized shipping bills are entered manually, the entered content including the company information, order information, and address information in the bill. However, because the number of waybills is large, many data entry operators are required; the efficiency is low, manual accuracy cannot be guaranteed, and information from many waybills is often lost through misjudgment.
Therefore, how to provide a deep-learning-based shipping bill recognition method that improves the efficiency and accuracy of shipping bill recognition has become a problem to be solved urgently.
Disclosure of Invention
The invention aims to solve the technical problem of providing a shipping bill identification method based on deep learning, which can improve the efficiency and accuracy of shipping bill identification.
The invention is realized in the following way: a shipping bill identification method based on deep learning comprises the following steps:
Step S10, acquiring a large number of shipping bill pictures, labeling each shipping bill picture, and generating a labeled data set;
Step S20, creating a shipping bill type recognition model and a text positioning recognition model, and training the shipping bill type recognition model and the text positioning recognition model respectively with the labeled data set;
Step S30, recognizing the shipping bill pictures to be identified with the trained shipping bill type recognition model and text positioning recognition model, generating electronic shipping bills, associating each electronic shipping bill with a shipping bill picture to be identified, and performing assisted correction on each unassociated shipping bill picture to be identified;
Step S40, labeling each unassociated shipping bill picture to be identified, adding it to the labeled data set, and training and optimizing the shipping bill type recognition model and the text positioning recognition model.
Further, the step S10 specifically includes:
s11, acquiring a large number of shipping bill pictures, preprocessing each shipping bill picture, and generating preprocessed pictures;
s12, extracting the feature vector of the preprocessed picture by using a SIFT algorithm;
step S13, classifying the preprocessed pictures based on the feature vectors by using a DBSCAN clustering algorithm;
and S14, manually labeling the interested areas of each preprocessed picture respectively to generate a labeling data set.
Further, in the step S11, the preprocessing of each of the shipping bill pictures specifically includes:
and carrying out gray level conversion and preprocessing of uniform size on each shipping bill picture.
Further, in the step S14, the region of interest includes at least a corporate LOGO region, an order number region, and an address information region.
Further, in the step S20, the shipping bill type recognition model adopts the YOLO framework and includes a Darknet-19 network module and an average pooling module; the activation function of the Darknet-19 network module is the tanh function.
Further, in the step S20, the text positioning recognition model includes a VGG19 network, a convolution conversion layer, a bidirectional LSTM network, and a CNN+CTC loss network;
the VGG19 network inputs the extracted feature maps into the convolution conversion layer, which adjusts their size; the feature maps are then input into the bidirectional LSTM network for contextual information learning and text positioning, after which the positioned text is input to the CNN+CTC loss network for recognition.
Further, in the step S20, training the shipping bill type recognition model and the text positioning recognition model with the labeled data set specifically comprises:
setting a success-rate threshold, and dividing the labeled data set into a training set and a validation set according to a preset ratio; training the shipping bill type recognition model and the text positioning recognition model with the training set, verifying the trained models with the validation set, and judging whether the recognition success rate is greater than the threshold; if so, training is complete; if not, the sample size of the training set is expanded and training continues.
Further, in the step S30, the performing auxiliary correction on each of the unassociated to-be-identified shipping bill pictures specifically includes:
and respectively calculating the similarity between the unassociated shipping bill pictures to be identified and the electronic shipping bill by using an edit distance algorithm, and selecting the electronic shipping bill with the highest similarity for manual auxiliary correction.
Further, the step S40 specifically includes:
labeling the erroneous regions in the unassociated shipping bill pictures to be identified, adding them to the labeled data set, training and optimizing the shipping bill type recognition model and the text positioning recognition model with the expanded labeled data set, and automatically replacing the old models when the retrained models achieve a higher recognition success rate.
The invention has the advantages that:
1. By labeling a large number of shipping bill pictures to generate a labeled data set, training the shipping bill type recognition model and the text positioning recognition model with it, and recognizing the shipping bill pictures to be identified with the trained models, the efficiency and accuracy of shipping bill recognition are greatly improved.
2. The feature vectors of the preprocessed pictures are extracted with the SIFT algorithm, the preprocessed pictures are classified based on those vectors with the DBSCAN clustering algorithm, and only then are the regions of interest of the classified pictures labeled manually; this removes the workload of manual classification and greatly improves the efficiency of labeling shipping bill pictures.
3. Replacing the traditional ReLU activation function of the Darknet-19 network module with the tanh function better preserves negative features and greatly improves the generalization capability of the shipping bill type recognition model; using average pooling avoids losing peripheral feature information, ensures the integrity of the feature information, and lets the model better use the context of the shipping bill picture, further improving its recognition precision.
4. Integrating the VGG19 network with the bidirectional LSTM network lets the text positioning recognition model train better on the context of the features, improving its recognition accuracy.
5. Calculating the similarity between each unassociated shipping bill picture to be identified and the electronic shipping bills with an edit distance algorithm, and selecting the most similar electronic shipping bill for manual assisted correction, greatly reduces the error-correction workload compared with manual global search and judgment.
6. Labeling each unassociated shipping bill picture to be identified expands the labeled data set; training and optimizing the shipping bill type recognition model and the text positioning recognition model with the expanded data set further improves their recognition precision and their adaptability to new scenarios.
Drawings
The invention will be further described by way of example embodiments with reference to the accompanying drawings.
FIG. 1 is a flow chart of a deep learning based method of identifying a shipping bill of the present invention.
Detailed Description
Referring to fig. 1, a preferred embodiment of a deep learning-based shipping bill recognition method of the present invention includes the following steps:
Step S10, acquiring a large number of shipping bill pictures, labeling each shipping bill picture, and generating a labeled data set;
Step S20, creating a shipping bill type recognition model and a text positioning recognition model, and training the shipping bill type recognition model and the text positioning recognition model respectively with the labeled data set;
Step S30, recognizing the shipping bill pictures to be identified with the trained shipping bill type recognition model and text positioning recognition model, generating electronic shipping bills, associating each electronic shipping bill with a shipping bill picture to be identified, and performing assisted correction on each unassociated shipping bill picture to be identified;
Step S40, labeling each unassociated shipping bill picture to be identified, adding it to the labeled data set, and training and optimizing the shipping bill type recognition model and the text positioning recognition model.
Traditional optical character recognition cannot restrict recognition to the regions of interest, and its recognition accuracy is low.
The step S10 specifically includes:
s11, acquiring a large number of shipping bill pictures, preprocessing each shipping bill picture, and generating preprocessed pictures;
s12, extracting the feature vector of the preprocessed picture by using a SIFT algorithm;
step S13, classifying the preprocessed pictures based on the feature vectors by using a DBSCAN clustering algorithm;
and S14, manually labeling the interested areas of each preprocessed picture respectively to generate a labeling data set.
Deep-learning-based image recognition requires a large amount of annotated data as training material up front. Manually searching for the template pictures of each company would be time- and labor-consuming, and since neither the number of companies nor the number of template types per company can be determined in advance, manual search cannot meet the demand for training material. Therefore the feature vectors of the preprocessed pictures are extracted with the SIFT algorithm, and the preprocessed pictures are classified based on these vectors with the DBSCAN clustering algorithm.
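The SIFT + DBSCAN classification step can be sketched as follows. This is a minimal pure-Python stand-in: the toy 2-D tuples play the role of the much longer SIFT feature vectors, and `dbscan` mimics the density-based clusterer a real pipeline would take from a library such as scikit-learn.

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: groups nearby feature vectors into clusters and
    labels sparse outliers -1, without a preset number of clusters."""
    labels = [None] * len(points)          # None = unvisited
    dist = lambda a, b: math.dist(a, b)
    neighbors = lambda i: [j for j in range(len(points))
                           if dist(points[i], points[j]) <= eps]
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1                 # noise: template seen too rarely
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] in (None, -1):
                if labels[j] is None:      # expand only from unvisited points
                    nb = neighbors(j)
                    if len(nb) >= min_pts:
                        queue.extend(k for k in nb
                                     if labels[k] is None or labels[k] == -1)
                labels[j] = cluster
        cluster += 1
    return labels

# Two shipping-company "templates" plus one odd scan (toy 2-D feature vectors):
feats = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),   # template A
         (5.0, 5.0), (5.1, 5.0), (5.0, 5.1),   # template B
         (20.0, 20.0)]                          # outlier
print(dbscan(feats, eps=0.5, min_pts=2))  # → [0, 0, 0, 1, 1, 1, -1]
```

DBSCAN suits this setting because the number of companies and template types is unknown in advance, so a fixed-k clusterer could not be configured.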
In the step S11, the preprocessing of each shipping bill picture specifically includes:
and carrying out gray level conversion and preprocessing of uniform size on each shipping bill picture.
In the step S14, the region of interest includes at least a company LOGO region, an order number region, and an address information region.
In the step S20, the shipping bill type recognition model adopts the YOLO framework and includes a Darknet-19 network module and an average pooling module; the activation function of the Darknet-19 network module is the tanh function.
Before recognizing a shipping bill, it is necessary to determine which shipping company it belongs to and which specific template type it is, so the shipping bill type recognition model must locate the LOGO region and recognize the LOGO.
Because the original YOLO framework suffers from false recognition when recognizing company LOGOs, it is improved in two ways: the activation function of the Darknet-19 network module replaces the traditional ReLU function with the tanh function, which better preserves negative features and greatly improves the generalization capability of the shipping bill type recognition model; and the traditional max pooling module is replaced by an average pooling module, which avoids losing peripheral feature information, ensures the integrity of the feature information, and lets the model better use the context of the shipping bill picture, further improving its recognition precision.
The formula of the tanh function is as follows:
f(x) = (e^x - e^(-x)) / (e^x + e^(-x));
the formula of the RELU function is as follows:
f(x)=max(0,x);
from the two formulas, the tanh function has better reservation for negative characteristic processing, so that the learning ability of the model is improved; the RELU function filters out values greater than 0, thus causing feature values less than 0 to fall all to 0, resulting in a small loss of model-learned feature information.
The formula of the average pooling module is as follows:
val = (1 / (J - n + 1)) · Σ_{i=n..J} f_i;
wherein val represents the feature value after the average pooling operation, n and J denote the first and last feature subscript positions of each pooling window, and f_i is the corresponding feature value.
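The contrast between average pooling and the max pooling it replaces can be sketched on a single window of feature values (the window values here are illustrative):

```python
def avg_pool(window):
    """val = (1 / N) * sum(f_i): every feature in the window, including
    negative ones, contributes to the pooled value."""
    return sum(window) / len(window)

def max_pool(window):
    """Keeps only the single largest feature; all others are dropped."""
    return max(window)

window = [1.0, -0.5, 0.25, -0.75]      # features around a region boundary
print(avg_pool(window))  # → 0.0, the balance of all four features
print(max_pool(window))  # → 1.0, the three non-maximal features vanish
```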
In the step S20, the text positioning recognition model includes a VGG19 network, a convolution conversion layer, a bidirectional LSTM network, and a CNN+CTC loss network;
the VGG19 network inputs the extracted feature maps into the convolution conversion layer, which adjusts their size; the feature maps are then input into the bidirectional LSTM network for contextual information learning and text positioning, after which the positioned text is input to the CNN+CTC loss network for recognition. The convolution conversion layer lets the text positioning recognition model learn more information from the training set, improving the text positioning effect.
Since a shipping bill picture contains a large amount of text information, such as cargo weight, size, place of shipment, and order generation date, each piece of text information in the picture must be located and the regions of different pieces distinguished. For example, date information must not be located inside the information frame of the delivery area of the waybill picture; each independent piece of information needs its own position frame to make subsequent recognition convenient. Region positioning can also exploit contextual cues in the picture: for example, the word "VESSEL" frequently appears near the voyage number, and "PORT OF LOADING" similarly near the loading port. A text positioning recognition model is therefore set up to locate and recognize the text.
The formulas of the bidirectional LSTM network are as follows:
Current cell state: C_t = f_t * C_{t-1} + i_t * C~_t;
Temporary cell state: C~_t = tanh(W_C · [h_{t-1}, x_t] + b_C);
Forget gate: f_t = σ(W_f · [h_{t-1}, x_t] + b_f);
Memory gate: i_t = σ(W_i · [h_{t-1}, x_t] + b_i);
wherein h_{t-1} represents the hidden-layer state at the previous moment, x_t the input at the current moment, f_t the output value of the forget gate, i_t the output value of the memory gate, C~_t the temporary cell state value, C_{t-1} the cell state at the previous moment, and C_t the cell state at the current moment.
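A single LSTM cell update following the gate formulas above can be sketched in scalar form; real layers apply weight matrices to the concatenated [h_{t-1}, x_t], so the scalar weights and the `h_prev + x_t` stand-in here are simplifications for illustration only.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(h_prev, x_t, c_prev, w_f, w_i, w_c, b_f, b_i, b_c):
    """One scalar LSTM cell update: forget gate, memory gate, temporary
    cell state, then the current cell state C_t = f_t*C_{t-1} + i_t*C~_t."""
    z = h_prev + x_t                       # stand-in for W . [h_{t-1}, x_t]
    f_t = sigmoid(w_f * z + b_f)           # forget gate
    i_t = sigmoid(w_i * z + b_i)           # memory gate
    c_tilde = math.tanh(w_c * z + b_c)     # temporary cell state
    return f_t * c_prev + i_t * c_tilde    # current cell state

c = lstm_step(h_prev=0.1, x_t=0.5, c_prev=0.2,
              w_f=1.0, w_i=1.0, w_c=1.0, b_f=0.0, b_i=0.0, b_c=0.0)
print(round(c, 4))
```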
The role of the bidirectional LSTM network in the text positioning recognition model can be illustrated as follows:
Assume that a training shipping bill picture contains "Order date: October 2020", "Flight information: NORDPUMA", and "Commodity weight: 2100KG", and that the VGG19 network extracts their features as "A1", "A2", and "A3" respectively. The forward pass of the bidirectional LSTM, reading the vectors in the order "A1", "A2", "A3", produces the features "B1", "B2", "B3"; the backward pass, reading them in the order "A3", "A2", "A1", produces "C3", "C2", "C1". The final output of the bidirectional LSTM is the concatenation {["B1", "C1"], ["B2", "C2"], ["B3", "C3"]}. The advantage is that the network learns the front-to-back positional relationships in the waybill picture and the content they represent, which improves recognition accuracy.
In the step S20, the training of the shipping bill type recognition model and the text positioning recognition model with the labeled data set comprises:
setting a success-rate threshold, and dividing the labeled data set into a training set and a validation set according to a preset ratio; training the shipping bill type recognition model and the text positioning recognition model with the training set, verifying the trained models with the validation set, and judging whether the recognition success rate is greater than the threshold; if so, training is complete; if not, the sample size of the training set is expanded and training continues.
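The split / train / verify / expand loop can be sketched as follows; `train_round`, `evaluate`, and `more_samples` are hypothetical stand-ins for the real model pipelines, and the toy run fakes accuracy as a simple function of training-set size:

```python
import random

def train_until_threshold(dataset, split_ratio, threshold,
                          train_round, evaluate, more_samples, max_rounds=10):
    """Divide the labeled data, train, check the validation success rate,
    and grow the training set until the threshold is exceeded."""
    data = list(dataset)
    random.shuffle(data)
    cut = int(len(data) * split_ratio)
    train_set, val_set = data[:cut], data[cut:]
    for _ in range(max_rounds):
        model = train_round(train_set)
        if evaluate(model, val_set) > threshold:
            return model, len(train_set)        # success rate above threshold
        train_set = train_set + more_samples()  # expand and keep training
    return None, len(train_set)                 # did not converge in time

# Toy run: the "model" is just the training-set size, and "accuracy"
# grows with how much data was trained on.
model, n = train_until_threshold(
    dataset=list(range(100)), split_ratio=0.8, threshold=0.9,
    train_round=len,
    evaluate=lambda m, val: min(1.0, m / 160),
    more_samples=lambda: list(range(40)))
print(model, n)  # → 160 160
```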
In the step S30, the performing auxiliary correction on each unassociated shipping bill picture to be identified specifically includes:
and respectively calculating the similarity between the unassociated shipping bill pictures to be identified and the electronic shipping bill by using an edit distance algorithm, and selecting the electronic shipping bill with the highest similarity for manual auxiliary correction.
Because good illumination cannot be guaranteed during waybill picture acquisition, nor the sharpness of every image, there are many severely distorted or poorly lit images whose content cannot all be successfully recognized. For example, the actual order number "SHWW002637" may be recognized as "SHWW002631" because of these problems. Traditionally, unrecognized shipping bill pictures had to be checked manually one by one, with unmatched information associated through manual entry; but the workload is large, actual scenario demands cannot be met, and good timeliness cannot be guaranteed in urgent cases. Therefore an edit distance algorithm is used to calculate similarity, followed by manual assisted correction.
The formula of the edit distance algorithm is as follows:
lev_{a,b}(i, j) = max(i, j), if min(i, j) = 0;
lev_{a,b}(i, j) = min( lev_{a,b}(i-1, j) + 1, lev_{a,b}(i, j-1) + 1, lev_{a,b}(i-1, j-1) + 1_{(a_i ≠ b_j)} ), otherwise;
where i and j represent subscripts into strings a and b respectively, and the indicator 1_{(a_i ≠ b_j)} is 1 when the characters a_i and b_j differ and 0 otherwise.
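The edit distance calculation and the selection of the most similar electronic shipping bill can be sketched as:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming over the recurrence."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i                        # min(i, j) == 0 boundary cases
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1,                           # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[len(a)][len(b)]

# The misread order number from the example is one substitution away:
print(edit_distance("SHWW002637", "SHWW002631"))  # → 1

# Picking the closest electronic bill for manual assisted correction
# (the candidate numbers are illustrative):
candidates = ["SHWW002637", "SHWW009999", "ABCD000001"]
best = min(candidates, key=lambda s: edit_distance("SHWW002631", s))
print(best)  # → SHWW002637
```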
The step S40 specifically includes:
labeling the erroneous regions in the unassociated shipping bill pictures to be identified, adding them to the labeled data set, training and optimizing the shipping bill type recognition model and the text positioning recognition model with the expanded labeled data set, and automatically replacing the old models when the retrained models achieve a higher recognition success rate.
In summary, the invention has the advantages that:
1. By labeling a large number of shipping bill pictures to generate a labeled data set, training the shipping bill type recognition model and the text positioning recognition model with it, and recognizing the shipping bill pictures to be identified with the trained models, the efficiency and accuracy of shipping bill recognition are greatly improved.
2. The feature vectors of the preprocessed pictures are extracted with the SIFT algorithm, the preprocessed pictures are classified based on those vectors with the DBSCAN clustering algorithm, and only then are the regions of interest of the classified pictures labeled manually; this removes the workload of manual classification and greatly improves the efficiency of labeling shipping bill pictures.
3. Replacing the traditional ReLU activation function of the Darknet-19 network module with the tanh function better preserves negative features and greatly improves the generalization capability of the shipping bill type recognition model; using average pooling avoids losing peripheral feature information, ensures the integrity of the feature information, and lets the model better use the context of the shipping bill picture, further improving its recognition precision.
4. Integrating the VGG19 network with the bidirectional LSTM network lets the text positioning recognition model train better on the context of the features, improving its recognition accuracy.
5. Calculating the similarity between each unassociated shipping bill picture to be identified and the electronic shipping bills with an edit distance algorithm, and selecting the most similar electronic shipping bill for manual assisted correction, greatly reduces the error-correction workload compared with manual global search and judgment.
6. Labeling each unassociated shipping bill picture to be identified expands the labeled data set; training and optimizing the shipping bill type recognition model and the text positioning recognition model with the expanded data set further improves their recognition precision and their adaptability to new scenarios.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that they are illustrative only and not intended to limit the scope of the invention; equivalent modifications and variations made in light of the spirit of the invention are covered by the claims of the present invention.

Claims (8)

1. A shipping bill identification method based on deep learning is characterized in that: the method comprises the following steps:
s10, acquiring a large number of shipping bill pictures, marking each shipping bill picture, and generating a marking data set;
step S20, creating a shipping bill type recognition model and a text positioning recognition model, and respectively training the shipping bill type recognition model and the text positioning recognition model by using the marking data set;
the text positioning recognition model comprises a VGG19 network, a convolution conversion layer, a bidirectional LSTM network, and a CNN+CTC loss network;
the VGG19 network inputs the extracted feature maps into the convolution conversion layer, which adjusts their size; the feature maps are then input into the bidirectional LSTM network for contextual information learning and text positioning, after which the positioned text is input to the CNN+CTC loss network for recognition;
step S30, identifying the shipping bill picture to be identified by using the trained shipping bill type identification model and the character positioning identification model, generating an electronic shipping bill, associating each electronic shipping bill with the shipping bill picture to be identified, and carrying out auxiliary correction on each unassociated shipping bill picture to be identified;
and S40, marking the unassociated shipping bill pictures to be identified, adding the labeling data set, and training and optimizing the shipping bill type identification model and the text positioning identification model.
2. The deep learning-based shipping bill identification method of claim 1, wherein: the step S10 specifically includes:
s11, acquiring a large number of shipping bill pictures, preprocessing each shipping bill picture, and generating preprocessed pictures;
s12, extracting the feature vector of the preprocessed picture by using a SIFT algorithm;
step S13, classifying the preprocessed pictures based on the feature vectors by using a DBSCAN clustering algorithm;
and S14, manually labeling the interested areas of each preprocessed picture respectively to generate a labeling data set.
3. A deep learning based shipping bill identification method as defined in claim 2, wherein: in the step S11, the preprocessing of each shipping bill picture specifically includes:
and carrying out gray level conversion and preprocessing of uniform size on each shipping bill picture.
4. A deep learning based shipping bill identification method as defined in claim 2, wherein: in the step S14, the region of interest includes at least a company LOGO region, an order number region, and an address information region.
5. The deep learning-based shipping bill identification method of claim 1, wherein: in the step S20, the shipping bill type recognition model adopts the YOLO framework and includes a Darknet-19 network module and an average pooling module; the activation function of the Darknet-19 network module is the tanh function.
6. The deep learning-based shipping bill identification method of claim 1, wherein: in the step S20, the training of the shipping bill type recognition model and the text positioning recognition model with the labeled data set comprises:
setting a success-rate threshold, and dividing the labeled data set into a training set and a validation set according to a preset ratio; training the shipping bill type recognition model and the text positioning recognition model with the training set, verifying the trained models with the validation set, and judging whether the recognition success rate is greater than the threshold; if so, training is complete; if not, the sample size of the training set is expanded and training continues.
7. The deep learning-based shipping bill identification method of claim 1, wherein: in the step S30, the performing auxiliary correction on each unassociated shipping bill picture to be identified specifically includes:
and respectively calculating the similarity between the unassociated shipping bill pictures to be identified and the electronic shipping bill by using an edit distance algorithm, and selecting the electronic shipping bill with the highest similarity for manual auxiliary correction.
8. The deep learning-based shipping bill identification method of claim 1, wherein: the step S40 specifically includes:
labeling the erroneous regions in the unassociated shipping bill pictures to be identified, adding them to the labeled data set, training and optimizing the shipping bill type recognition model and the text positioning recognition model with the expanded labeled data set, and automatically replacing the old models when the retrained models achieve a higher recognition success rate.
CN202011517623.8A 2020-12-21 2020-12-21 Deep learning-based shipping bill identification method Active CN112686238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011517623.8A CN112686238B (en) 2020-12-21 2020-12-21 Deep learning-based shipping bill identification method


Publications (2)

Publication Number Publication Date
CN112686238A 2021-04-20
CN112686238B 2023-07-21

Family

ID=75449692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011517623.8A Active CN112686238B (en) 2020-12-21 2020-12-21 Deep learning-based shipping bill identification method

Country Status (1)

Country Link
CN (1) CN112686238B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009057A (en) * 2019-04-16 2019-07-12 四川大学 A kind of graphical verification code recognition methods based on deep learning
CN110472581A (en) * 2019-08-16 2019-11-19 电子科技大学 A kind of cell image analysis method based on deep learning
CN111178345A (en) * 2019-05-20 2020-05-19 京东方科技集团股份有限公司 Bill analysis method, bill analysis device, computer equipment and medium


Non-Patent Citations (1)

Title
Research on Click-based Chinese Character CAPTCHA Recognition with YOLO V2; You Xian; China Master's Theses Full-text Database, Information Science and Technology (No. 02, 2020); pp. 18-51 *

Also Published As

Publication number Publication date
CN112686238A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN109902622B (en) Character detection and identification method for boarding check information verification
EP3432197B1 (en) Method and device for identifying characters of claim settlement bill, server and storage medium
CN112528963A (en) Intelligent arithmetic question reading system based on MixNet-YOLOv3 and convolutional recurrent neural network CRNN
CN109934255B (en) Model fusion method suitable for classification and identification of delivered objects of beverage bottle recycling machine
US20230029045A1 (en) Automatic image classification and processing method based on continuous processing structure of multiple artificial intelligence model, and computer program stored in computer-readable recording medium to execute the same
CN111914720B (en) Method and device for identifying insulator burst of power transmission line
CN113963147B (en) Key information extraction method and system based on semantic segmentation
CN113780087B (en) Postal package text detection method and equipment based on deep learning
CN111522951A (en) Sensitive data identification and classification technical method based on image identification
CN113011144A (en) Form information acquisition method and device and server
US20230215125A1 (en) Data identification method and apparatus
CN113449698A (en) Automatic paper document input method, system, device and storage medium
CN112464925A (en) Mobile terminal account opening data bank information automatic extraction method based on machine learning
CN111881958A (en) License plate classification recognition method, device, equipment and storage medium
CN110796210A (en) Method and device for identifying label information
CN116740723A (en) PDF document identification method based on open source Paddle framework
CN114972880A (en) Label identification method and device, electronic equipment and storage medium
CN114463767A (en) Credit card identification method, device, computer equipment and storage medium
CN117437647A (en) Oracle character detection method based on deep learning and computer vision
CN112686238B (en) Deep learning-based shipping bill identification method
CN115546824B (en) Taboo picture identification method, apparatus and storage medium
CN112232288A (en) Satellite map target identification method and system based on deep learning
CN111950550A (en) Vehicle frame number identification system based on deep convolutional neural network
CN112950749B (en) Handwriting picture generation method based on generation countermeasure network
CN114637849B (en) Legal relation cognition method and system based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant