CN112101343A - License plate character segmentation and recognition method - Google Patents

License plate character segmentation and recognition method

Info

Publication number
CN112101343A
CN112101343A (application number CN202010828399.8A)
Authority
CN
China
Prior art keywords
layer
license plate
character
characters
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010828399.8A
Other languages
Chinese (zh)
Inventor
高天
张波
林志洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202010828399.8A priority Critical patent/CN112101343A/en
Publication of CN112101343A publication Critical patent/CN112101343A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
                    • G06F 18/20 Analysing
                        • G06F 18/24 Classification techniques
                            • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                                • G06F 18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                            • G06N 3/045 Combinations of networks
                            • G06N 3/047 Probabilistic or stochastic networks
                            • G06N 3/048 Activation functions
                        • G06N 3/08 Learning methods
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 20/00 Scenes; Scene-specific elements
                    • G06V 20/60 Type of objects
                        • G06V 20/62 Text, e.g. of license plates, overlay texts or captions on TV images
                            • G06V 20/625 License plates
                • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
                    • G06V 30/10 Character recognition
                        • G06V 30/14 Image acquisition
                            • G06V 30/148 Segmentation of character regions
                                • G06V 30/153 Segmentation of character regions using recognition of characters or words

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Character Input (AREA)
  • Character Discrimination (AREA)

Abstract

The invention discloses a license plate character segmentation and recognition method, which comprises the following steps: S1, preprocessing the license plate image; S2, removing the separator dot, removing the upper and lower frames, and locating the coordinate of the right edge of the second character; S3, segmenting the characters and normalizing them; S4, dividing the collected data set into a training set and a test set; S5, constructing a convolutional neural network suited to license plate character image recognition; S6, selecting training parameters and training the constructed network with the training set; and S7, testing the trained network with the test set to obtain the recognition accuracy of the license plate recognition network. The method avoids character segmentation failures caused by broken character strokes, alleviates the overfitting caused by limited training samples, and further improves the convergence speed and generalization ability of the model.

Description

License plate character segmentation and recognition method
Technical Field
The invention relates to the technical field of license plate recognition, in particular to a license plate character segmentation and recognition method.
Background
The license plate number is an important identifier of a vehicle, so license plate recognition technology is of great significance for traffic management, and the license plate recognition system is a key component of an intelligent transportation system.
Traditional character segmentation uses connected-component analysis or vertical projection. Both work well in general, but when a character such as '川' (Chuan) or '浙' (Zhe) appears broken in the license plate image, these methods may wrongly segment '川' into three '1'-like fragments.
The traditional template-matching approach to license plate recognition depends on the degree of match between the template characters and the characters to be recognized, places strict requirements on the sharpness of the original image, and its recognition performance is limited. Among machine learning methods, recognition based on a support vector machine is a classical algorithm; it is robust, but it relies heavily on the choice of character features.
At present, license plate recognition algorithms are continuously improved and optimized on the basis of convolutional neural networks. Prior approaches reduce training error by increasing model complexity, for example by increasing the number of feature maps, but a complex model trained on few samples is prone to overfitting.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art and provide a license plate character segmentation and recognition method that avoids character segmentation failures caused by broken character strokes, alleviates the overfitting caused by limited training samples, and further improves the convergence speed and generalization ability of the model.
To achieve this purpose, the technical solution provided by the invention is as follows:
A license plate character segmentation and recognition method comprises the following steps:
S1, preprocessing the license plate image;
S2, removing the separator dot, removing the upper and lower frames, and locating the coordinate of the right edge of the second character;
S3, segmenting the characters and normalizing them;
S4, dividing the collected data set into a training set and a test set;
S5, constructing a convolutional neural network suited to license plate character image recognition;
S6, selecting training parameters and training the constructed network with the training set;
and S7, testing the trained network with the test set to obtain the recognition accuracy of the license plate recognition network.
Further, step S1 specifically comprises:
S11, converting the color license plate image to grayscale;
and S12, binarizing the grayscale image using Otsu's method.
Further, step S2 specifically comprises:
S21, removing connected regions with an area smaller than 20 pixels using a morphological opening operation;
S22, counting the number of 1-to-0 or 0-to-1 transitions in each pixel row to obtain a transition-count statistic; searching upward from 1/3 of the total number of rows for the row coordinate x1 whose transition count is less than or equal to 13; searching downward from 2/3 of the total number of rows for the row coordinate x2 whose transition count is less than or equal to 13; removing the regions outside the interval [x1, x2]; and obtaining the character height x = x2 - x1, the single-character width a = x × (45/90), the character spacing b = x × (12/90), and the spacing between the second and third characters c = x × (34/90);
and S23, using the fact that the gap between the second and third characters is the largest, locating the coordinate z1 of the right edge of the second character by the vertical projection method.
Further, step S3 specifically comprises:
S31, calculating the offsets of the 9 segmentation points y1 to y9 relative to z1 from x, z1, a, b and c, and then computing the coordinate of each segmentation point from its offset;
S32, segmenting the 7 characters at the 9 segmentation points;
and S33, normalizing the segmented characters, resizing each character image to 30 × 24.
Further, the convolutional neural network in step S5 comprises a convolutional layer, a Batch Normalization layer, a ReLU activation layer, a max pooling layer, a Dropout layer, a fully connected layer and a Softmax layer. The receptive field of the convolutional layer is a local neighborhood of neurons in the previous layer, and the layer is used to extract image features; the Batch Normalization layer enhances the generalization ability of the network and improves the accuracy of the network model; the ReLU activation layer enables the network model to approximate arbitrary functions; the max pooling layer reduces the dimensionality of the data; the Dropout layer suppresses overfitting; the fully connected layer performs a linear transformation; and the Softmax layer outputs a multi-dimensional column vector used for classification.
Further, the specific structure of the convolutional neural network is: an input layer, a first convolutional layer, a first Batch Normalization layer, a first ReLU activation layer, a first max pooling layer, a second convolutional layer, a second Batch Normalization layer, a second ReLU activation layer, a second max pooling layer, a third convolutional layer, a third Batch Normalization layer, a third ReLU activation layer, a third max pooling layer, a first Dropout layer, a first fully connected layer, a second fully connected layer and a first Softmax layer, arranged in sequence.
Compared with the prior art, the principle and advantages of this scheme are as follows:
1) The characters are segmented according to the spatial positions of the 7 characters, which avoids character segmentation failures caused by broken character strokes.
2) A convolutional neural network is used, which avoids complex hand-crafted feature extraction and enhances robustness.
3) The convolutional neural network model is simplified; a Dropout layer and Batch Normalization layers are added, and the random deactivation probability of the Dropout layer is tuned over repeated experiments to its optimal value, which addresses overfitting and slow convergence and improves both accuracy and convergence speed.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a license plate character segmentation and recognition method according to the present invention;
FIG. 2 is a schematic diagram of the embodiment of the present invention with upper and lower frames removed;
FIG. 3 is a diagram illustrating character segmentation according to an embodiment of the present invention;
FIG. 4 is a block diagram of a convolutional neural network used in an embodiment of the present invention.
Detailed Description
The invention will be further illustrated with reference to a specific example:
As shown in FIG. 1, this embodiment performs character segmentation and recognition on the license plate 云DG7327 (云, "Yun", is the provincial abbreviation for Yunnan) through the following steps:
S1, preprocessing the license plate image, which specifically comprises:
S11, converting the color license plate image to grayscale;
and S12, binarizing the grayscale image using Otsu's method.
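As an illustration only, steps S11 and S12 can be sketched with OpenCV as follows; the function name and the file handling are assumptions, not part of the patent.

```python
import cv2

def preprocess_plate(image_path):
    """S1: grayscale conversion (S11) followed by Otsu binarization (S12)."""
    color = cv2.imread(image_path)                  # color license plate image (BGR)
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)  # S11: grayscale operation
    # S12: Otsu's method selects the binarization threshold automatically
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```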
S2, removing the separator dot, removing the upper and lower frames, and locating the coordinate of the right edge of the second character; specifically:
S21, removing connected regions with an area smaller than 20 pixels using a morphological opening operation;
S22, as shown in FIG. 2, counting the number of 1-to-0 or 0-to-1 transitions in each pixel row to obtain a transition-count statistic; searching upward from 1/3 of the total number of rows for the row coordinate x1 whose transition count is less than or equal to 13; searching downward from 2/3 of the total number of rows for the row coordinate x2 whose transition count is less than or equal to 13; removing the regions outside the interval [x1, x2]; and obtaining the character height x = x2 - x1, the single-character width a = x × (45/90), the character spacing b = x × (12/90), and the spacing between the second and third characters c = x × (34/90);
and S23, using the fact that the gap between the second and third characters is the largest, locating the coordinate z1 of the right edge of the second character by the vertical projection method.
S3, segmenting the characters and normalizing them; specifically:
referring to FIG. 3, the offsets of the 9 segmentation points (y1 to y9) relative to z1 are calculated from x, z1, a, b and c, and the coordinate of each segmentation point is then obtained from its offset;
the 7 characters are then segmented at the 9 segmentation points;
and the segmented characters are normalized, resizing each character image to 30 × 24.
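The sketch below derives the character windows from z1 and the spacings a, b, c and normalizes each crop. The patent does not spell out the exact offsets of the nine segmentation points y1 to y9, so the left-edge formulas here are only one plausible reading of the standard plate layout and should be treated as assumptions.

```python
import cv2
import numpy as np

def segment_characters(plate, z1, a, b, c, out_h=30, out_w=24):
    """S3: cut out the 7 characters and normalize each to 30 x 24
    (the height-by-width order is assumed)."""
    a, b, c = int(round(a)), int(round(b)), int(round(c))
    # Left edges of the 7 characters, derived from z1 (right edge of character 2)
    lefts = [z1 - 2 * a - b, z1 - a]                    # characters 1 and 2
    lefts += [z1 + c + i * (a + b) for i in range(5)]   # characters 3 to 7
    chars = []
    for left in lefts:
        left = max(left, 0)
        crop = (plate[:, left:left + a] * 255).astype(np.uint8)
        chars.append(cv2.resize(crop, (out_w, out_h)))  # cv2.resize takes (width, height)
    return chars
```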
S4, dividing the collected data set into a training set and a test set. Specifically, the data set contains 9363 license plate character images in total: 8430 images form the training set and 933 images form the test set, i.e., roughly a 90% / 10% split.
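A minimal way to produce the roughly 90/10 split of step S4 is sketched below; the random seed and the exact rounding are assumptions, and the embodiment's 8430/933 counts differ from a strict 90% cut by a few samples.

```python
import numpy as np

def split_dataset(images, labels, train_ratio=0.9, seed=0):
    """S4: shuffle the character images and split them about 90% / 10%."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(images))
    cut = int(len(images) * train_ratio)
    train_idx, test_idx = order[:cut], order[cut:]
    return (images[train_idx], labels[train_idx]), (images[test_idx], labels[test_idx])
```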
S5, constructing a convolutional neural network suitable for license plate character image recognition.
Specifically, the convolutional neural network comprises 3 convolutional layers, 3 Batch Normalization layers, 3 ReLU activation layers, 3 max pooling layers, 1 Dropout layer, 2 fully connected layers and 1 Softmax layer, arranged in the following order: input layer, first convolutional layer, first Batch Normalization layer, first ReLU activation layer, first max pooling layer, second convolutional layer, second Batch Normalization layer, second ReLU activation layer, second max pooling layer, third convolutional layer, third Batch Normalization layer, third ReLU activation layer, third max pooling layer, first Dropout layer, first fully connected layer, second fully connected layer and first Softmax layer.
As shown in FIG. 4, the input layer takes a license plate character grayscale image of size 30 × 24; the first convolutional layer has 3 × 3 kernels and 6 feature maps; the first max pooling layer has a 2 × 2 sampling window; the second convolutional layer has 3 × 3 kernels and 16 feature maps; the second max pooling layer has a 2 × 2 sampling window; the third convolutional layer has 3 × 3 kernels and 128 feature maps; the third max pooling layer has a 2 × 2 sampling window; the random deactivation (dropout) probability of the first Dropout layer is 0.5; the first fully connected layer has 128 neurons; and the second fully connected layer has 65 neurons.
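A PyTorch sketch of this network is given below. Padding, stride, and the flattened feature size are not stated in the patent; "same" padding (padding=1) is assumed here, which makes the feature maps shrink 30×24 -> 15×12 -> 7×6 -> 3×3 through the three pooling layers.

```python
import torch
import torch.nn as nn

class PlateCharCNN(nn.Module):
    """Layer stack of FIG. 4; padding=1 is an assumption, not stated in the patent."""

    def __init__(self, num_classes=65):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=3, padding=1),     # 1st conv, 6 feature maps
            nn.BatchNorm2d(6), nn.ReLU(),
            nn.MaxPool2d(2),                               # 30x24 -> 15x12
            nn.Conv2d(6, 16, kernel_size=3, padding=1),    # 2nd conv, 16 feature maps
            nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),                               # 15x12 -> 7x6
            nn.Conv2d(16, 128, kernel_size=3, padding=1),  # 3rd conv, 128 feature maps
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.MaxPool2d(2),                               # 7x6 -> 3x3
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),                             # random deactivation probability 0.5
            nn.Flatten(),
            nn.Linear(128 * 3 * 3, 128),                   # 1st fully connected layer
            nn.Linear(128, num_classes),                   # 2nd fully connected layer (65 classes)
        )

    def forward(self, x):                                  # x: (N, 1, 30, 24)
        # Returns logits; the Softmax layer is applied by the loss during training
        # (nn.CrossEntropyLoss) or by torch.softmax at inference time.
        return self.classifier(self.features(x))
```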
S6, selecting training parameters and training the constructed network with the training set. The training parameters are the model optimization algorithm, the learning rate and the number of iterations: the optimization algorithm is stochastic gradient descent with momentum, the learning rate is 0.01, and the number of iterations is 40.
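Step S6 could then look roughly as follows; the momentum value (0.9) and the data loader are assumptions, while the optimizer type, the learning rate of 0.01 and the 40 iterations come from the text.

```python
import torch
import torch.nn as nn

def train(model, train_loader, epochs=40, lr=0.01, momentum=0.9):
    """S6: stochastic gradient descent with momentum, learning rate 0.01, 40 iterations."""
    criterion = nn.CrossEntropyLoss()  # fuses the final Softmax with the loss
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```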
S7, testing the trained network with the test set to obtain the recognition accuracy of the license plate recognition network.
Specifically, this embodiment uses 9363 samples; step S4 yields 8430 training samples and 933 test samples. The 8430 training samples are fed into the convolutional neural network constructed in step S5 for training, and the trained network then recognizes the 933 test samples to measure its performance.
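And a matching sketch of step S7, reporting the fraction of correctly recognized test characters:

```python
import torch

@torch.no_grad()
def evaluate(model, test_loader):
    """S7: recognition accuracy of the trained network on the test set."""
    model.eval()
    correct = total = 0
    for images, labels in test_loader:
        preds = torch.softmax(model(images), dim=1).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```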
Table 1 shows the test results of the invention:

                    Digits and letters    Chinese characters    Overall
Misrecognized       0                     6                     6
Total samples       430                   503                   933
Accuracy            100%                  98.81%                99.36%

Table 1
The results in Table 1 show that this embodiment is highly robust for license plate character recognition; adding the Dropout layer suppresses overfitting and greatly improves recognition accuracy, with a final recognition rate of 99.36%, so the scheme provided by this embodiment has strong practical value.
The above-mentioned embodiments are merely preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, so that variations based on the shape and principle of the present invention should be covered within the scope of the present invention.

Claims (6)

1. A license plate character segmentation and recognition method, characterized by comprising the following steps:
S1, preprocessing the license plate image;
S2, removing the separator dot, removing the upper and lower frames, and locating the coordinate of the right edge of the second character;
S3, segmenting the characters and normalizing them;
S4, dividing the collected data set into a training set and a test set;
S5, constructing a convolutional neural network suited to license plate character image recognition;
S6, selecting training parameters and training the constructed network with the training set;
and S7, testing the trained network with the test set to obtain the recognition accuracy of the license plate recognition network.
2. The license plate character segmentation and recognition method according to claim 1, wherein step S1 specifically comprises:
S11, converting the color license plate image to grayscale;
and S12, binarizing the grayscale image using Otsu's method.
3. The license plate character segmentation and recognition method according to claim 1, wherein step S2 specifically comprises:
S21, removing connected regions with an area smaller than 20 pixels using a morphological opening operation;
S22, counting the number of 1-to-0 or 0-to-1 transitions in each pixel row to obtain a transition-count statistic; searching upward from 1/3 of the total number of rows for the row coordinate x1 whose transition count is less than or equal to 13; searching downward from 2/3 of the total number of rows for the row coordinate x2 whose transition count is less than or equal to 13; removing the regions outside the interval [x1, x2]; and obtaining the character height x = x2 - x1, the single-character width a = x × (45/90), the character spacing b = x × (12/90), and the spacing between the second and third characters c = x × (34/90);
and S23, using the fact that the gap between the second and third characters is the largest, locating the coordinate z1 of the right edge of the second character by the vertical projection method.
4. The license plate character segmentation and recognition method according to claim 3, wherein step S3 specifically comprises:
S31, calculating the offsets of the 9 segmentation points y1 to y9 relative to z1 from x, z1, a, b and c, and then computing the coordinate of each segmentation point from its offset;
S32, segmenting the 7 characters at the 9 segmentation points;
and S33, normalizing the segmented characters, resizing each character image to 30 × 24.
5. The license plate character segmentation and recognition method according to claim 1, wherein the convolutional neural network in step S5 comprises a convolutional layer, a Batch Normalization layer, a ReLU activation layer, a max pooling layer, a Dropout layer, a fully connected layer and a Softmax layer; the receptive field of the convolutional layer is a local neighborhood of neurons in the previous layer, used to extract image features; the Batch Normalization layer is used to enhance the generalization ability of the network and improve the accuracy of the network model; the ReLU activation layer enables the network model to approximate arbitrary functions; the max pooling layer is used to reduce the dimensionality of the data; the Dropout layer is used to suppress overfitting; the fully connected layer is used for linear transformation; and the Softmax layer outputs a multi-dimensional column vector used for classification.
6. The license plate character segmentation and recognition method according to claim 5, wherein the specific structure of the convolutional neural network is: an input layer, a first convolutional layer, a first Batch Normalization layer, a first ReLU activation layer, a first max pooling layer, a second convolutional layer, a second Batch Normalization layer, a second ReLU activation layer, a second max pooling layer, a third convolutional layer, a third Batch Normalization layer, a third ReLU activation layer, a third max pooling layer, a first Dropout layer, a first fully connected layer, a second fully connected layer and a first Softmax layer, arranged in sequence.
CN202010828399.8A 2020-08-17 2020-08-17 License plate character segmentation and recognition method Pending CN112101343A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010828399.8A CN112101343A (en) 2020-08-17 2020-08-17 License plate character segmentation and recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010828399.8A CN112101343A (en) 2020-08-17 2020-08-17 License plate character segmentation and recognition method

Publications (1)

Publication Number Publication Date
CN112101343A true CN112101343A (en) 2020-12-18

Family

ID=73754494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010828399.8A Pending CN112101343A (en) 2020-08-17 2020-08-17 License plate character segmentation and recognition method

Country Status (1)

Country Link
CN (1) CN112101343A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156704A (en) * 2014-08-04 2014-11-19 胡艳艳 Novel license plate identification method and system
EP3550473A1 (en) * 2016-11-30 2019-10-09 Hangzhou Hikvision Digital Technology Co., Ltd. Character identification method and device
CN108319988A (en) * 2017-01-18 2018-07-24 华南理工大学 A kind of accelerated method of deep neural network for handwritten Kanji recognition
CN106951896A (en) * 2017-02-22 2017-07-14 武汉黄丫智能科技发展有限公司 A kind of license plate image sloped correcting method
CN107273896A (en) * 2017-06-15 2017-10-20 浙江南自智能科技股份有限公司 A kind of car plate detection recognition methods based on image recognition
CN110276881A (en) * 2019-05-10 2019-09-24 广东工业大学 A kind of banknote serial number recognition methods based on convolution loop neural network
CN110543883A (en) * 2019-08-27 2019-12-06 河海大学 license plate recognition method based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘峥强: "Application of deep learning algorithms in license plate recognition systems", China Master's and Doctoral Theses Full-text Database (Master's), Information Science and Technology Series *
李朝兵: "Research on key technologies of license plate recognition based on deep learning", China Master's and Doctoral Theses Full-text Database (Master's), Information Science and Technology Series *
段萌: "Research on image recognition methods based on convolutional neural networks", China Master's and Doctoral Theses Full-text Database (Master's), Information Science and Technology Series *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113343838A (en) * 2021-06-03 2021-09-03 安徽大学 Intelligent garbage identification method and device based on CNN neural network
CN115542362A (en) * 2022-12-01 2022-12-30 成都信息工程大学 High-precision space positioning method, system, equipment and medium for electric power operation site

Similar Documents

Publication Publication Date Title
CN108710866B (en) Chinese character model training method, chinese character recognition method, device, equipment and medium
CN111666938B (en) Two-place double-license-plate detection and identification method and system based on deep learning
CN110363182B (en) Deep learning-based lane line detection method
Dehghan et al. Handwritten Farsi (Arabic) word recognition: a holistic approach using discrete HMM
CN101290659B (en) Hand-written recognition method based on assembled classifier
CN104598885B (en) The detection of word label and localization method in street view image
Tsai Recognizing handwritten Japanese characters using deep convolutional neural networks
CN108898138A (en) Scene text recognition methods based on deep learning
CN108664975B (en) Uyghur handwritten letter recognition method and system and electronic equipment
US5673337A (en) Character recognition
CN102496013A (en) Chinese character segmentation method for off-line handwritten Chinese character recognition
CN112101343A (en) License plate character segmentation and recognition method
Lacerda et al. Segmentation of connected handwritten digits using Self-Organizing Maps
CN103164701B (en) Handwritten Numeral Recognition Method and device
CN105760891A (en) Chinese character verification code recognition method
CN112307919B (en) Improved YOLOv 3-based digital information area identification method in document image
CN111738367B (en) Part classification method based on image recognition
CN111523622B (en) Method for simulating handwriting by mechanical arm based on characteristic image self-learning
CN110543883A (en) license plate recognition method based on deep learning
CN113159045A (en) Verification code identification method combining image preprocessing and convolutional neural network
CN111274915A (en) Depth local aggregation descriptor extraction method and system for finger vein image
CN112101237A (en) Histogram data extraction and conversion method
Li et al. A novel method of text line segmentation for historical document image of the uchen Tibetan
CN113361666A (en) Handwritten character recognition method, system and medium
CN111507356A (en) Segmentation method of handwritten characters of lower case money of financial bills

Legal Events

Date    Code    Title                                                           Description
        PB01    Publication
        SE01    Entry into force of request for substantive examination
        RJ01    Rejection of invention patent application after publication    Application publication date: 20201218