CN107220655A - A deep-learning-based method for classifying handwritten and printed text - Google Patents
A deep-learning-based method for classifying handwritten and printed text
- Publication number
- CN107220655A CN107220655A CN201610168622.4A CN201610168622A CN107220655A CN 107220655 A CN107220655 A CN 107220655A CN 201610168622 A CN201610168622 A CN 201610168622A CN 107220655 A CN107220655 A CN 107220655A
- Authority
- CN
- China
- Prior art keywords
- picture
- printed text
- handwritten
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
Abstract
The invention describes a deep-learning-based method for classifying handwritten and printed text. The method specifically comprises the following steps: (1) data acquisition: collect handwritten and printed text images to form a training set; (2) binarize and height-normalize the training set images; (3) sample expansion: crop the training set images and add noise; (4) construct a deep convolutional neural network and train it with the training set images; (5) crop the text image to be classified, feed it into the deep convolutional neural network constructed in step (4), average the resulting probability distributions, and output the classification result. By means of a deep learning algorithm, the invention automatically learns from samples the features that distinguish handwritten from printed text, enabling a computer to discriminate intelligently between handwritten-text and printed-text images.
Description
Technical field
The invention belongs to the field of pattern recognition and artificial intelligence, and more particularly relates to a method for classifying handwritten and printed text.
Background art
With the rapid development of computer technology, document analysis techniques are increasingly and widely applied in daily life, for example to the storage and retrieval of paper documents. Digitized documents have evolved from the initial plain-text documents to mixtures of text and images, mixtures of print and handwriting, mixtures of languages, and so on.
In practice, large numbers of documents mix handwritten and printed text. Both kinds of text play their own roles in a document, and the detection, discrimination and processing of these different text types is of real significance. In particular, the handwritten portions of a document often carry extra important information, so separating out the handwritten text helps subsequent, more targeted data processing and algorithm research.
Convolutional neural networks (CNNs) are a class of artificial neural network and have become a research hotspot in speech analysis and image recognition. Their weight-sharing structure makes them more similar to biological neural networks and reduces both the complexity of the network model and the number of weights. This advantage is most evident when the network input is a multi-dimensional image: the image can be fed directly into the network, avoiding the complicated feature-extraction and data-reconstruction steps of traditional recognition algorithms. A CNN is a multilayer perceptron specially designed for recognizing two-dimensional shapes, and its structure is invariant to translation, scaling, skew and other deformations.
Over the last decade, research on artificial neural networks, and on convolutional neural networks in particular, has continually deepened and made great progress. CNNs have successfully solved many practical problems in speech analysis, image recognition and related fields that were intractable for conventional computing, exhibiting good intelligent characteristics.
Summary of the invention
To overcome the technical problems in the prior art, the present invention provides a deep-learning-based method for classifying handwritten and printed text. The method effectively learns features that distinguish handwritten from printed text, and therefore achieves better classification performance, with high efficiency and a high recognition rate.
The invention adopts the following technical scheme: a deep-learning-based method for classifying handwritten and printed text, comprising the following steps: (1) data acquisition: collect handwritten and printed text images to form a training set; (2) binarize and height-normalize the training set images; (3) sample expansion: crop the training set images and add noise; (4) construct a deep convolutional neural network and train it with the training set images; (5) crop the text image to be classified, feed it into the deep convolutional neural network constructed in step (4), average the resulting probability distributions, and output the classification result.
Preferably, step (2) comprises the following steps: (2-1) convert the training set images to grayscale; (2-2) normalize the grayscale images to a height of H pixels; (2-3) binarize the height-normalized pictures.
Preferably, the binarization method is global mean binarization: using the mean pixel value of the picture as the threshold, the height-normalized picture is binarized, pixels above the threshold being assigned 255 and pixels below it 0.
Preferably, step (3) comprises the following steps: (3-1) with stride S, cut the binarized picture into pictures of width W; if a picture is narrower than W, enlarge its width to W; (3-2) after step (3-1), one binarized picture yields N pictures of size W × H; each cropped picture is noised to obtain M noised pictures, N × M noised pictures in total, expanding the sample space; H is the pixel height after grayscale height normalization.
Preferably, step (4) comprises the following steps:
(4-1) construct the deep convolutional neural network:
Input(96x32)->50C(7x3)S1->ReLU->MP2->80C(6x6)S1->ReLU->
MP2->500N->ReLU->Dropout(0.5)->2N->Softmax/Output(2x1)
wherein Input(96x32) means the input layer accepts pictures of 96x32 pixels; 50C(7x3)S1 is a convolutional layer that extracts features from the input picture, with kernel size 7x3, stride 1 and 50 output feature maps; ReLU is a rectified-linear activation layer applied to the convolution output; MP2 is a max-pooling layer that takes the maximum of the rectified features, with kernel size 2x2 and stride 2; 500N is a fully connected layer that learns weighted combinations of the previous layer's features and outputs a 500-dimensional feature vector; Dropout(0.5) is a random-suppression layer that prevents the network from over-fitting the training samples and losing classification ability, with a drop rate of 50%; Softmax/Output(2x1) means the output layer is a Softmax layer that outputs the probability distribution of the input picture being classified as handwritten or printed text;
(4-2) train the deep convolutional neural network with the training set images:
(4-2-1) let the batch size be BS pictures; each picture produced by the cropping of steps (3-1) and (3-2), together with the M noised pictures produced from it by noising, M+1 pictures in total, is treated as one preprocessing-sample group img_{M+1}; each time the deep convolutional neural network of step (4-1) is trained, one picture is randomly drawn from each of BS preprocessing-sample groups to form a training batch img_BS for batch training;
(4-2-2) train the deep convolutional neural network of step (4-1) by stochastic gradient descent, with initial learning rate lr0 (the rate at which the neural network iterates in search of an optimum in the training-sample space), weight-penalty coefficient λ (a parameter preventing the neural network from over-fitting the training set samples), and maximum number of training iterations iters_max (the number of learning iterations needed for the classification accuracy of the neural network to reach the required threshold); the learning rate is updated as follows:
lr_iter = lr0 × 1 / (1 + e^(−γ × (iter − stepsize)))
where lr0 takes the value 0.01, 0.003 or 0.005; λ takes the value 0.01, 0.005 or 0.001; iters_max ranges from 10000 to 15000; iter is the current iteration count; lr_iter is the current learning rate; γ ranges from 0.0003 to 0.0001; and stepsize ranges from 2000 to 3000.
Preferably, step (5) comprises the following steps:
(5-1) crop any picture img_test to be classified with a sliding window, intercepting N_test pictures img_split of size W × H, the sliding-window size being W × H;
(5-2) feed the N_test pictures into the deep convolutional neural network constructed in step (4), obtaining N_test probability distributions of being classified as handwritten or printed text; average the N_test probability distributions and output the class with the largest mean probability as the final judgment.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) Because text features are learned with a deep network structure, effective text-feature representations are learned well from the data, which improves the accuracy of the classification method.
(2) Compared with traditional geometric text features, richer appearance features can be extracted, giving a better text-feature description and therefore better recognition than traditional geometric text features.
(3) The classification method has a high recognition rate, strong robustness, high efficiency and fast speed; it effectively learns features that distinguish handwritten from printed text and thereby achieves better classification performance.
Brief description of the drawings
Fig. 1 is a flow chart of the classification method of the invention;
Fig. 2 is a flow chart of the preprocessing of the invention;
Fig. 3 is an example of the preprocessing process of the invention;
Fig. 4 is a structural diagram of the deep convolutional neural network of the invention;
Fig. 5 is a flow chart of the classification and recognition of the invention;
Fig. 6 is an example of the classification and recognition process of the invention.
Detailed description
The invention is described further below with reference to an embodiment and the accompanying drawings, but embodiments of the invention are not limited thereto.
Embodiment
The handwritten/printed text classification method of the invention, whose flow chart is shown in Fig. 1, comprises the following steps:
(1) data acquisition: collect handwritten and printed text images to form a text-image training set;
Data may be obtained by photographing documents, by generating text images from fonts (for example, English printed-text pictures generated with the Times New Roman font), and so on; in the resulting text-image training set, printed-text pictures and handwritten-text pictures each account for half.
(2) data preprocessing: image binarization and image-height normalization.
Step (2) comprises the following steps:
(2-1) convert the printed-text and handwritten-text pictures in the training set to grayscale;
(2-2) normalize the grayscale picture height to 32 pixels;
(2-3) binarize the height-normalized pictures, preferably by global mean binarization: using the mean pixel value of the picture as the threshold, pixels above the threshold are assigned 255 (white) and pixels below it 0 (black).
(3) sample expansion: crop the training set images and add noise.
Steps (2) and (3) form the preprocessing flow of the invention, shown in Fig. 2. Step (3) specifically comprises the following steps:
(3-1) with a stride of 24 pixels, cut the binarized picture into pictures 96 pixels wide; if a picture is narrower than 96 pixels, enlarge its width to 96 pixels;
(3-2) after step (3-1), one binarized picture yields 3 pictures of size 96x32; each cropped picture is then noised (rotation, line interference, noise perturbation, Gaussian blur, etc.) to obtain 3 noised pictures, 3x3 noised pictures in total, as shown in Fig. 3.
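A minimal sketch of the sample-expansion steps (3-1) and (3-2) might look as follows. Padding narrow images with white (rather than rescaling them) and the salt-and-pepper noise are simplifications of my own; the patent lists rotation, line interference, noise perturbation and Gaussian blur as noising options, and all names here are illustrative.

```python
import numpy as np

def expand_samples(binary, win_w=96, step=24, n_noisy=3, seed=0):
    """Cut a binarized line image into win_w-wide crops at stride
    `step` (step (3-1)), then make n_noisy noised copies of each crop
    (step (3-2)); here salt-and-pepper noise stands in for the
    patent's rotation / line-interference / blur options."""
    h, w = binary.shape
    if w < win_w:  # widen narrow pictures up to win_w (white padding)
        binary = np.pad(binary, ((0, 0), (0, win_w - w)),
                        constant_values=255)
        w = win_w
    crops = [binary[:, x:x + win_w]
             for x in range(0, w - win_w + 1, step)]
    rng = np.random.default_rng(seed)
    noisy = []
    for c in crops:
        for _ in range(n_noisy):
            mask = rng.random(c.shape) < 0.02   # flip ~2% of pixels
            noisy.append(np.where(mask, 255 - c, c).astype(np.uint8))
    return crops, noisy
```

With a 144-pixel-wide input and stride 24 this yields the 3 crops and 3x3 = 9 noised pictures of the example above.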
(4) network training: construct the deep convolutional neural network and train it.
Step (4) comprises the following steps:
(4-1) construct the following deep convolutional neural network (shown in Fig. 4):
Input(96x32)->50C(7x3)S1->ReLU->MP2->80C(6x6)S1->ReLU->
MP2->500N->ReLU->Dropout(0.5)->2N->Softmax/Output(2x1)
wherein Input(96x32) means the input layer accepts pictures of 96x32 pixels; 50C(7x3)S1 is a convolutional layer that extracts features from the input picture, with kernel size 7x3, stride 1 and 50 output feature maps; ReLU is a rectified-linear activation layer applied to the convolution output; MP2 is a max-pooling layer that takes the maximum of the rectified features, with kernel size 2x2 and stride 2; 500N is a fully connected layer that learns weighted combinations of the previous layer's features and outputs a 500-dimensional feature vector; Dropout(0.5) is a random-suppression layer that prevents the network from over-fitting the training samples and losing classification ability, with a drop rate of 50%; Softmax/Output(2x1) means the output layer is a Softmax layer that outputs the probability distribution of the input picture being classified as handwritten or printed text.
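The feature-map sizes implied by this architecture can be checked with a little arithmetic, assuming unpadded ("valid") convolutions and reading a 7x3 kernel as 7 pixels wide by 3 high (the patent states neither): 96x32 becomes 90x30 after the first convolution, 45x15 after pooling, 40x10 after the second convolution, and 20x5 after the second pooling, so 80 × 20 × 5 = 8000 features enter the 500-unit fully connected layer.

```python
def layer_shapes(in_w=96, in_h=32):
    """Trace feature-map sizes through the patent's network under the
    stated assumptions; returns the final map size and the flattened
    feature count fed to the 500-unit layer."""
    def conv(w, h, kw, kh):      # 'valid' convolution, stride 1
        return w - kw + 1, h - kh + 1
    def pool(w, h):              # 2x2 max pooling, stride 2
        return w // 2, h // 2
    w, h = conv(in_w, in_h, 7, 3)   # 50C(7x3)S1 -> 90x30
    w, h = pool(w, h)               # MP2        -> 45x15
    w, h = conv(w, h, 6, 6)         # 80C(6x6)S1 -> 40x10
    w, h = pool(w, h)               # MP2        -> 20x5
    return w, h, 80 * w * h         # flatten -> 8000 -> 500N -> 2N
```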
(4-2) the deep convolutional neural network is trained as follows:
(4-2-1) the batch size is set to 100 pictures; each picture produced by the cropping of steps (3-1) and (3-2), together with the M noised pictures produced from it by noising, M+1 pictures in total, is treated as one preprocessing-sample group img_{M+1}; each time the neural network designed in step (4-1) is trained, one picture is randomly drawn from each of 100 preprocessing-sample groups to form a training batch img_BS for batch training;
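Step (4-2-1) amounts to two levels of random sampling: pick batch-size groups, then pick one variant (original crop or one of its M noised copies) from each. A sketch, with illustrative names not taken from the patent:

```python
import random

def sample_batch(groups, bs=100, seed=None):
    """Each group holds the M+1 variants (original crop plus M noised
    copies) of one training picture; a batch takes one random variant
    from each of bs randomly chosen groups, per step (4-2-1)."""
    rng = random.Random(seed)
    chosen_groups = rng.sample(groups, bs)
    return [rng.choice(group) for group in chosen_groups]
```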
(4-2-2) train the deep convolutional neural network of step (4-1) by stochastic gradient descent, with initial learning rate lr0 (the rate at which the neural network iterates in search of an optimum in the training-sample space), weight-penalty coefficient λ (a parameter preventing the neural network from over-fitting the training set samples), and maximum number of training iterations iters_max (the number of learning iterations needed for the classification accuracy of the neural network to reach the required threshold); the learning rate is updated as
lr_iter = lr0 × 1 / (1 + e^(−γ × (iter − stepsize)))
where lr0 = 0.01, λ = 0.005, iters_max = 10000, γ = 0.0001 and stepsize = 2500; iter is the current iteration count and lr_iter the current learning rate.
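The update rule is a sigmoid learning-rate schedule that equals exactly lr0/2 at iter = stepsize; note that with the stated positive γ it rises smoothly toward lr0, while taking γ negative would give a smooth decay. A direct transcription:

```python
import math

def learning_rate(it, lr0=0.01, gamma=0.0001, stepsize=2500):
    """Sigmoid learning-rate schedule from the patent:
    lr_iter = lr0 * 1 / (1 + exp(-gamma * (iter - stepsize)))."""
    return lr0 / (1.0 + math.exp(-gamma * (it - stepsize)))
```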
(5) crop the text image to be classified, feed it into the deep convolutional neural network designed in step (4), average the resulting probability distributions and output the classification result.
Step (5) comprises the following steps (shown in Figs. 5 and 6):
(5-1) crop the image to be classified with a sliding window, intercepting 4 pictures of size 96x32 (window size 96x32, stride 24);
(5-2) feed the 4 pictures into the deep convolutional neural network designed in step (4-1), obtaining 4 probability distributions of being classified as handwritten or printed text; average the 4 probability distributions and output the class with the largest mean probability as the final judgment.
In the example shown in Fig. 6, the text image to be classified is a handwritten-text picture. After sliding-window cropping, 4 pictures are obtained and fed into the deep convolutional neural network designed by the invention; from the network outputs for the 4 pictures, the printed-text and handwritten-text probabilities are computed and the means of the probability distributions taken; the mean handwritten-text probability is the largest, so the output classification result is a handwritten-text picture.
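The decision rule of step (5-2) is just an average of the per-crop softmax outputs followed by an argmax. A sketch, in which the (printed, handwritten) column order is an assumption of mine:

```python
import numpy as np

def classify_line(probs):
    """Average the per-crop softmax outputs and pick the class with
    the largest mean, per step (5-2). `probs` is an N x 2 array of
    (printed, handwritten) probabilities for the N sliding-window
    crops of one line image."""
    mean = np.asarray(probs, dtype=np.float64).mean(axis=0)
    label = ("printed", "handwritten")[int(mean.argmax())]
    return label, mean
```

For the Fig. 6 example, four crop outputs leaning toward the handwritten class average to a handwritten verdict even if one crop disagrees.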
Embodiments of the invention are not limited to those described above; any other changes, modifications, substitutions, combinations or simplifications made without departing from the spirit and principle of the invention are equivalent substitutions and are included within the scope of protection of the invention.
Claims (10)
1. A deep-learning-based method for classifying handwritten and printed text, characterized by comprising the following steps:
(1) data acquisition: collect handwritten and printed text images to form a training set;
(2) binarize and normalize the training set images;
(3) sample expansion: crop the training set images and add noise;
(4) construct a deep convolutional neural network and train the constructed deep convolutional neural network with the training set images;
(5) crop the text image to be classified, feed it into the deep convolutional neural network constructed in step (4), average the resulting probability distributions, and output the classification result.
2. The handwritten/printed text classification method of claim 1, characterized in that step (1) obtains data by photographing documents or by generating text images from fonts, and that printed-text pictures and handwritten-text pictures each account for half of the resulting training set.
3. The handwritten/printed text classification method of claim 1, characterized in that step (2) comprises the following steps:
(2-1) convert the training set images to grayscale;
(2-2) normalize the grayscale images to a height of H pixels;
(2-3) binarize the height-normalized pictures.
4. The handwritten/printed text classification method of claim 3, characterized in that the binarization method is global mean binarization: using the mean pixel value of the picture as the threshold, the height-normalized picture is binarized, pixels above the threshold being assigned 255 and pixels below it 0.
5. The handwritten/printed text classification method of claim 1, characterized in that step (3) comprises the following steps:
(3-1) with stride S, cut the binarized picture into pictures of width W; if a picture is narrower than W, enlarge its width to W;
(3-2) after step (3-1), one binarized picture yields N pictures of size W × H; each cropped picture is noised to obtain M noised pictures, N × M noised pictures in total, expanding the sample space; H is the pixel height after grayscale height normalization.
6. The handwritten/printed text classification method of claim 5, characterized in that the noising includes: line interference, noise interference, Gaussian blur and rotation.
7. The handwritten/printed text classification method of claim 5, characterized in that H ranges from 28 to 34 pixels, S from 23 to 25 pixels, and W from 92 to 100 pixels.
8. The handwritten/printed text classification method of claim 5, characterized in that step (4) comprises the following steps:
(4-1) construct the deep convolutional neural network:
Input(96x32)->50C(7x3)S1->ReLU->MP2->80C(6x6)S1->ReLU->
MP2->500N->ReLU->Dropout(0.5)->2N->Softmax/Output(2x1)
wherein Input(96x32) means the input layer accepts pictures of 96x32 pixels; 50C(7x3)S1 is a convolutional layer that extracts features from the input picture, with kernel size 7x3, stride 1 and 50 output feature maps; ReLU is a rectified-linear activation layer applied to the convolution output; MP2 is a max-pooling layer that takes the maximum of the rectified features, with kernel size 2x2 and stride 2; 500N is a fully connected layer that learns weighted combinations of the previous layer's features and outputs a 500-dimensional feature vector; Dropout(0.5) is a random-suppression layer that prevents the network from over-fitting the training samples and losing classification ability, with a drop rate of 50%; Softmax/Output(2x1) means the output layer is a Softmax layer that outputs the probability distribution of the input picture being classified as handwritten or printed text;
(4-2) train the deep convolutional neural network with the training set images:
(4-2-1) let the batch size be BS pictures; each picture produced by the cropping of steps (3-1) and (3-2), together with the M noised pictures produced from it by noising, M+1 pictures in total, is treated as one preprocessing-sample group img_{M+1}; each time the deep convolutional neural network of step (4-1) is trained, one picture is randomly drawn from each of BS preprocessing-sample groups to form a training batch img_BS for batch training;
(4-2-2) train the deep convolutional neural network of step (4-1) by stochastic gradient descent, with initial learning rate lr0 (the rate at which the neural network iterates in search of an optimum in the training-sample space), weight-penalty coefficient λ (a parameter preventing the neural network from over-fitting the training set samples), and maximum number of training iterations iters_max (the number of learning iterations needed for the classification accuracy of the neural network to reach the required threshold); the learning rate is updated as follows:
lr_iter = lr0 × 1 / (1 + e^(−γ × (iter − stepsize)))
where lr0 takes the value 0.01, 0.003 or 0.005; λ takes the value 0.01, 0.005 or 0.001; iters_max ranges from 10000 to 15000; iter is the current iteration count; lr_iter is the current learning rate; γ ranges from 0.0003 to 0.0001; and stepsize ranges from 2000 to 3000.
9. The handwritten/printed text classification method of claim 5, characterized in that step (5) comprises the following steps:
(5-1) crop any picture img_test to be classified with a sliding window, intercepting N_test pictures img_split of size W × H, the sliding-window size being W × H;
(5-2) feed the N_test pictures into the deep convolutional neural network constructed in step (4), obtaining N_test probability distributions of being classified as handwritten or printed text; average the N_test probability distributions and output the class with the largest mean probability as the final judgment.
10. The handwritten/printed text classification method of claim 9, characterized in that the sliding-window size is 96x32.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610168622.4A CN107220655A (en) | 2016-03-22 | 2016-03-22 | A kind of hand-written, printed text sorting technique based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610168622.4A CN107220655A (en) | 2016-03-22 | 2016-03-22 | A kind of hand-written, printed text sorting technique based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107220655A true CN107220655A (en) | 2017-09-29 |
Family
ID=59928104
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610168622.4A Pending CN107220655A (en) | 2016-03-22 | 2016-03-22 | A kind of hand-written, printed text sorting technique based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107220655A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108109124A (en) * | 2017-12-27 | 2018-06-01 | 北京诸葛找房信息技术有限公司 | Indefinite position picture watermark restorative procedure based on deep learning |
CN108364036A (en) * | 2017-12-28 | 2018-08-03 | 顺丰科技有限公司 | A kind of modeling method, recognition methods, device, storage medium and equipment |
CN108364037A (en) * | 2017-12-28 | 2018-08-03 | 顺丰科技有限公司 | Method, system and the equipment of Handwritten Chinese Character Recognition |
CN109493400A (en) * | 2018-09-18 | 2019-03-19 | 平安科技(深圳)有限公司 | Handwriting samples generation method, device, computer equipment and storage medium |
CN109858521A (en) * | 2018-12-29 | 2019-06-07 | 国际竹藤中心 | A kind of bamboo category identification method based on artificial intelligence deep learning |
CN110598691A (en) * | 2019-08-01 | 2019-12-20 | 广东工业大学 | Medicine character label identification method based on improved multilayer perceptron |
CN110991439A (en) * | 2019-12-09 | 2020-04-10 | 南京红松信息技术有限公司 | Method for extracting handwritten characters based on pixel-level multi-feature joint classification |
CN112862024A (en) * | 2021-04-28 | 2021-05-28 | 明品云(北京)数据科技有限公司 | Text recognition method and system |
CN112927254A (en) * | 2021-02-26 | 2021-06-08 | 华南理工大学 | Single word tombstone image binarization method, system, device and storage medium |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1147652A (en) * | 1995-06-30 | 1997-04-16 | 财团法人工业技术研究院 | Construction method of data base for writing identifying system |
CN1438604A (en) * | 2002-12-23 | 2003-08-27 | 北京邮电大学 | Character written-form judgement apparatus and method based on Bayes classification device |
CN1538342A (en) * | 2003-02-19 | 2004-10-20 | | Process using several images for optical recognition of imail
US20060124727A1 (en) * | 2004-12-10 | 2006-06-15 | Nikolay Kotovich | System and method for check fraud detection using signature validation |
US20070065003A1 (en) * | 2005-09-21 | 2007-03-22 | Lockheed Martin Corporation | Real-time recognition of mixed source text |
CN101414378A (en) * | 2008-11-24 | 2009-04-22 | 罗向阳 | Hidden blind detection method for image information with selective characteristic dimensionality |
CN101460960A (en) * | 2006-05-31 | 2009-06-17 | 微软公司 | Combiner for improving handwriting recognition |
CN102156876A (en) * | 2011-03-31 | 2011-08-17 | 华中科技大学 | Symbol identification method based on hexadecimal conversion |
CN102944418A (en) * | 2012-12-11 | 2013-02-27 | 东南大学 | Wind turbine generator group blade fault diagnosis method |
CN103996057A (en) * | 2014-06-12 | 2014-08-20 | 武汉科技大学 | Real-time handwritten digital recognition method based on multi-feature fusion |
US8990132B2 (en) * | 2010-01-19 | 2015-03-24 | James Ting-Ho Lo | Artificial neural networks based on a low-order model of biological neural networks |
CN104834941A (en) * | 2015-05-19 | 2015-08-12 | 重庆大学 | Offline handwriting recognition method of sparse autoencoder based on computer input |
CN105005975A (en) * | 2015-07-08 | 2015-10-28 | 南京信息工程大学 | Image de-noising method based on anisotropic diffusion of image entropy and PCNN |
CN105247540A (en) * | 2013-06-09 | 2016-01-13 | 苹果公司 | Managing real-time handwriting recognition |
- 2016-03-22 CN CN201610168622.4A patent/CN107220655A/en active Pending
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1147652A (en) * | 1995-06-30 | 1997-04-16 | 财团法人工业技术研究院 | Construction method of data base for writing identifying system |
CN1438604A (en) * | 2002-12-23 | 2003-08-27 | 北京邮电大学 | Character written-form judgement apparatus and method based on Bayes classification device |
CN1538342A (en) * | 2003-02-19 | 2004-10-20 | | Process using several images for optical recognition of imail
US20060124727A1 (en) * | 2004-12-10 | 2006-06-15 | Nikolay Kotovich | System and method for check fraud detection using signature validation |
US20070065003A1 (en) * | 2005-09-21 | 2007-03-22 | Lockheed Martin Corporation | Real-time recognition of mixed source text |
CN101460960A (en) * | 2006-05-31 | 2009-06-17 | 微软公司 | Combiner for improving handwriting recognition |
CN101414378A (en) * | 2008-11-24 | 2009-04-22 | 罗向阳 | Hidden blind detection method for image information with selective characteristic dimensionality |
US8990132B2 (en) * | 2010-01-19 | 2015-03-24 | James Ting-Ho Lo | Artificial neural networks based on a low-order model of biological neural networks |
CN102156876A (en) * | 2011-03-31 | 2011-08-17 | 华中科技大学 | Symbol identification method based on hexadecimal conversion |
CN102944418A (en) * | 2012-12-11 | 2013-02-27 | 东南大学 | Wind turbine generator group blade fault diagnosis method |
CN105247540A (en) * | 2013-06-09 | 2016-01-13 | 苹果公司 | Managing real-time handwriting recognition |
CN103996057A (en) * | 2014-06-12 | 2014-08-20 | 武汉科技大学 | Real-time handwritten digital recognition method based on multi-feature fusion |
CN104834941A (en) * | 2015-05-19 | 2015-08-12 | 重庆大学 | Offline handwriting recognition method of sparse autoencoder based on computer input |
CN105005975A (en) * | 2015-07-08 | 2015-10-28 | 南京信息工程大学 | Image de-noising method based on anisotropic diffusion of image entropy and PCNN |
Non-Patent Citations (1)
Title |
---|
丁小刚: "Research on the Application of BP Neural Networks and Convolutional Neural Networks in Character Recognition", 《万方数据知识服务平台》 (Wanfang Data Knowledge Service Platform) * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108109124A (en) * | 2017-12-27 | 2018-06-01 | 北京诸葛找房信息技术有限公司 | Indefinite position picture watermark restorative procedure based on deep learning |
CN108364036A (en) * | 2017-12-28 | 2018-08-03 | 顺丰科技有限公司 | A kind of modeling method, recognition methods, device, storage medium and equipment |
CN108364037A (en) * | 2017-12-28 | 2018-08-03 | 顺丰科技有限公司 | Method, system and the equipment of Handwritten Chinese Character Recognition |
CN109493400A (en) * | 2018-09-18 | 2019-03-19 | 平安科技(深圳)有限公司 | Handwriting samples generation method, device, computer equipment and storage medium |
CN109493400B (en) * | 2018-09-18 | 2024-01-19 | 平安科技(深圳)有限公司 | Handwriting sample generation method, device, computer equipment and storage medium |
CN109858521A (en) * | 2018-12-29 | 2019-06-07 | 国际竹藤中心 | A kind of bamboo category identification method based on artificial intelligence deep learning |
CN110598691A (en) * | 2019-08-01 | 2019-12-20 | 广东工业大学 | Medicine character label identification method based on improved multilayer perceptron |
CN110598691B (en) * | 2019-08-01 | 2023-05-02 | 广东工业大学 | Drug character label identification method based on improved multilayer perceptron |
CN110991439A (en) * | 2019-12-09 | 2020-04-10 | 南京红松信息技术有限公司 | Method for extracting handwritten characters based on pixel-level multi-feature joint classification |
CN112927254A (en) * | 2021-02-26 | 2021-06-08 | 华南理工大学 | Single word tombstone image binarization method, system, device and storage medium |
CN112862024A (en) * | 2021-04-28 | 2021-05-28 | 明品云(北京)数据科技有限公司 | Text recognition method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107220655A (en) | A kind of hand-written, printed text sorting technique based on deep learning | |
CN109800754B (en) | Ancient font classification method based on convolutional neural network | |
CN107316307B (en) | Automatic segmentation method of traditional Chinese medicine tongue image based on deep convolutional neural network | |
CN112734775B (en) | Image labeling, image semantic segmentation and model training methods and devices | |
CN104834922B (en) | Gesture identification method based on hybrid neural networks | |
CN111126386B (en) | Sequence domain adaptation method based on countermeasure learning in scene text recognition | |
CN107844740A (en) | A kind of offline handwriting, printing Chinese character recognition methods and system | |
CN107220641B (en) | Multi-language text classification method based on deep learning | |
CN111652332B (en) | Deep learning handwritten Chinese character recognition method and system based on two classifications | |
Tsai | Recognizing handwritten Japanese characters using deep convolutional neural networks | |
Banumathi et al. | Handwritten Tamil character recognition using artificial neural networks | |
Srihari et al. | Role of automation in the examination of handwritten items | |
CN113128442A (en) | Chinese character calligraphy style identification method and scoring method based on convolutional neural network | |
CN109710804B (en) | Teaching video image knowledge point dimension reduction analysis method | |
CN112069900A (en) | Bill character recognition method and system based on convolutional neural network | |
CN107958219A (en) | Image scene classification method based on multi-model and Analysis On Multi-scale Features | |
CN110414513A (en) | Vision significance detection method based on semantically enhancement convolutional neural networks | |
Noor et al. | Handwritten bangla numeral recognition using ensembling of convolutional neural network | |
Song et al. | Occluded offline handwritten Chinese character inpainting via generative adversarial network and self-attention mechanism | |
Zhuang et al. | A handwritten Chinese character recognition based on convolutional neural network and median filtering | |
Mariyathas et al. | Sinhala handwritten character recognition using convolutional neural network | |
Gandhi et al. | An attempt to recognize handwritten Tamil character using Kohonen SOM | |
CN105809200A (en) | Biologically-inspired image meaning information autonomous extraction method and device | |
CN105844299B (en) | A kind of image classification method based on bag of words | |
Basha et al. | A novel approach for optical character recognition (OCR) of handwritten Telugu alphabets using convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170929 |