CN104484684B - Handwritten character recognition method and system - Google Patents

Handwritten character recognition method and system

Info

Publication number
CN104484684B
CN104484684B
Authority
CN
China
Prior art keywords
sample
training sample
autoencoder
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510001954.9A
Other languages
Chinese (zh)
Other versions
CN104484684A (en)
Inventor
张莉
鲁亚平
王邦军
杨季文
张召
李凡长
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ren Tuo Data Technology Shanghai Co ltd
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University
Priority to CN201510001954.9A
Publication of CN104484684A
Application granted
Publication of CN104484684B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/192Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194References adjustable by an adaptive method, e.g. learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a handwritten character recognition method and system. The method is as follows: each training sample in a training sample set is processed with an autoencoder with a smoothed L1 norm to obtain a corresponding target training sample; the target training samples, together with the sample labels in the training sample set, form a target training sample set; the objective function of the autoencoder with the smoothed L1 norm carries a sparsity penalty term, and the sparsity penalty term is the smoothed L1 norm. A classifier is then trained with the target training samples to obtain a target classifier. The sample to be predicted is processed with the autoencoder with the smoothed L1 norm to obtain a target sample to be predicted, which is finally input into the target classifier to determine the class of the sample to be predicted. By introducing the smoothed L1 norm into the autoencoder as a new sparsity penalty term in place of the usual KL divergence, the scheme of this application obtains more discriminative features, so that the final handwriting recognition rate is higher.

Description

Handwritten character recognition method and system
Technical field
This application relates to the field of pattern recognition, and more specifically to a handwritten character recognition method and system.
Background art
Handwritten digit recognition has far-reaching application demand in real life (for example in the postal, banking, and e-commerce fields) and has long been a research hotspot in pattern recognition. In recent years, with the rapid development of computer technology and image processing technology, many methods for handwritten digit recognition have been proposed, such as algorithms based on stroke features, on k-nearest neighbors, on support vector machines, and on neural networks. However, because handwritten digits vary greatly from person to person, the recognition performance of these algorithms is still not ideal. Research on efficient handwritten digit recognition therefore remains an important direction.
Artificial neural networks provide a very robust method for approximating real-valued, discrete-valued, or vector-valued target functions. An autoencoder is a three-layer neural network consisting of an input layer, a hidden layer, and an output layer. By minimizing the reconstruction error of the input data, an autoencoder learns the statistical structure inside the data and thereby obtains more discriminative features. Professor Andrew Ng of Stanford University successfully achieved sparse coding of data by adding a KL-divergence regularization term to the autoencoder objective function to penalize large activations, and learned good features in this way. However, the KL divergence has limited ability to encode data sparsely, so the resulting features still show certain limitations for handwritten digit recognition.
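For reference, the conventional penalty that this application replaces — the KL-divergence sparsity term of Ng's sparse autoencoder — can be sketched as follows; ρ is the target average activation and ρ̂_j the observed average activation of hidden unit j (ρ = 0.05 is a typical illustrative value, not taken from this patent):

```python
import numpy as np

def kl_penalty(rho_hat, rho=0.05):
    """Sparsity penalty sum_j KL(rho || rho_hat_j) of a conventional sparse
    autoencoder; rho_hat is the vector of average hidden-unit activations."""
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
```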
Summary of the invention
In view of this, this application provides a handwritten character recognition method and system, to solve the problem that existing handwriting recognition methods have poor recognition performance.
To achieve the above goal, the proposed scheme is as follows:
A handwritten character recognition method, including:
processing each training sample in a training sample set with an autoencoder with a smoothed L1 norm to obtain a corresponding target training sample, where the target training samples and the sample labels in the training sample set form a target training sample set, the objective function of the autoencoder with the smoothed L1 norm carries a sparsity penalty term, and the sparsity penalty term is the smoothed L1 norm;
training a classifier with the target training sample set to obtain a target classifier;
processing the sample to be predicted with the autoencoder with the smoothed L1 norm to obtain a target sample to be predicted;
inputting the target sample to be predicted into the target classifier to determine the class of the sample to be predicted.
Preferably, processing each training sample in the training sample set with the autoencoder with the smoothed L1 norm to obtain the corresponding target training sample includes:
defining the training sample set as {(x^{(i)}, y^{(i)})}_{i=1..m}, x^{(i)} ∈ R^d,
where y^{(i)} is the sample label corresponding to training sample x^{(i)}, m is the number of training samples, and d is the dimension of a training sample;
defining the hypothesis function of the autoencoder as
h_{W,b}(x^{(i)})
where W and b denote the weights and the biases of the autoencoder, respectively;
denoting the output of the j-th hidden unit for the i-th training sample as a_j^{(i)}, with n hidden units in total;
determining the objective function of the autoencoder with the smoothed L1 norm as
J(W, b) = (1/m) Σ_{i=1}^{m} (1/2) ‖h_{W,b}(x^{(i)}) − x^{(i)}‖² + (λ/2) ‖W‖² + β Σ_{j=1}^{n} S(ρ̂_j)
where the first term is the reconstruction term, the second term is the weight decay term with weight decay coefficient λ, the third term is the sparsity penalty term with weight β, ρ̂_j = (1/m) Σ_{i=1}^{m} a_j^{(i)} is the average activation of the j-th hidden unit, and S(·) denotes the smoothed L1 norm:
S(x) = sqrt(x² + μ²) − μ
where μ > 0 is a preset parameter;
solving for the parameters W_opt and b_opt that minimize the objective function;
substituting W_opt and b_opt into the hypothesis function of the autoencoder to obtain the target hypothesis function;
substituting each training sample x^{(i)} of the training sample set into the target hypothesis function to obtain the target training sample a^{(i)}.
Preferably, the parameters W_opt and b_opt that minimize the objective function are computed with the back-propagation algorithm.
Preferably, processing the sample to be predicted with the autoencoder with the smoothed L1 norm to obtain the target sample to be predicted includes:
substituting the sample to be predicted into the target hypothesis function to obtain the target sample to be predicted.
Preferably, the classifier is a Softmax classifier.
A handwriting recognition system, including:
a training sample processing unit, configured to process each training sample in a training sample set with an autoencoder with a smoothed L1 norm to obtain a corresponding target training sample, where the target training samples and the sample labels in the training sample set form a target training sample set, the objective function of the autoencoder with the smoothed L1 norm carries a sparsity penalty term, and the sparsity penalty term is the smoothed L1 norm;
a classifier training unit, configured to train a classifier with the target training sample set to obtain a target classifier;
a to-be-predicted sample processing unit, configured to process the sample to be predicted with the autoencoder with the smoothed L1 norm to obtain a target sample to be predicted;
a class determination unit, configured to input the target sample to be predicted into the target classifier to determine the class of the sample to be predicted.
Preferably, the training sample processing unit includes:
a parameter definition unit, configured to define the training sample set as {(x^{(i)}, y^{(i)})}_{i=1..m}, x^{(i)} ∈ R^d, where y^{(i)} is the sample label corresponding to training sample x^{(i)}, m is the number of training samples, and d is the dimension of a training sample; to define the hypothesis function of the autoencoder as h_{W,b}(x^{(i)}), where W and b denote the weights and the biases of the autoencoder, respectively; and to denote the output of the j-th hidden unit for the i-th training sample as a_j^{(i)}, with n hidden units in total;
an objective function determination unit, configured to determine the objective function of the autoencoder with the smoothed L1 norm as
J(W, b) = (1/m) Σ_{i=1}^{m} (1/2) ‖h_{W,b}(x^{(i)}) − x^{(i)}‖² + (λ/2) ‖W‖² + β Σ_{j=1}^{n} S(ρ̂_j)
where the first term is the reconstruction term, the second term is the weight decay term with weight decay coefficient λ, the third term is the sparsity penalty term with weight β, ρ̂_j = (1/m) Σ_{i=1}^{m} a_j^{(i)} is the average activation of the j-th hidden unit, and S(·) denotes the smoothed L1 norm:
S(x) = sqrt(x² + μ²) − μ
where μ > 0 is a preset parameter;
an objective function solving unit, configured to solve for the parameters W_opt and b_opt that minimize the objective function;
a hypothesis function determination unit, configured to substitute W_opt and b_opt into the hypothesis function of the autoencoder to obtain the target hypothesis function;
a target training sample acquisition unit, configured to substitute each training sample x^{(i)} of the training sample set into the target hypothesis function to obtain the target training sample a^{(i)}.
Preferably, the parameters W_opt and b_opt that minimize the objective function are computed with the back-propagation algorithm.
Preferably, the to-be-predicted sample processing unit includes:
a first to-be-predicted sample processing subunit, configured to substitute the sample to be predicted into the target hypothesis function to obtain the target sample to be predicted.
Preferably, the classifier is a Softmax classifier.
It can be seen from the above technical solutions that, in the handwritten character recognition method provided by the embodiments of this application, each training sample in the training sample set is first processed with an autoencoder with a smoothed L1 norm to obtain a corresponding target training sample; the target training samples and the sample labels in the training sample set form a target training sample set, the objective function of the autoencoder with the smoothed L1 norm carries a sparsity penalty term, and the sparsity penalty term is the smoothed L1 norm. A classifier is then trained with the target training samples to obtain a target classifier, and the sample to be predicted is processed with the autoencoder with the smoothed L1 norm to obtain a target sample to be predicted, which is finally input into the target classifier to determine the class of the sample to be predicted. By introducing the smoothed L1 norm into the autoencoder as a new sparsity penalty term in place of the usual KL divergence, the scheme of this application obtains more discriminative features, so that the final handwriting recognition rate is higher.
Description of the drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description are only embodiments of this application; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a handwritten character recognition method disclosed in an embodiment of this application;
Fig. 2 is a schematic structural diagram of a handwriting recognition system disclosed in an embodiment of this application.
Detailed description of embodiments
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings in the embodiments of this application. Apparently, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
The gist of this application is to introduce the smooth function proposed by Beck and Teboulle for approximating the L1 norm into the autoencoder as a new sparsity penalty term in place of the usual KL divergence, and to use the learned sparse features as new samples for training a classifier.
Referring to Fig. 1, Fig. 1 is a flowchart of a handwritten character recognition method disclosed in an embodiment of this application.
As shown in Fig. 1, the method includes:
Step S100: processing each training sample in a training sample set with an autoencoder with a smoothed L1 norm to obtain a corresponding target training sample;
Specifically, the objective function of the autoencoder with the smoothed L1 norm carries a sparsity penalty term, and this sparsity penalty term is the smoothed L1 norm. In this embodiment the smoothed L1 norm replaces the existing KL-divergence sparsity penalty term.
The target training samples and the sample labels in the training sample set form a target training sample set. That is, after the original training samples are transformed, each resulting target training sample keeps the same sample label as before, and the target training samples together with the sample labels constitute the target training sample set.
Step S110: training a classifier with the target training sample set to obtain a target classifier;
Step S120: processing the sample to be predicted with the autoencoder with the smoothed L1 norm to obtain a target sample to be predicted;
Specifically, the sample to be predicted must undergo the same processing to yield the target sample to be predicted.
Step S130: inputting the target sample to be predicted into the target classifier to determine the class of the sample to be predicted.
The target classifier predicts the class of the target sample to be predicted, namely its sample label.
In the handwritten character recognition method provided by the embodiments of this application, each training sample in the training sample set is first processed with an autoencoder with a smoothed L1 norm to obtain a corresponding target training sample; the target training samples and the sample labels in the training sample set form a target training sample set, the objective function of the autoencoder with the smoothed L1 norm carries a sparsity penalty term, and the sparsity penalty term is the smoothed L1 norm. A classifier is then trained with the target training samples to obtain a target classifier, and the sample to be predicted is processed with the autoencoder with the smoothed L1 norm to obtain a target sample to be predicted, which is finally input into the target classifier to determine the class of the sample to be predicted. By introducing the smoothed L1 norm into the autoencoder as a new sparsity penalty term in place of the usual KL divergence, the scheme of this application obtains more discriminative features, so that the final handwriting recognition rate is higher.
Next, we describe the process of handling the training sample set to obtain the target training sample set.
First, the training sample set is defined as {(x^{(i)}, y^{(i)})}_{i=1..m}, x^{(i)} ∈ R^d,
where y^{(i)} is the sample label corresponding to training sample x^{(i)}, m is the number of training samples, and d is the dimension of a training sample.
The hypothesis function of the autoencoder is h_{W,b}(x^{(i)}),
where W and b denote the weights and the biases of the autoencoder, respectively.
The output of the j-th hidden unit for the i-th training sample is denoted a_j^{(i)}, and the number of hidden units is n.
The objective function of the autoencoder with the smoothed L1 norm is determined as
J(W, b) = (1/m) Σ_{i=1}^{m} (1/2) ‖h_{W,b}(x^{(i)}) − x^{(i)}‖² + (λ/2) ‖W‖² + β Σ_{j=1}^{n} S(ρ̂_j)
where the first term is the reconstruction term, the second term is the weight decay term with weight decay coefficient λ, the third term is the sparsity penalty term with weight β, ρ̂_j = (1/m) Σ_{i=1}^{m} a_j^{(i)} is the average activation of the j-th hidden unit, and S(·) denotes the smoothed L1 norm:
S(x) = sqrt(x² + μ²) − μ
where μ > 0 is a preset parameter, and μ controls how closely S(x) approximates the L1 norm.
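As a minimal sketch, assuming the Beck–Teboulle smoothing S(x) = sqrt(x² + μ²) − μ reconstructed above, the penalty and its derivative (which back-propagation needs later) can be written as follows; the function names are illustrative:

```python
import numpy as np

def smooth_l1(x, mu=0.9):
    """Smoothed L1 norm S(x) = sqrt(x^2 + mu^2) - mu, applied element-wise.

    As mu -> 0, S(x) -> |x|; unlike |x|, S is differentiable at 0,
    and mu controls how closely S approximates the L1 norm.
    """
    return np.sqrt(x ** 2 + mu ** 2) - mu

def smooth_l1_grad(x, mu=0.9):
    """Derivative S'(x) = x / sqrt(x^2 + mu^2), a smooth surrogate for sign(x)."""
    return x / np.sqrt(x ** 2 + mu ** 2)
```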
Then, the parameters W_opt and b_opt that minimize the objective function are solved for, and W_opt and b_opt are substituted into the hypothesis function h_{W,b}(x^{(i)}) of the autoencoder to obtain the target hypothesis function. Finally, each training sample x^{(i)} of the training sample set is substituted into the target hypothesis function to obtain the target training sample a^{(i)}. The target training sample a^{(i)} has the same sample label as the training sample x^{(i)}, so the target training sample set is determined as {(a^{(i)}, y^{(i)})}_{i=1..m}.
It should be understood that, when solving for the optimal solution of the objective function, the back-propagation algorithm can be used.
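Building on the sketch above, the objective and its back-propagation gradients might look as follows. This is a sketch under stated assumptions — sigmoid activations, the sparsity penalty applied to the per-unit average activations ρ̂_j (as with the KL term it replaces), and full-batch gradients; the patent text does not pin these details down:

```python
import numpy as np  # smooth_l1 / smooth_l1_grad are from the previous sketch

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_and_grads(W1, b1, W2, b2, X, lam=1e-3, beta=1.0, mu=0.9):
    """Objective J(W,b) and its gradients for the smoothed-L1 sparse autoencoder.

    X is a (d, m) matrix with one training sample per column;
    W1 is (n, d) (encoder) and W2 is (d, n) (decoder).
    """
    m = X.shape[1]
    A = sigmoid(W1 @ X + b1[:, None])        # hidden activations a_j^(i), (n, m)
    X_hat = sigmoid(W2 @ A + b2[:, None])    # reconstruction h_{W,b}(x^(i))
    rho_hat = A.mean(axis=1)                 # average activation of each hidden unit

    J = (0.5 / m) * np.sum((X_hat - X) ** 2) \
        + 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2)) \
        + beta * np.sum(smooth_l1(rho_hat, mu))

    # Back-propagation: output-layer error of the reconstruction term ...
    d3 = ((X_hat - X) / m) * X_hat * (1.0 - X_hat)
    # ... and hidden-layer error; the smooth-L1 term contributes
    # beta * S'(rho_hat_j) / m to the gradient w.r.t. each a_j^(i).
    d2 = (W2.T @ d3 + (beta / m) * smooth_l1_grad(rho_hat, mu)[:, None]) \
        * A * (1.0 - A)

    gW1 = d2 @ X.T + lam * W1
    gb1 = d2.sum(axis=1)
    gW2 = d3 @ A.T + lam * W2
    gb2 = d3.sum(axis=1)
    return J, (gW1, gb1, gW2, gb2)
```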
Further, when training the classifier with the target training sample set as described above, a Softmax classifier can be used.
After the classifier is trained, the sample to be predicted x must also be processed, namely substituted into the above target hypothesis function, to determine the corresponding target sample to be predicted a. The target sample to be predicted a is then input into the trained target classifier to obtain the predicted class of sample x.
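To mirror steps S110–S130, a hedged end-to-end sketch: here the hidden activations are read off as the learned features a^{(i)}, and scikit-learn's LogisticRegression (multinomial logistic regression with the default lbfgs solver) stands in for the Softmax classifier; X_train, y_train, and X_test are assumed to be MNIST arrays prepared elsewhere, stored column-wise, and W1, b1 come from the trained autoencoder:

```python
from sklearn.linear_model import LogisticRegression

def encode(W1, b1, X):
    """Target hypothesis function restricted to the hidden layer: maps raw
    samples (one per column) to the learned sparse features a^(i)."""
    return sigmoid(W1 @ X + b1[:, None])

clf = LogisticRegression(max_iter=1000)      # multinomial logistic = Softmax
clf.fit(encode(W1, b1, X_train).T, y_train)  # sklearn expects samples as rows
y_pred = clf.predict(encode(W1, b1, X_test).T)
```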
To further confirm the superiority of the method of this application, a specific example is given below.
We consider MNIST handwritten digit recognition. The data set contains 60000 training samples and 10000 test samples, each of size 28*28 pixels, i.e. d = 784. In this experiment, all training samples are used to train the autoencoder with the smoothed L1 norm, i.e. m = 60000, and testing is performed on the entire test set.
Parameter settings:
The autoencoder with the smoothed L1 norm has 14 × 14 hidden units, i.e. n = 196; the weight decay coefficient is λ = 1e-3; the weight of the sparsity penalty is β = 1; the sparsity parameter is μ = 0.9. In the experiment, the autoencoder model and the Softmax classifier model are optimized with the back-propagation algorithm.
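Wiring these exact hyperparameters into a minimal full-batch gradient-descent loop gives the sketch below; the initialization range, learning rate, and iteration count are assumptions (the patent specifies back-propagation but not an update schedule), cost_and_grads is from the earlier sketch, and X_train is the 784 × 60000 MNIST training matrix:

```python
import numpy as np

d, n = 784, 196            # input dimension and hidden units (14 x 14)
lam, beta, mu = 1e-3, 1.0, 0.9

rng = np.random.default_rng(0)
r = np.sqrt(6.0 / (d + n + 1))     # common symmetric init range (assumed)
W1 = rng.uniform(-r, r, (n, d)); b1 = np.zeros(n)
W2 = rng.uniform(-r, r, (d, n)); b2 = np.zeros(d)

lr = 0.3                           # assumed learning rate
for step in range(400):            # assumed iteration budget
    J, (gW1, gb1, gW2, gb2) = cost_and_grads(W1, b1, W2, b2, X_train,
                                             lam, beta, mu)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```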
Results:
For comparison, on the same training set and test set, we also trained and tested an autoencoder without a sparsity term and an autoencoder with a KL-divergence penalty term. In addition, the k-nearest-neighbor algorithm was used with the same training set to classify the test set. The recognition results of the experiment are shown in Table 1, which compares handwritten digit classification performance (recognition rate, %).
k-nearest neighbor (k=3)    Without sparsity term    With KL-divergence sparsity term    This application
97.08                       93.03                    96.96                               97.21
Table 1
Table 1 shows that introducing a sparsity term allows more discriminative features to be learned, and that the smoothed L1-norm penalty term used in this application performs better than the usual KL-divergence penalty term.
The handwriting recognition system provided by the embodiments of this application is described below; the handwriting recognition system described below and the handwritten character recognition method described above may be mutually referenced.
Referring to Fig. 2, Fig. 2 is a schematic structural diagram of a handwriting recognition system disclosed in an embodiment of this application.
As shown in Fig. 2, the system includes:
a training sample processing unit 21, configured to process each training sample in a training sample set with an autoencoder with a smoothed L1 norm to obtain a corresponding target training sample, where the target training samples and the sample labels in the training sample set form a target training sample set, the objective function of the autoencoder with the smoothed L1 norm carries a sparsity penalty term, and the sparsity penalty term is the smoothed L1 norm;
a classifier training unit 22, configured to train a classifier with the target training sample set to obtain a target classifier;
Specifically, the classifier used here can be a Softmax classifier.
a to-be-predicted sample processing unit 23, configured to process the sample to be predicted with the autoencoder with the smoothed L1 norm to obtain a target sample to be predicted;
a class determination unit 24, configured to input the target sample to be predicted into the target classifier to determine the class of the sample to be predicted.
Optionally, the training sample processing unit 21 may include:
a parameter definition unit, configured to define the training sample set as {(x^{(i)}, y^{(i)})}_{i=1..m}, x^{(i)} ∈ R^d, where y^{(i)} is the sample label corresponding to training sample x^{(i)}, m is the number of training samples, and d is the dimension of a training sample; to define the hypothesis function of the autoencoder as h_{W,b}(x^{(i)}), where W and b denote the weights and the biases of the autoencoder, respectively; and to denote the output of the j-th hidden unit for the i-th training sample as a_j^{(i)}, with n hidden units in total;
an objective function determination unit, configured to determine the objective function of the autoencoder with the smoothed L1 norm as
J(W, b) = (1/m) Σ_{i=1}^{m} (1/2) ‖h_{W,b}(x^{(i)}) − x^{(i)}‖² + (λ/2) ‖W‖² + β Σ_{j=1}^{n} S(ρ̂_j)
where the first term is the reconstruction term, the second term is the weight decay term with weight decay coefficient λ, the third term is the sparsity penalty term with weight β, ρ̂_j = (1/m) Σ_{i=1}^{m} a_j^{(i)} is the average activation of the j-th hidden unit, and S(·) denotes the smoothed L1 norm:
S(x) = sqrt(x² + μ²) − μ
where μ > 0 is a preset parameter;
an objective function solving unit, configured to solve for the parameters W_opt and b_opt that minimize the objective function;
a hypothesis function determination unit, configured to substitute W_opt and b_opt into the hypothesis function of the autoencoder to obtain the target hypothesis function;
a target training sample acquisition unit, configured to substitute each training sample x^{(i)} of the training sample set into the target hypothesis function to obtain the target training sample a^{(i)}.
Optionally, the to-be-predicted sample processing unit 23 may include:
a first to-be-predicted sample processing subunit, configured to substitute the sample to be predicted into the target hypothesis function to obtain the target sample to be predicted.
Optionally, when solving for the parameters W_opt and b_opt that minimize the objective function, the back-propagation algorithm can be used.
In the handwriting recognition system provided by the embodiments of this application, each training sample in the training sample set is first processed with an autoencoder with a smoothed L1 norm to obtain a corresponding target training sample; the target training samples and the sample labels in the training sample set form a target training sample set, the objective function of the autoencoder with the smoothed L1 norm carries a sparsity penalty term, and the sparsity penalty term is the smoothed L1 norm. A classifier is then trained with the target training samples to obtain a target classifier, and the sample to be predicted is processed with the autoencoder with the smoothed L1 norm to obtain a target sample to be predicted, which is finally input into the target classifier to determine the class of the sample to be predicted. By introducing the smoothed L1 norm into the autoencoder as a new sparsity penalty term in place of the usual KL divergence, the scheme of this application obtains more discriminative features, so that the final handwriting recognition rate is higher.
Finally, it should be noted that relational terms such as first and second are used herein only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further restriction, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be mutually referenced.
The above description of the disclosed embodiments enables those skilled in the art to implement or use this application. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of this application. Therefore, this application is not limited to the embodiments shown herein, but accords with the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A handwritten character recognition method, characterized by comprising:
processing each training sample in a training sample set with an autoencoder with a smoothed L1 norm to obtain a corresponding target training sample, where the target training samples and the sample labels in the training sample set form a target training sample set, the objective function of the autoencoder with the smoothed L1 norm carries a sparsity penalty term, and the sparsity penalty term is the smoothed L1 norm;
training a classifier with the target training sample set to obtain a target classifier;
processing the sample to be predicted with the autoencoder with the smoothed L1 norm to obtain a target sample to be predicted;
inputting the target sample to be predicted into the target classifier to determine the class of the sample to be predicted;
wherein processing each training sample in the training sample set with the autoencoder with the smoothed L1 norm to obtain the corresponding target training sample includes:
defining the training sample set as {(x^{(i)}, y^{(i)})}_{i=1..m}, x^{(i)} ∈ R^d,
where y^{(i)} is the sample label corresponding to training sample x^{(i)}, m is the number of training samples, and d is the dimension of a training sample;
defining the hypothesis function of the autoencoder as
h_{W,b}(x^{(i)})
where W and b denote the weights and the biases of the autoencoder, respectively;
denoting the output of the j-th hidden unit for the i-th training sample as a_j^{(i)}, with n hidden units in total;
determining the objective function of the autoencoder with the smoothed L1 norm as
J(W, b) = (1/m) Σ_{i=1}^{m} (1/2) ‖h_{W,b}(x^{(i)}) − x^{(i)}‖² + (λ/2) ‖W‖² + β Σ_{j=1}^{n} S(ρ̂_j)
where the first term is the reconstruction term, the second term is the weight decay term with weight decay coefficient λ, the third term is the sparsity penalty term with weight β, ρ̂_j = (1/m) Σ_{i=1}^{m} a_j^{(i)} is the average activation of the j-th hidden unit, and S(·) denotes the smoothed L1 norm:
S(x) = sqrt(x² + μ²) − μ
where μ > 0 is a preset parameter;
solving for the parameters W_opt and b_opt that minimize the objective function;
substituting W_opt and b_opt into the hypothesis function of the autoencoder to obtain the target hypothesis function;
substituting each training sample x^{(i)} of the training sample set into the target hypothesis function to obtain the target training sample a^{(i)}.
2. The method according to claim 1, characterized in that the parameters W_opt and b_opt that minimize the objective function are computed with the back-propagation algorithm.
3. The method according to claim 1, characterized in that processing the sample to be predicted with the autoencoder with the smoothed L1 norm to obtain the target sample to be predicted includes:
substituting the sample to be predicted into the target hypothesis function to obtain the target sample to be predicted.
4. The method according to claim 1, characterized in that the classifier is a Softmax classifier.
5. A handwriting recognition system, characterized by comprising:
a training sample processing unit, configured to process each training sample in a training sample set with an autoencoder with a smoothed L1 norm to obtain a corresponding target training sample, where the target training samples and the sample labels in the training sample set form a target training sample set, the objective function of the autoencoder with the smoothed L1 norm carries a sparsity penalty term, and the sparsity penalty term is the smoothed L1 norm;
a classifier training unit, configured to train a classifier with the target training sample set to obtain a target classifier;
a to-be-predicted sample processing unit, configured to process the sample to be predicted with the autoencoder with the smoothed L1 norm to obtain a target sample to be predicted;
a class determination unit, configured to input the target sample to be predicted into the target classifier to determine the class of the sample to be predicted;
wherein the training sample processing unit includes:
a parameter definition unit, configured to define the training sample set as {(x^{(i)}, y^{(i)})}_{i=1..m}, x^{(i)} ∈ R^d, where y^{(i)} is the sample label corresponding to training sample x^{(i)}, m is the number of training samples, and d is the dimension of a training sample; to define the hypothesis function of the autoencoder as h_{W,b}(x^{(i)}), where W and b denote the weights and the biases of the autoencoder, respectively; and to denote the output of the j-th hidden unit for the i-th training sample as a_j^{(i)}, with n hidden units in total;
an objective function determination unit, configured to determine the objective function of the autoencoder with the smoothed L1 norm as
J(W, b) = (1/m) Σ_{i=1}^{m} (1/2) ‖h_{W,b}(x^{(i)}) − x^{(i)}‖² + (λ/2) ‖W‖² + β Σ_{j=1}^{n} S(ρ̂_j)
where the first term is the reconstruction term, the second term is the weight decay term with weight decay coefficient λ, the third term is the sparsity penalty term with weight β, ρ̂_j = (1/m) Σ_{i=1}^{m} a_j^{(i)} is the average activation of the j-th hidden unit, and S(·) denotes the smoothed L1 norm:
S(x) = sqrt(x² + μ²) − μ
where μ > 0 is a preset parameter;
an objective function solving unit, configured to solve for the parameters W_opt and b_opt that minimize the objective function;
a hypothesis function determination unit, configured to substitute W_opt and b_opt into the hypothesis function of the autoencoder to obtain the target hypothesis function;
a target training sample acquisition unit, configured to substitute each training sample x^{(i)} of the training sample set into the target hypothesis function to obtain the target training sample a^{(i)}.
6. The system according to claim 5, characterized in that the parameters W_opt and b_opt that minimize the objective function are computed with the back-propagation algorithm.
7. The system according to claim 5, characterized in that the to-be-predicted sample processing unit includes:
a first to-be-predicted sample processing subunit, configured to substitute the sample to be predicted into the target hypothesis function to obtain the target sample to be predicted.
8. The system according to claim 5, characterized in that the classifier is a Softmax classifier.
CN201510001954.9A 2015-01-05 2015-01-05 Handwritten character recognition method and system Active CN104484684B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510001954.9A CN104484684B (en) 2015-01-05 2015-01-05 Handwritten character recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510001954.9A CN104484684B (en) 2015-01-05 2015-01-05 Handwritten character recognition method and system

Publications (2)

Publication Number Publication Date
CN104484684A CN104484684A (en) 2015-04-01
CN104484684B true CN104484684B (en) 2018-11-02

Family

ID=52759225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510001954.9A Active CN104484684B (en) 2015-01-05 2015-01-05 Handwritten character recognition method and system

Country Status (1)

Country Link
CN (1) CN104484684B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107391996B (en) * 2017-08-02 2021-01-26 广东工业大学 Identity verification method and device based on L1 norm neural network


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544392A (en) * 2013-10-23 2014-01-29 电子科技大学 Deep learning based medical gas identifying method
CN103778432A (en) * 2014-01-08 2014-05-07 南京邮电大学 Human being and vehicle classification method based on deep belief net
CN104077580A (en) * 2014-07-15 2014-10-01 中国科学院合肥物质科学研究院 Pest image automatic recognition method based on high-reliability network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于稀疏自编码深度神经网络的林火图像分类";王勇 等;《计算机工程与应用》;20141215;174页左边栏倒数第1段-右边栏第1段,175页左边栏,176页左边栏倒数第11行-倒数第8行,176页右边栏第2段 *

Also Published As

Publication number Publication date
CN104484684A (en) 2015-04-01

Similar Documents

Publication Publication Date Title
Lin et al. Forecasting concentrations of air pollutants by logarithm support vector regression with immune algorithms
CN100380396C Object detection apparatus, learning apparatus, object detection system, object detection method
CN103258214A Remote sensing image classification method based on image block active learning
CN1307579C Methods and apparatus for classifying text and for building a text classifier
CN105184298B An image classification method based on fast locality-constrained low-rank coding
Anil et al. Convolutional neural networks for the recognition of Malayalam characters
CN110288030A Image recognition method, device, and equipment based on a lightweight network model
CN108629367A A method for enhancing clothing attribute recognition accuracy based on a deep network
CN104200224A Method for removing valueless images based on deep convolutional neural networks
CN103699523A Product classification method and device
Katiyar et al. A hybrid recognition system for off-line handwritten characters
CN104834940A Medical image examination disease classification method based on support vector machine (SVM)
CN106778863A Warehouse goods recognition method based on Fisher discriminative dictionary learning
CN103426004B Model recognition method based on error-correcting output codes
CN103020122A Transfer learning method based on semi-supervised clustering
CN109635946A A clustering method combining deep neural networks and pairwise constraints
CN110069959A Face detection method, device, and user equipment
CN113159171B Fine-grained plant leaf image classification method based on adversarial learning
CN110472417A Malware opcode analysis method based on convolutional neural networks
CN106991049A A software defect prediction method and prediction system
CN105976390A Steel tube counting method combining support vector machine threshold statistics and blob detection
Li et al. Dating ancient paintings of Mogao Grottoes using deeply learnt visual codes
CN110414587A Deep convolutional neural network training method and system based on progressive learning
CN105893941B A facial expression recognition method based on regional images
Chen et al. A semisupervised deep learning framework for tropical cyclone intensity estimation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220310

Address after: 200000 Room 502, floor 5, No. 250, JIANGCHANG Third Road, Jing'an District, Shanghai

Patentee after: Ren Tuo data technology (Shanghai) Co.,Ltd.

Address before: 215123 No. 199 benevolence Road, Suzhou Industrial Park, Jiangsu, China

Patentee before: SOOCHOW University

TR01 Transfer of patent right