CN107292280A - Automatic seal font recognition method and recognition device - Google Patents

Automatic seal font recognition method and recognition device

Info

Publication number
CN107292280A
Authority
CN
China
Prior art keywords
seal
image
font
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710536850.7A
Other languages
Chinese (zh)
Inventor
吕洪凤
盛冬冬
冯军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhenguan Prosperity (beijing) Technology Co Ltd
Original Assignee
Zhenguan Prosperity (beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhenguan Prosperity (beijing) Technology Co Ltd
Priority to CN201710536850.7A
Publication of CN107292280A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/32 - Digital ink
    • G06V30/333 - Preprocessing; Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Character Discrimination (AREA)

Abstract

The invention discloses an automatic seal font recognition method and recognition device. Exploiting the strong learning capability of deep multilayer convolutional neural networks, the method passes a seal image through several convolutional layers and fully connected layers to classify the author of the seal font, so that the true author of the seal font can be identified directly in a single pass. Trained on image representations of a large number of seal fonts from known authors, the model achieves high speed and high accuracy in artwork seal font recognition. The method effectively solves automatic seal font recognition under complex backgrounds and can be widely applied to artwork seal font recognition scenarios, such as authentication of private collections, library collections, and auction items.

Description

Automatic seal font recognition method and recognition device
Technical field
The present invention relates to machine learning, computer vision, and pattern recognition, and more particularly to an automatic recognition method and recognition device for artwork seal fonts.
Background technology
A large number of artworks in circulation today identify their author by a seal. Traditional seal identification relies mainly on human experience, so recognition accuracy varies from person to person and recognition efficiency is very low. Against this background, an efficient and accurate method for automatically recognizing seal fonts is urgently needed.
The content of the invention
An object of the present invention is to provide an automatic seal font recognition method and recognition device that solve the problems of low efficiency and uncertain accuracy of seal font recognition in the prior art. The present invention describes seal font images with learned image representations and trains a matching model with a deep convolutional neural network, so that a seal image can be matched against the model to identify the true author of the artwork's font.
The technical scheme of the present invention is as follows:
An automatic seal font recognition method, the method comprising a training process and a recognition process;
wherein the training process includes:
feeding labeled seal font images of a predetermined size from a database into a constructed convolutional neural network, and passing them through several convolutional layers and fully connected layers to obtain a multi-dimensional image representation corresponding to the different authors, where the response of each dimension of the multi-dimensional image representation represents the probability that the seal font image belongs to the corresponding author, and comparing the multi-dimensional image representation with the true label to obtain a prediction error;
training the neural network with the backpropagation algorithm and stochastic gradient descent so as to reduce the prediction error, thereby obtaining a deep-learning-based deep convolutional neural network matching model;
the recognition process includes:
feeding a seal font image to be tested, of the predetermined size, into the deep convolutional neural network matching model to obtain a multi-dimensional vector output, where the magnitude of each dimension's response corresponds to the probability that the image belongs to the author represented by that dimension;
selecting the author represented by the dimension with the largest response as the seal font recognition result.
Preferably, the method further includes: normalizing the seal font images to the same pixel size to obtain seal font images of the predetermined size.
Preferably, the dimension of the image representation equals the number of authors involved in the training process.
Preferably, each convolutional layer of the convolutional neural network uses convolution kernels of size m*m to scan the input image with a predetermined stride according to the following formula, obtaining feature maps:
g(i, j) = f * h = Σ_{m,n} f(i+m, j+n) h(m, n)
where g(i, j) is the output feature map, f is the input image, and h(m, n) is the convolution kernel.
Preferably, the formula used by the backpropagation algorithm is:
∂E/∂b = (∂E/∂u)(∂u/∂b) = δ
where E is the error propagated back, b is the bias of each neuron, δ is the sensitivity with respect to the bias b of each neuron, and u denotes the neural network layer.
The formulas used in the stochastic gradient descent algorithm are:
w_i = w_i + Δw_i
δ = (t - o) · o · (1 - o)
Δw_i = δ · x_i
where w_i are the weights (multiple weights can be written together in matrix form), t is the expected output, o is the actual output of the perceptron, x_i is the output of the i-th neuron of the previous layer of the neural network, and i is the index of the neuron within that layer.
Correspondingly, the present invention also provides an automatic seal font recognition device, the device comprising a training apparatus and a recognition apparatus.
The training apparatus includes:
a feeding unit, which feeds labeled seal font images of a predetermined size from the database into the convolutional neural network; after several convolutional layers and fully connected layers, a multi-dimensional image representation corresponding to the different authors is obtained, where the response of each dimension of the multi-dimensional image representation represents the probability that the seal font image belongs to the corresponding author, and the multi-dimensional image representation is compared with the true label to obtain a prediction error;
a training unit, which trains the neural network with the backpropagation algorithm and stochastic gradient descent so as to reduce the prediction error, obtaining the deep-learning-based deep convolutional neural network matching model.
The recognition apparatus includes:
a recognition unit, which feeds a seal font image to be tested, of the predetermined size, into the deep convolutional neural network matching model and obtains a multi-dimensional vector output at the last fully connected layer, where the magnitude of each dimension's response corresponds to the probability that the image belongs to the author represented by that dimension; the recognition unit selects the author represented by the dimension with the largest response as the seal font recognition result.
Preferably, the device further includes: a normalization unit, which normalizes the seal font images to the same pixel size to obtain normalized seal font images.
The method and device of the present invention can efficiently and accurately realize automatic recognition of artwork seal fonts.
Those skilled in the art will appreciate that the objects and advantages achievable with the present invention are not limited to what is specifically described above, and the above and other objects of the present invention will be understood more clearly from the detailed description below.
Brief description of the drawings
With reference to the accompanying drawings, further objects, functions, and advantages of the present invention will be clarified by the following description of embodiments of the present invention, in which:
Fig. 1 is a schematic framework diagram of the deep-learning-based automatic artwork seal font recognition algorithm.
Fig. 2 is a schematic diagram of the model of the deep-learning-based automatic artwork recognition method.
Fig. 3 is a block diagram of the automatic seal font recognition device in one embodiment of the invention.
Embodiment
Preferred embodiments of the present invention are described in detail below; examples of these preferred embodiments are illustrated in the accompanying drawings. The embodiments of the present invention shown in and described with reference to the accompanying drawings are merely exemplary, and the technical spirit and principal operations of the invention are not limited to these embodiments.
It should also be noted here that, in order to avoid obscuring the present invention with unnecessary detail, only the structures and/or process steps closely related to the solution of the present invention are shown in the drawings, and other details of little relevance to the present invention are omitted.
The inventors found that deep learning has a very strong autonomous learning ability and a highly non-linear mapping capacity, which makes it possible to design complex, high-precision, high-speed automatic recognition models. The present invention therefore applies deep learning and realizes automatic recognition of seal fonts on the basis of deep learning. The automatic seal font recognition of the present invention is particularly suitable for automatic recognition of artwork seal fonts.
The deep-learning-based automatic artwork seal font recognition method of the present invention uses deep learning to train an image model of artwork seal fonts, achieving very high accuracy and speed in automatic artwork seal font recognition. The following description takes a large artwork seal font database as an example; the database contains 1000 seal font images and the corresponding 10 real authors.
Fig. 1 is a flow chart of the automatic seal font recognition method of the present invention, which includes a training process and a recognition process. As shown in Fig. 1, the training process includes steps S11-S13 and the recognition process includes steps S21-S22. In other words, the present invention specifically comprises the following steps:
Step S11: image normalization.
The 1000 labeled seal font images of different authors in the seal font database that will be used for training are normalized to the same pixel size (N*N pixels, e.g. 48*48 pixels), yielding the normalized image f shown as the input image in Fig. 1. This produces seal font images of different artworks by different authors.
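A minimal sketch of this normalization step is given below, using Pillow and NumPy. Only the resize to the predetermined N*N size (e.g. 48*48) comes from the text; the grayscale conversion, the [0, 1] intensity scaling, and the function name are illustrative assumptions.

```python
# Normalization sketch (illustrative, not the patent's exact preprocessing):
# resize every seal image to N*N pixels (e.g. 48*48) before feeding it to the network.
import numpy as np
from PIL import Image

def normalize_seal_image(path, size=48):
    """Load a seal image, convert it to grayscale, and resize it to size*size pixels."""
    img = Image.open(path).convert("L")                 # single-channel grayscale
    img = img.resize((size, size))                      # normalize to the predetermined size
    return np.asarray(img, dtype=np.float32) / 255.0    # pixel intensities scaled to [0, 1]
```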
Step S12: feed the normalized seal font images into the convolutional neural network to obtain a multi-dimensional image representation corresponding to the different authors, which expresses the predicted author, and obtain a prediction error from the prediction and the true label.
A seal font image (e.g. of size 48*48) is fed into a multilayer (e.g. 6-layer) convolutional neural network (as shown in Fig. 2). After several convolutional layers and fully connected layers, an image representation is obtained at the last fully connected layer. The dimension of this image representation can equal the number of seal font authors, and the response of each dimension represents the probability that the image belongs to the corresponding author. The image representation is compared with the true label to obtain the prediction error. Here, the true label is the representation of the true author of the labeled seal font image.
In the embodiment of the present invention, the convolutional layers perform feature extraction, and the whole multilayer convolutional neural network comprising the convolutional layers and the fully connected layers is equivalent to a multilayer perceptron.
In Fig. 2, the convolutional neural network contains, for example, 3 convolutional layers C1, C2, C3 and 3 fully connected layers FC1, FC2, and soft-max. Each convolutional layer may contain p convolution kernels of size m*m, where p denotes the number of kernels of that layer. The value of p can be chosen freely; for convenient hardware parallelization it is usually set to a multiple of 16, such as 32, 64, or 128, and the first layer may, for example, use p=64. The number of kernels may differ between convolutional layers. The kernel size m*m can likewise be chosen freely and adjusted to the task; for seal images, 5*5 or 3*3 are suitable kernel sizes. Each convolutional layer scans the input image with the m*m kernels at a fixed stride (usually 1) according to the following formula:
g(i, j) = f * h = Σ_{m,n} f(i+m, j+n) h(m, n)
where g(i, j) is the output feature map, f is the input image, and h(m, n) is the convolution kernel (0 ≤ m ≤ N). Centering on each pixel (i, j) of the image f (0 ≤ i ≤ N, 0 ≤ j ≤ N) and convolving with the kernel h yields p feature maps, each of size (N-m+1) * (N-m+1). A spatial pooling layer (usually a max-pooling down-sampling layer) is normally added after the feature maps to further reduce the feature dimension.
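The following NumPy sketch is a direct transcription of the convolution formula above, plus the 2x2 max-pooling mentioned in the text. It assumes square images and kernels, stride 1, and no padding; the helper names are not from the patent.

```python
import numpy as np

def conv2d_valid(f, h):
    """g(i, j) = sum_{m, n} f(i + m, j + n) * h(m, n), stride 1, no padding."""
    N = f.shape[0]                       # N*N input image
    m = h.shape[0]                       # m*m convolution kernel
    out = N - m + 1                      # feature map is (N - m + 1) * (N - m + 1)
    g = np.zeros((out, out), dtype=np.float32)
    for i in range(out):
        for j in range(out):
            g[i, j] = np.sum(f[i:i + m, j:j + m] * h)
    return g

def max_pool_2x2(g):
    """Max-pooling down-sampling: halve each spatial dimension of a feature map."""
    H, W = g.shape
    g = g[:H - H % 2, :W - W % 2]
    return g.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))
```

With p kernels, the same scan is repeated p times to obtain p feature maps.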
A similar convolution operation is carried out in each convolutional layer to obtain the corresponding feature maps, and the feature maps obtained are then taken as the input of the next convolutional layer.
For example, in the last convolutional layer, again according to the above formula, the output feature maps g(i, j) of the previous layer are convolved with a sampling window of size L*L (0 ≤ L ≤ N-m+1) to obtain k down-sampled feature maps w(i, j), where the size of w(i, j) is ((N-m+1)/2) * ((N-m+1)/2). Here k has the same meaning as p above, namely the number of convolution kernels of the layer. Because there are several convolutional layers and the number of kernels may differ per layer, the first convolutional layer is assumed to have p kernels and the last convolutional layer k kernels; the different symbols p and k merely distinguish the two stages.
By scanning the input image (e.g. an N*N image) with the m*m kernels at a fixed stride (usually 1), a convolutional layer produces feature maps; with p kernels, p feature maps are obtained simultaneously, i.e. an output of p*(N*N), a three-dimensional array. The output of one layer is used as the input of the next round of convolution. A fully connected layer can then transform the p*(N*N) convolutional output into a row vector through a single weight matrix. In the embodiment of the present invention, a fully connected layer outputs a 1*M row vector of dimension M, where M can be adjusted freely; for example, if the fully connected dimension M is 1024, the output of that layer is a row vector of dimension 1*1024. In the embodiment of the present invention, the M values of the 3 fully connected layers may be 1024, 1024, and 10, respectively.
After the 3 convolutional layers and 3 fully connected layers, the image f yields an output F of dimension D; in the present embodiment, D is 10. A soft-max classifier is usually added at the last fully connected layer to normalize each output dimension, so that each dimension's response corresponds to the probability that the image belongs to a particular author. Comparing the network's prediction with the true label gives the prediction error, which can be back-propagated to adjust the network.
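The following PyTorch sketch assembles a 3-convolutional-layer / 3-fully-connected-layer network of the kind described above for 48*48 single-channel inputs and 10 authors. The channel counts, kernel sizes, pooling, and fully connected sizes (1024, 1024, 10) follow the example values in the text, while the ReLU activations, the lack of padding, and the flattened size are assumptions; this is a sketch, not the patent's exact architecture.

```python
import torch
import torch.nn as nn

class SealFontNet(nn.Module):
    """Sketch of the 3-conv + 3-FC network of Fig. 2 (C1-C3, FC1, FC2, soft-max)."""
    def __init__(self, num_authors=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),    # C1: 48 -> 44 -> 22
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),   # C2: 22 -> 20 -> 10
            nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),  # C3: 10 -> 8 -> 4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 1024), nn.ReLU(),   # FC1 (M = 1024)
            nn.Linear(1024, 1024), nn.ReLU(),          # FC2 (M = 1024)
            nn.Linear(1024, num_authors),              # D = 10 outputs; soft-max applied in the loss
        )

    def forward(self, x):                              # x: (batch, 1, 48, 48)
        return self.classifier(self.features(x))

model = SealFontNet(num_authors=10)
criterion = nn.CrossEntropyLoss()                      # compares the prediction with the true author label
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```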
Step S13: train the neural network with the backpropagation algorithm and stochastic gradient descent so as to reduce the prediction error, obtaining the training model of the deep-learning-based automatic artwork seal font recognition method, i.e. the deep convolutional neural network matching model.
The backpropagation algorithm is a supervised learning algorithm for training a perceptron. It propagates weight updates iteratively, cycle after cycle, until the network's output response reaches the predetermined target range. The specific formula is as follows:
∂E/∂b = (∂E/∂u)(∂u/∂b) = δ
where E is the error propagated back, b is the bias of each neuron, i.e. of every neuron in every layer of the neural network, δ can be regarded as the sensitivity of the error with respect to the bias b of each neuron, and u denotes the neural network layer (a neural network generally consists of several layers, each made up of several neurons). For each layer, the derivative of the error with respect to the layer's weights (assembled as a matrix) is the outer product of the layer's input (which equals the output of the previous layer) and the layer's sensitivities (the δ of each neuron of the layer assembled as a vector). For the first layer, the input is the input image itself, i.e. the pixel values of the digital image, typically RGB three-channel real numbers in the range 0-255. The obtained partial derivative, multiplied by a negative learning rate, is the update of the weights of the neurons of that layer. The negative learning rate embodies the correction of the error: if the error increases, the gradient receives a negative correction; if the error decreases, a positive correction. This is the principle of error backpropagation.
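A NumPy sketch of this sensitivity computation for one hidden layer and one output layer is shown below. The layer sizes, sigmoid activations, and squared-error loss are illustrative assumptions; the sign convention follows the document's δ = (t - o)·o·(1 - o) and w_i = w_i + Δw_i.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass through one hidden layer and one output layer (illustrative shapes).
x = np.random.rand(2048)                          # input to the layer (output of the previous layer)
W1, b1 = np.random.randn(1024, 2048) * 0.01, np.zeros(1024)
W2, b2 = np.random.randn(10, 1024) * 0.01, np.zeros(10)
h = sigmoid(W1 @ x + b1)                          # hidden-layer responses
o = sigmoid(W2 @ h + b2)                          # network output
t = np.eye(10)[3]                                 # one-hot true label (author index 3, illustrative)

# Sensitivities: dE/db equals delta for each neuron, propagated from the output layer backwards.
delta2 = (t - o) * o * (1 - o)                    # output-layer delta
delta1 = (W2.T @ delta2) * h * (1 - h)            # hidden-layer delta

# Weight gradients are outer products of deltas and layer inputs, scaled by the learning rate.
lr = 0.1
W2 += lr * np.outer(delta2, h); b2 += lr * delta2
W1 += lr * np.outer(delta1, x); b1 += lr * delta1
```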
Stochastic gradient descent updates the weights iteratively, sample by sample. The specific formulas are:
w_i = w_i + Δw_i
δ = (t - o) · o · (1 - o)
Δw_i = δ · x_i
where w_i are the weights (multiple weights can be written together in matrix form), t is the expected output, o is the actual output of the perceptron, x_i is the output (i.e. the response value) of the i-th neuron of the previous layer of the neural network, and i is the index of the neuron within that layer; if the layer has k neurons, i ranges from 1 to k, denoting the 1st through the k-th neuron.
If the number of samples is very large (e.g. hundreds of thousands), the optimal solution may be reached by iterating over only tens of thousands or even thousands of them; a single pass is unlikely to be optimal. If, say, 10 iterations are needed, the training samples are traversed 10 times until the error no longer decreases, which yields the trained matching model of the deep-learning-based automatic artwork seal font recognition method.
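The per-sample update and the repeated traversal of the training set can be sketched as follows. The data arrays, learning rate, epoch count, and the single sigmoid unit are placeholders; the update itself follows the formulas above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder training data: feature vectors X and binary targets t for one perceptron unit.
rng = np.random.default_rng(0)
X = rng.random((1000, 64))
t = (X.mean(axis=1) > 0.5).astype(float)
w = np.zeros(64)
lr = 0.1

for epoch in range(10):                       # traverse the training samples several times
    for x_i, t_i in zip(X, t):
        o = sigmoid(w @ x_i)                  # actual output of the perceptron
        delta = (t_i - o) * o * (1 - o)       # delta = (t - o) * o * (1 - o)
        w = w + lr * delta * x_i              # w_i = w_i + delta * x_i (scaled by the learning rate)
    err = np.mean((sigmoid(X @ w) - t) ** 2)  # stop once the error no longer decreases
```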
With the matching model trained by the deep convolutional neural network, a seal image can be matched through the following recognition steps to identify the true author of the artwork's font.
Step S21: normalize the artwork seal font image to be tested to the same pixel size as the training images (e.g. 48*48 pixels).
Step S22: feed the image to be tested into the trained model of the deep-learning-based automatic artwork seal font recognition method, i.e. the deep convolutional neural network matching model. The soft-max D-dimensional output is obtained at the last fully connected layer, where D is the number of authors; in the present embodiment D is 10, i.e. a 10-dimensional image representation is predicted at the last fully connected layer.
In the image representation, the magnitude of each dimension's response corresponds to the probability that the image belongs to the author represented by that dimension, and the dimension with the largest response is taken as the corresponding real author. The author represented by the dimension with the largest response is selected as the prediction result, which completes the prediction of the author for one image.
For example, if in one test the responses of the image representation are 0.02 for Zhang Daqian, 0.12 for Qi Baishi, ..., and 0.81 for a third author, then the seal font is recognized as belonging to that third author.
The present invention identifies the true author from the output responses obtained for the test image: according to the result of the repeatedly trained model, the author with the largest output response is output as the true author.
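A sketch of this recognition step, using the SealFontNet sketch given earlier and a normalized 48*48 NumPy image, is shown below. The author list and function name are placeholders, not from the patent.

```python
import torch

AUTHORS = ["Zhang Daqian", "Qi Baishi"] + [f"author_{i}" for i in range(3, 11)]  # 10 names, training order

def recognize(model, image_48x48):
    """Feed one normalized 48*48 image through the trained model and pick the most probable author."""
    model.eval()
    with torch.no_grad():
        x = torch.from_numpy(image_48x48).float().view(1, 1, 48, 48)
        probs = torch.softmax(model(x), dim=1)[0]      # D-dimensional soft-max responses
    best = int(torch.argmax(probs))                    # dimension with the largest response
    return AUTHORS[best], float(probs[best])
```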
The normalization of the seal font images used for training in the above seal font database, and of the seal font images to be tested, can also be performed independently of the training process and the recognition process.
As described above, automatic recognition of artwork seal fonts is realized on the basis of deep learning; the automatic recognition method is not only fast and efficient but also highly accurate.
It should be noted here that the method of the present invention is applicable not only to automatic recognition of artwork seal fonts but also to automatic recognition of other seal fonts.
The above embodiment of the present invention has been described by taking a convolutional neural network comprising 3 convolutional layers and 3 fully connected layers as an example. This structure is merely illustrative and the present invention is not limited to it; neural network structures with more or fewer convolutional layers and/or fully connected layers may also be used to realize the automatic seal font recognition of the present invention.
Corresponding to the above method, the present invention also provides an automatic seal font recognition device, which includes a training apparatus 10 and a recognition apparatus 20.
The training apparatus includes a feeding unit 100 and a training unit 200. The feeding unit 100 feeds seal font images of a predetermined size (e.g. normalized) from the database into the convolutional neural network; after several convolutional layers and fully connected layers, a multi-dimensional image representation corresponding to the different authors is obtained at the last fully connected layer, where the response of each dimension represents the probability that the seal font image belongs to the corresponding author, and the multi-dimensional image representation is compared with the true label to obtain the prediction error. The training unit 200 trains the neural network with the backpropagation algorithm and stochastic gradient descent so as to reduce the prediction error, obtaining the deep-learning-based deep convolutional neural network matching model.
The recognition apparatus may include a recognition unit 300, which feeds a (e.g. normalized) seal font image to be tested, of the predetermined size, into the deep convolutional neural network matching model and obtains a multi-dimensional (D-dimensional) vector output at the last fully connected layer, where the magnitude of each dimension's response corresponds to the probability that the image belongs to the author represented by that dimension. The recognition unit selects the author represented by the dimension with the largest response as the seal font recognition result.
Preferably, the device may also include a normalization unit (not shown), which normalizes the seal font images to the same pixel size to obtain normalized seal font images. The normalization unit may be integrated into the feeding unit and the recognition unit respectively, or may be independent of them.
Each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, a hardware implementation may, as in another embodiment, use any one or a combination of the following techniques known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and so on.
The logic and/or steps represented in the flow charts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing the logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device).
Features described and/or shown for one embodiment may be used in the same or a similar way in one or more other embodiments, and/or may be combined with or replace features of other embodiments.
Other embodiments of the invention will be readily apparent to and understood by those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The description and embodiments are to be regarded as exemplary only, with the true scope and spirit of the invention being defined by the claims.

Claims (10)

1. An automatic seal font recognition method, characterized in that the method comprises a training process and a recognition process;
wherein the training process includes:
feeding labeled seal font images of a predetermined size from a database into a constructed convolutional neural network, and passing them through several convolutional layers and fully connected layers to obtain a multi-dimensional image representation corresponding to the different authors, where the response of each dimension of the multi-dimensional image representation represents the probability that the seal font image belongs to the corresponding author, and comparing the multi-dimensional image representation with the true label to obtain a prediction error;
training the neural network with the backpropagation algorithm and stochastic gradient descent so as to reduce the prediction error, thereby obtaining a deep-learning-based deep convolutional neural network matching model;
the recognition process includes:
feeding a seal font image to be tested, of the predetermined size, into the deep convolutional neural network matching model to obtain a multi-dimensional vector output, where the magnitude of each dimension's response corresponds to the probability that the image belongs to the author represented by that dimension;
selecting the author represented by the dimension with the largest response as the seal font recognition result.
2. The method according to claim 1, characterized in that the method further comprises:
normalizing the seal font images to the same pixel size to obtain seal font images of the predetermined size.
3. The method according to claim 1, characterized in that the dimension of the image representation equals the number of authors involved in the training process.
4. The method according to claim 1, characterized in that each convolutional layer of the convolutional neural network uses convolution kernels of size m*m to scan the input image with a predetermined stride according to the following formula, obtaining feature maps:
g(i, j) = f * h = Σ_{m,n} f(i+m, j+n) h(m, n)
where g(i, j) is the output feature map, f is the input image, and h(m, n) is the convolution kernel.
5. The method according to claim 1, characterized in that the formula used by the backpropagation algorithm is:
∂E/∂b = (∂E/∂u)(∂u/∂b) = δ
where E is the error propagated back, b is the bias of each neuron, δ is the sensitivity with respect to the bias b of each neuron, and u denotes the neural network layer.
6. The method according to claim 5, characterized in that the formulas used in the gradient descent algorithm are:
w_i = w_i + Δw_i
δ = (t - o) · o · (1 - o)
Δw_i = δ · x_i
where w_i are the weights (multiple weights can be written together in matrix form), t is the expected output, o is the actual output of the perceptron, x_i is the output of the i-th neuron of the previous layer of the neural network, and i is the index of the neuron within that layer.
7. An automatic seal font recognition device, characterized in that the device comprises a training apparatus and a recognition apparatus;
wherein the training apparatus includes:
a feeding unit, which feeds labeled seal font images of a predetermined size from a database into the convolutional neural network, passes them through several convolutional layers and fully connected layers to obtain a multi-dimensional image representation corresponding to the different authors, where the response of each dimension of the multi-dimensional image representation represents the probability that the seal font image belongs to the corresponding author, and compares the multi-dimensional image representation with the true label to obtain a prediction error;
a training unit, which trains the neural network with the backpropagation algorithm and stochastic gradient descent so as to reduce the prediction error, obtaining a deep-learning-based deep convolutional neural network matching model;
the recognition apparatus includes:
a recognition unit, which feeds a seal font image to be tested, of the predetermined size, into the deep convolutional neural network matching model to obtain a multi-dimensional vector output, where the magnitude of each dimension's response corresponds to the probability that the image belongs to the author represented by that dimension, the recognition unit selecting the author represented by the dimension with the largest response as the seal font recognition result.
8. The device according to claim 7, characterized in that the device further comprises:
a normalization unit, which normalizes the seal font images to the same pixel size to obtain normalized seal font images.
9. The device according to claim 7, characterized in that
in each convolutional layer of the convolutional neural network, the feeding unit uses convolution kernels of size m*m to scan the input image with a predetermined stride according to the following formula, obtaining feature maps:
g(i, j) = f * h = Σ_{m,n} f(i+m, j+n) h(m, n)
where g(i, j) is the output feature map, f is the input image, and h(m, n) is the convolution kernel.
10. The device according to claim 7, characterized in that the formula of the backpropagation algorithm used in the training unit is:
∂E/∂b = (∂E/∂u)(∂u/∂b) = δ
where E is the error propagated back, b is the bias of each neuron, δ is the sensitivity with respect to the bias b of each neuron, and u denotes the neural network layer;
and the formulas of the stochastic gradient descent algorithm used in the training unit are:
w_i = w_i + Δw_i
δ = (t - o) · o · (1 - o)
Δw_i = δ · x_i
where w_i are the weights (multiple weights can be written together in matrix form), t is the expected output, o is the actual output of the perceptron, x_i is the output of the i-th neuron of the previous layer of the neural network, and i is the index of the neuron within that layer.
CN201710536850.7A 2017-07-04 2017-07-04 Automatic seal font recognition method and recognition device Pending CN107292280A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710536850.7A CN107292280A (en) Automatic seal font recognition method and recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710536850.7A CN107292280A (en) Automatic seal font recognition method and recognition device

Publications (1)

Publication Number Publication Date
CN107292280A true CN107292280A (en) 2017-10-24

Family

ID=60098700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710536850.7A Pending CN107292280A (en) Automatic seal font recognition method and recognition device

Country Status (1)

Country Link
CN (1) CN107292280A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308475A (en) * 2018-07-26 2019-02-05 北京百悟科技有限公司 A kind of character recognition method and device
CN111027345A (en) * 2018-10-09 2020-04-17 北京金山办公软件股份有限公司 Font identification method and apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101533517A (en) * 2009-04-15 2009-09-16 北京联合大学 Structure feature based on Chinese painting and calligraphy seal image automatic extracting method
CN103164699A (en) * 2013-04-09 2013-06-19 北京盛世融宝国际艺术品投资有限公司 Painting and calligraphy work fidelity identification system
CN104123657A (en) * 2014-07-23 2014-10-29 张国立 Filing authentication processing method for calligraphy and painting works
CN104732243A (en) * 2015-04-09 2015-06-24 西安电子科技大学 SAR target identification method based on CNN
CN105893968A (en) * 2016-03-31 2016-08-24 华南理工大学 Text-independent end-to-end handwriting recognition method based on deep learning
CN106682671A (en) * 2016-12-29 2017-05-17 成都数联铭品科技有限公司 Image character recognition system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Qiang: "Research on Character Recognition Methods Based on CNN", China Master's Theses Full-text Database, Information Science and Technology *
Yang Zhe: "Application of Convolutional Neural Networks in Seal Number Recognition", Modern Computer *

Similar Documents

Publication Publication Date Title
CN107909101A (en) Semi-supervised transfer learning character identifying method and system based on convolutional neural networks
US5048100A (en) Self organizing neural network method and system for general classification of patterns
CN108764173A (en) The hyperspectral image classification method of confrontation network is generated based on multiclass
CN106682569A (en) Fast traffic signboard recognition method based on convolution neural network
US20180144246A1 (en) Neural Network Classifier
CN104573729B (en) A kind of image classification method based on core principle component analysis network
CN108171318B (en) Convolution neural network integration method based on simulated annealing-Gaussian function
CN110197205A (en) A kind of image-recognizing method of multiple features source residual error network
CN108596274A (en) Image classification method based on convolutional neural networks
CN108446766A (en) A kind of method of quick trained storehouse own coding deep neural network
CN114821217B (en) Image recognition method and device based on quantum classical hybrid neural network
CN111814804B (en) Human body three-dimensional size information prediction method and device based on GA-BP-MC neural network
CN108805061A (en) Hyperspectral image classification method based on local auto-adaptive discriminant analysis
Lin et al. Determination of the varieties of rice kernels based on machine vision and deep learning technology
CN112364974B (en) YOLOv3 algorithm based on activation function improvement
CN114332545A (en) Image data classification method and device based on low-bit pulse neural network
CN113011243A (en) Facial expression analysis method based on capsule network
CN109754357B (en) Image processing method, processing device and processing equipment
CN107292280A (en) A kind of seal automatic font identification method and identifying device
Amorim et al. Analysing rotation-invariance of a log-polar transformation in convolutional neural networks
CN109101984B (en) Image identification method and device based on convolutional neural network
Song et al. Using dual-channel CNN to classify hyperspectral image based on spatial-spectral information
Mardiyah et al. Developing deep learning architecture for image classification using convolutional neural network (CNN) algorithm in forest and field images
CN106997473A (en) A kind of image-recognizing method based on neutral net
CN114387524B (en) Image identification method and system for small sample learning based on multilevel second-order representation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171024