CN110807497A - Handwritten data classification method and system based on deep dynamic network - Google Patents


Info

Publication number
CN110807497A
Authority
CN
China
Prior art keywords
dynamic
layer
training
handwritten data
sample
Prior art date
Legal status
Pending
Application number
CN201910960138.9A
Other languages
Chinese (zh)
Inventor
王强
王吉华
张化祥
孙建德
牛奔
Current Assignee
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date
Filing date
Publication date
Application filed by Shandong Normal University
Priority to CN201910960138.9A
Publication of CN110807497A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/24 Character recognition characterised by the processing or recognition method
    • G06V30/242 Division of the character sequences into groups prior to recognition; Selection of dictionaries
    • G06V30/244 Division of the character sequences into groups prior to recognition; Selection of dictionaries using graphical properties, e.g. alphabet type or font
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition


Abstract

The present disclosure provides a handwritten data classification method and system based on a deep dynamic network. In a training stage, a deep dynamic network is constructed; an original training sample set containing handwritten data samples and corresponding handwriting category labels is acquired; and the deep dynamic network is trained with the original training sample set to obtain a trained deep dynamic network. In an application stage, a handwritten data sample to be classified is acquired and input into the trained deep dynamic network, which outputs the recognition result for the sample.

Description

Handwritten data classification method and system based on deep dynamic network
Technical Field
The present disclosure relates to the field of handwritten data classification technologies, and in particular, to a method and system for classifying handwritten data based on a deep dynamic network.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
The classification problem is a fundamental problem in artificial intelligence, and classification performance has an important influence on other problems in the field. For images, successful classification models include AlexNet, VGG, GoogLeNet and ResNet, whose accuracy on the ImageNet dataset has reached a high level. However, these models generally have very large numbers of parameters (about 60M for AlexNet and about 144M for VGG, for example), and almost all of them require millions of trained parameters, so they are difficult to train and lack interpretability and robustness.
In the course of implementing the present disclosure, the inventors found that the following technical problems exist in the prior art:
in the existing process of classifying handwritten data, deep learning models are adopted, but these models have numerous parameters, overly long training times and prolonged memory occupation, and their classification accuracy is low.
Disclosure of Invention
In order to solve the deficiencies of the prior art, the present disclosure provides a method and system for classifying handwritten data based on a deep dynamic network;
in a first aspect, the present disclosure provides a method for classifying handwritten data based on a deep dynamic network;
the handwritten data classification method based on the deep dynamic network comprises the following steps:
a training stage: constructing a deep dynamic network; acquiring an original training sample set containing a handwriting data sample and a corresponding handwriting category label; training the deep dynamic network by using an original training sample set to obtain a trained deep dynamic network;
an application stage: and acquiring a handwritten data sample to be classified, inputting the handwritten data sample to be classified into the trained deep dynamic network, and outputting a recognition result of the handwritten data sample to be classified.
In a second aspect, the present disclosure also provides a system for classifying handwritten data based on a deep dynamic network;
the handwritten data classification system based on the deep dynamic network comprises:
a training module:
a network construction unit configured to: constructing a deep dynamic network;
a first acquisition unit configured to: acquiring an original training sample set containing a handwriting data sample and a corresponding handwriting category label;
a training unit configured to: training the deep dynamic network by using an original training sample set to obtain a trained deep dynamic network;
an application module:
a second acquisition unit configured to: acquiring a handwritten data sample to be classified;
an identification unit configured to: and inputting the handwritten data sample to be classified into the trained deep dynamic network, and outputting the recognition result of the handwritten data sample to be classified.
In a third aspect, the present disclosure also provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of the first aspect.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the steps of the method of the first aspect.
Compared with the prior art, the beneficial effect of this disclosure is:
the invention aims to solve the problems that the deep learning model has too many parameters and the model lacks interpretability and robustness, so that the parameters of the training model are greatly reduced. The model provides a model framework for classifying data, improves training speed, improves classification accuracy and increases model stability. In addition, according to the characteristics of the data set to be trained, the depth of the model can realize the dynamic adjustment of the depth of the model.
Compared with the existing deep learning feature extraction method, the method models the convolution layer in deep learning into a dynamic module, and reduces the dimensionality of an output module to realize the reduction of the number of model parameters. In addition, the model can realize the self-adaptation of the depth learning model layer number.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a flow chart of the method of the first embodiment;
FIG. 2 is a schematic diagram of a residual dynamic module of the first embodiment;
FIG. 3 is a diagram of the residual dynamic neural network of the first embodiment;
FIG. 4 shows the handwritten dataset of the first embodiment;
FIG. 5 is a schematic diagram of an implementation of the residual dynamic network of the first embodiment.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Improving the interpretability of deep learning models has become a consensus among researchers, and reducing the number of model parameters is an important approach to it. To see where parameters arise, consider the LeNet model as an example.
The LeNet model has 7 layers: convolutional, pooling, convolutional, pooling, convolutional, fully-connected and fully-connected layers. Since pooling layers generate no parameters, the model parameters come from the convolutional and fully-connected layers, as they do in other models. Therefore, to reduce the number of parameters, one must consider how to simplify the parameters of the convolutional and fully-connected layers.
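Parameter counting makes this point concrete. The following is a rough Python sketch that tallies the parameters of a classic LeNet-5 layout; the specific filter counts and sizes are assumed from the standard LeNet-5 description, since the text above only states that parameters come from the convolutional and fully-connected layers.

```python
# Rough parameter tally for an assumed classic LeNet-5 layout.
# Pooling layers are treated as parameter-free, as stated above.
def conv_params(n_filters, k, in_ch):
    return n_filters * (k * k * in_ch + 1)   # weights + one bias per filter

def fc_params(n_in, n_out):
    return n_out * (n_in + 1)                # weights + biases

total = (conv_params(6, 5, 1)      # C1: 6 filters of 5x5x1
         + conv_params(16, 5, 6)   # C3: full connectivity assumed
         + conv_params(120, 5, 16) # C5
         + fc_params(120, 84)      # F6
         + fc_params(84, 10))      # output layer
print(total)  # 61706 -- all from convolutional and fully-connected layers
```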
The idea of the invention is to analyze existing deep learning models, reconstruct the deep model architecture, reduce the number of model parameters, improve interpretability, and provide a matching model architecture and training method. In addition, the proposed model adjusts its depth adaptively according to the difficulty of the dataset and task.
To solve the problem that too many parameters make deep learning models hard to interpret, the invention reconstructs the traditional deep model architecture, analyzes its advantages and disadvantages, and reduces the number of parameters as much as possible while keeping the model's advantages, thereby improving interpretability and robustness.
Embodiment 1 provides a handwritten data classification method based on a deep dynamic network;
as shown in fig. 1, the method for classifying handwritten data based on a deep dynamic network includes:
a training stage: constructing a deep dynamic network; acquiring an original training sample set containing a handwriting data sample and a corresponding handwriting category label; training the deep dynamic network by using an original training sample set to obtain a trained deep dynamic network;
an application stage: and acquiring a handwritten data sample to be classified, inputting the handwritten data sample to be classified into the trained deep dynamic network, and outputting a recognition result of the handwritten data sample to be classified.
As shown in fig. 3, as one or more embodiments, in the training phase, constructing the deep dynamic network comprises the following components, connected in sequence:
an input layer for inputting handwritten data samples;
a first residual dynamic module for extracting a first feature map from the handwritten data samples;
a first pooling layer for performing a first pooling on the first feature map;
a second residual dynamic module for extracting a second feature map from the feature map after the first pooling;
a second pooling layer for performing a second pooling on the second feature map; and so on, up to
a pth residual dynamic module for extracting a pth feature map from the feature map after the (p-1)th pooling, where p is a positive integer; when the dimension output by the pth pooling layer equals the set number of classification categories, or an integral multiple of it, the current value of p is the final value;
a pth pooling layer for performing a pth pooling on the pth feature map;
a fully connected layer connected to the pth pooling layer; and
a Softmax classifier for outputting the final classification result. A sketch of the corresponding depth-scheduling rule is given after this list.
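The depth-adaptation rule can be sketched as a simple loop. This is a minimal illustration, assuming that each residual dynamic module preserves the spatial size, that 2 × 2 pooling (with zero padding of odd sides) halves it, and that the loop stops once the flattened dimension is at most k times the class count; the cap k is an assumed reading of "integral multiple".

```python
def plan_depth(h, w, n_classes, k=7):
    """Count (residual dynamic module + 2x2 pooling) stages until the
    flattened output is at most k * n_classes (the cap k is an assumption)."""
    p = 0
    while h * w > k * n_classes:
        h += h % 2; w += w % 2   # zero-pad odd sides before pooling
        h //= 2; w //= 2         # 2x2 pooling halves each side
        p += 1
    return p, (h, w)

print(plan_depth(30, 30, 10))    # (2, (8, 8)) -- two stages, as in the MNIST example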
As one or more embodiments, the first residual dynamic module, the second residual dynamic module, and the pth residual dynamic module are identical in structure.
As one or more embodiments, as shown in fig. 2, the first residual dynamic module includes:
the device comprises a standardization processing unit, a first dynamic model unit, a first Relu unit, a second dynamic model unit and a second Relu unit which are connected in sequence; the output end of the second dynamic model unit is also connected with the input end of the standardization processing unit. The second Relu unit is for connection with the pooling layer.
It should be understood that the number of dynamic model elements is two or three.
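The sketch below gives a minimal NumPy reading of Fig. 2. It assumes a zero initial state for each dynamic unit and assumes the skip connection adds the standardized input to the second dynamic unit's output before the final ReLU; the patent only states that the connection exists, so the summation point is an assumption.

```python
import numpy as np

def dynamic_unit(A, y):
    """Iterate x(k+1) = A x(k) + y(k+1) over the columns of a 3xn input."""
    x = np.zeros(3)                  # assumed zero initial state
    out = np.empty_like(y)
    for k in range(y.shape[1]):
        x = A @ x + y[:, k]
        out[:, k] = x
    return out

def residual_dynamic_module(u, A1, A2):
    """One module: standardization, two dynamic units, two ReLUs, skip."""
    y = (u - u.mean()) / u.std()             # standardization unit
    h = np.maximum(dynamic_unit(A1, y), 0)   # first dynamic unit + ReLU
    h = dynamic_unit(A2, h) + y              # second dynamic unit + skip (assumed)
    return np.maximum(h, 0)                  # second ReLU
```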
In the training stage, the parameter $A^{(l)}$ of each residual dynamic module is obtained with a Monte Carlo algorithm to increase the speed of training.
The standardization processing unit performs a normalized transformation on the sequence $u_1, u_2, \ldots, u_n$:

$$y_i = \frac{u_i - \bar{u}}{s}, \qquad \bar{u} = \frac{1}{n}\sum_{i=1}^{n} u_i$$

where $y_i$ is the $i$-th pixel of a sample after the normalized transformation; $u_i$ is the $i$-th pixel of the sample; $\bar{u}$ is the pixel average of the sample; $s$ is the standard deviation of the sample; and $n$ is the total number of pixels of the sample.
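As a small illustration, the standardization and the subsequent zero padding can be written as follows; the placement of the padding (appending at the bottom/right) is an assumption, since the text only states that zero padding follows normalization.

```python
import numpy as np

def standardize(u):
    """Per-sample normalized transform: y_i = (u_i - mean) / std."""
    return (u - u.mean()) / u.std()

def pad_to_multiple(img, m=3):
    """Zero-pad so each side is a multiple of m, e.g. 28x28 -> 30x30."""
    ph, pw = (-img.shape[0]) % m, (-img.shape[1]) % m
    return np.pad(img, ((0, ph), (0, pw)))   # pad placement is assumed
```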
The dynamic model unit computes

$$x^{(l)}(k+1) = A^{(l)} x^{(l)}(k) + y^{(l)}(k+1) \qquad (1)$$

where the matrix $A^{(l)}$, of size $3 \times 3$, holds the parameters of the $l$-th layer dynamic model unit; $y^{(l)}$ is the input of the $l$-th layer dynamic model unit and $x^{(l)}$ its state; $A^{(l)}$ is randomly generated so that each characteristic root (eigenvalue) $\lambda(A^{(l)})$ lies within the unit circle; $x^{(l)}(k+1)$ is the state at position $k+1$ of layer $l$, and $y^{(l)}(k+1)$ is the normalized input at position $k+1$ of layer $l$; the input data is reshaped into size $3 \times n$, and $k$ indexes the $k$-th column of the reshaped matrix.
The ReLU unit computes

$$x_i^{(l)}(k) \leftarrow \max\!\left(0,\; x_i^{(l)}(k)\right)$$

where $x_i^{(l)}(k)$ is the $i$-th component of $x^{(l)}(k)$.
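A tiny worked check of equation (1) followed by the ReLU, with an assumed stable 3 × 3 matrix (by Gershgorin's theorem its eigenvalues lie inside the unit circle):

```python
import numpy as np

A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.2],
              [0.1, 0.0, 0.3]])       # assumed stable example matrix
y = np.arange(12.0).reshape(3, 4)     # toy input reshaped to 3 x n
x = np.zeros(3)                       # assumed zero initial state
for k in range(y.shape[1]):
    x = A @ x + y[:, k]               # eq. (1): x(k+1) = A x(k) + y(k+1)
print(np.maximum(x, 0.0))             # ReLU keeps the nonnegative components
```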
If the input size is not divisible by 3, zero padding is performed after normalization.
The pooling layers have size 2 × 2, and the pooling mode is max pooling or average pooling. If the size before pooling is not divisible by 2, a zero padding operation is performed first and pooling is applied afterwards; otherwise a fractional size would result (for example, 5/2 = 2.5, and a matrix cannot have size 2.5 × 2.5). A sketch of this pad-then-pool step follows.
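A minimal sketch of the pad-then-pool step (max pooling shown; average pooling would replace `.max` with `.mean`):

```python
import numpy as np

def maxpool2x2(x):
    """2x2 max pooling; zero-pads odd sides first, so 15x15 -> 16x16 -> 8x8."""
    ph, pw = x.shape[0] % 2, x.shape[1] % 2
    x = np.pad(x, ((0, ph), (0, pw)))
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x.reshape(h, 2, w, 2).max(axis=(1, 3))
```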
As shown in fig. 4, the MNIST dataset is taken as an example to describe the model building and training process:
MNIST is a handwritten digit dataset; each image is 28 × 28 pixels, and the training set contains about 60000 images.
the method comprises the following steps: the image to be classified is first normalized, since 28 cannot be divided exactly by 3, and zero padding is then performed, so that the training data is 30 × 30 × 1, and then an output of size 15 × 15 × 1 is obtained by a residual module (a module that includes the effects of two dynamic models and a nonlinear activation layer) and a pooling module. Step two: the output data size after pooling is 15 × 15 × 1, then after a residual block again, the size is still 15 × 15 × 1, since 15 cannot be divided by 2, zero padding is performed on the lower right of the output, then an output with a size of 8 × 8 × 1 is obtained, and the categories of the categories are 10 categories, followed by full connectivity and softmax operations, as shown schematically in fig. 5.
Step three: dynamic model parameters are generated for the constructed depth model using a Monte Carlo method, and then the parameters of the fully connected layer are trained; a sampling sketch follows.
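One simple Monte Carlo scheme consistent with the stability condition is rejection sampling: draw random 3 × 3 matrices and keep one only if every eigenvalue lies inside the unit circle. The uniform proposal below is an assumption; the patent does not fix the sampling distribution.

```python
import numpy as np

def sample_stable_A(rng, size=3, scale=1.0):
    """Draw a dynamic matrix with spectral radius < 1 by rejection."""
    while True:
        A = rng.uniform(-scale, scale, (size, size))
        if np.all(np.abs(np.linalg.eigvals(A)) < 1.0):
            return A

rng = np.random.default_rng(0)
A1, A2 = sample_stable_A(rng), sample_stable_A(rng)   # one pair per module
```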
Whether the model comprises two or three residual dynamic modules, its parameter count is far smaller than that of current mainstream models. Deep learning models have numerous parameters, which complicates their training, stability analysis and interpretability. To overcome this defect, increase the interpretability and robustness of the model, and reduce the difficulty of training, the present disclosure provides a classification-model construction and training method based on a deep dynamic network: first, a residual dynamic module is built from a standardization operation, dynamic models and nonlinear units, followed by pooling; this is repeated until the output dimension is 1 to k times the number of classification categories; then a fully connected network is established between the model output features and the classification categories, and classification is performed with a softmax classifier; finally, the dynamic model parameters are generated with a Monte Carlo method, and then the parameters of the fully connected layer are trained, e.g. as sketched below.
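Since only the fully connected layer is trained, the final step reduces to softmax regression on the frozen features. A minimal sketch, assuming plain gradient descent on cross-entropy (the patent does not specify the optimizer):

```python
import numpy as np

def train_fc(features, labels, n_classes, lr=0.1, epochs=50):
    """Train only the FC + softmax head on frozen dynamic-stage features."""
    n, d = features.shape
    W, b = np.zeros((n_classes, d)), np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]                  # one-hot targets
    for _ in range(epochs):
        z = features @ W.T + b
        p = np.exp(z - z.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
        g = (p - Y) / n                            # cross-entropy gradient
        W -= lr * (g.T @ features)
        b -= lr * g.sum(axis=0)
    return W, b
```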
Optionally, the 8 × 8 × 1 output may be zero padded to 10 × 10 × 1, passed through a further residual module, and pooled to obtain a 5 × 5 × 1 output.
The second embodiment also provides a handwritten data classification system based on the deep dynamic network;
in a second aspect, the present disclosure also provides a system for classifying handwritten data based on a deep dynamic network;
the handwritten data classification system based on the deep dynamic network comprises:
a training module:
a network construction unit configured to: constructing a deep dynamic network;
a first acquisition unit configured to: acquiring an original training sample set containing a handwriting data sample and a corresponding handwriting category label;
a training unit configured to: training the deep dynamic network by using an original training sample set to obtain a trained deep dynamic network;
an application module:
a second acquisition unit configured to: acquiring a handwritten data sample to be classified;
an identification unit configured to: and inputting the handwritten data sample to be classified into the trained deep dynamic network, and outputting the recognition result of the handwritten data sample to be classified.
In a third embodiment, the present embodiment further provides an electronic device, which includes a memory, a processor, and computer instructions stored in the memory and executable on the processor, where the computer instructions, when executed by the processor, implement the steps of the method in the first embodiment.
In a fourth embodiment, the present embodiment further provides a computer-readable storage medium for storing computer instructions, and the computer instructions, when executed by a processor, perform the steps of the method in the first embodiment.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. The handwritten data classification method based on the deep dynamic network is characterized by comprising the following steps:
a training stage: constructing a deep dynamic network; acquiring an original training sample set containing a handwriting data sample and a corresponding handwriting category label; training the deep dynamic network by using an original training sample set to obtain a trained deep dynamic network;
an application stage: and acquiring a handwritten data sample to be classified, inputting the handwritten data sample to be classified into the trained deep dynamic network, and outputting a recognition result of the handwritten data sample to be classified.
2. The method of claim 1, wherein, in the training phase, constructing the deep dynamic network comprises the following components, connected in sequence:
an input layer for inputting handwritten data samples;
a first residual dynamic module for extracting a first feature map from the handwritten data samples;
a first pooling layer for performing a first pooling on the first feature map;
a second residual dynamic module for extracting a second feature map from the feature map after the first pooling;
a second pooling layer for performing a second pooling on the second feature map; and so on, up to
a pth residual dynamic module for extracting a pth feature map from the feature map after the (p-1)th pooling, where p is a positive integer; when the dimension output by the pth pooling layer equals the set number of classification categories, or an integral multiple of it, the current value of p is the final value;
a pth pooling layer for performing a pth pooling on the pth feature map;
a fully connected layer connected to the pth pooling layer; and
a Softmax classifier for outputting the final classification result.
3. The method of claim 1, wherein the first residual dynamic block, the second residual dynamic block, and the pth residual dynamic block are identical in structure.
4. The method of claim 3, wherein the first residual dynamic module comprises:
the device comprises a standardization processing unit, a first dynamic model unit, a first Relu unit, a second dynamic model unit and a second Relu unit which are connected in sequence; the output end of the second dynamic model unit is also connected with the input end of the standardization processing unit.
5. The method of claim 4, wherein, in the training phase, the parameter $A^{(l)}$ of each residual dynamic module is trained using a Monte Carlo algorithm to increase the speed of training.
6. The method of claim 4, wherein the standardization processing unit performs a normalized transformation on the sequence $u_1, u_2, \ldots, u_n$:

$$y_i = \frac{u_i - \bar{u}}{s}, \qquad \bar{u} = \frac{1}{n}\sum_{i=1}^{n} u_i$$

where $y_i$ is the $i$-th pixel of a sample after the normalized transformation; $u_i$ is the $i$-th pixel of the sample; $\bar{u}$ is the pixel average of the sample; $s$ is the standard deviation of the sample; and $n$ is the total number of pixels of the sample.
7. The method of claim 4, wherein the dynamic model unit computes

$$x^{(l)}(k+1) = A^{(l)} x^{(l)}(k) + y^{(l)}(k+1) \qquad (1)$$

where the matrix $A^{(l)}$, of size $3 \times 3$, holds the parameters of the $l$-th layer dynamic model unit; $y^{(l)}$ is the input of the $l$-th layer dynamic model unit and $x^{(l)}$ its state; $A^{(l)}$ is randomly generated so that each characteristic root $\lambda(A^{(l)})$ lies within the unit circle; $x^{(l)}(k+1)$ is the state at position $k+1$ of layer $l$, and $y^{(l)}(k+1)$ is the normalized input at position $k+1$ of layer $l$; the input data is reshaped into size $3 \times n$, and $k$ indexes the $k$-th column of the reshaped matrix.
8. A handwritten data classification system based on a deep dynamic network, comprising:
a training module:
a network construction unit configured to: constructing a deep dynamic network;
a first acquisition unit configured to: acquiring an original training sample set containing a handwriting data sample and a corresponding handwriting category label;
a training unit configured to: training the deep dynamic network by using an original training sample set to obtain a trained deep dynamic network;
an application module:
a second acquisition unit configured to: acquiring a handwritten data sample to be classified;
an identification unit configured to: and inputting the handwritten data sample to be classified into the trained deep dynamic network, and outputting the recognition result of the handwritten data sample to be classified.
9. An electronic device comprising a memory and a processor and computer instructions stored in the memory and executable on the processor, the computer instructions, when executed by the processor, performing the steps of the method of any of claims 1-7.
10. A computer readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the method of any one of claims 1 to 7.
CN201910960138.9A 2019-10-10 2019-10-10 Handwritten data classification method and system based on deep dynamic network Pending CN110807497A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910960138.9A CN110807497A (en) 2019-10-10 2019-10-10 Handwritten data classification method and system based on deep dynamic network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910960138.9A CN110807497A (en) 2019-10-10 2019-10-10 Handwritten data classification method and system based on deep dynamic network

Publications (1)

Publication Number Publication Date
CN110807497A true CN110807497A (en) 2020-02-18

Family

ID=69488122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910960138.9A Pending CN110807497A (en) 2019-10-10 2019-10-10 Handwritten data classification method and system based on deep dynamic network

Country Status (1)

Country Link
CN (1) CN110807497A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598596A (en) * 2020-12-25 2021-04-02 北京大学 Image rain removing method based on dynamic network routing and electronic device
CN112906829A (en) * 2021-04-13 2021-06-04 成都四方伟业软件股份有限公司 Digital recognition model construction method and device based on Mnist data set
CN113281997A (en) * 2021-04-14 2021-08-20 山东师范大学 Control method and system for cascade chemical reactor

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107153810A (en) * 2016-03-04 2017-09-12 中国矿业大学 A kind of Handwritten Numeral Recognition Method and system based on deep learning
CN107169566A (en) * 2017-06-09 2017-09-15 山东师范大学 Dynamic neural network model training method and device
CN107169504A (en) * 2017-03-30 2017-09-15 湖北工业大学 A kind of hand-written character recognition method based on extension Non-linear Kernel residual error network
CN109919203A (en) * 2019-02-19 2019-06-21 山东师范大学 A kind of data classification method and device based on Discrete Dynamic mechanism
CN110059677A (en) * 2019-04-15 2019-07-26 北京易达图灵科技有限公司 Digital table recognition methods and equipment based on deep learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107153810A (en) * 2016-03-04 2017-09-12 中国矿业大学 A kind of Handwritten Numeral Recognition Method and system based on deep learning
CN107169504A (en) * 2017-03-30 2017-09-15 湖北工业大学 A kind of hand-written character recognition method based on extension Non-linear Kernel residual error network
CN107169566A (en) * 2017-06-09 2017-09-15 山东师范大学 Dynamic neural network model training method and device
CN109919203A (en) * 2019-02-19 2019-06-21 山东师范大学 A kind of data classification method and device based on Discrete Dynamic mechanism
CN110059677A (en) * 2019-04-15 2019-07-26 北京易达图灵科技有限公司 Digital table recognition methods and equipment based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TIE-QIANG WANG,ET AL.: "Radical-Based Chinese Character Recognition via Multi-Labeled Learning of Deep Residual Networks", 《2017 14TH IAPR INTERNATIONAL CONFERENCE ON DOCUMENT ANALYSIS AND RECOGNITION (ICDAR)》 *
ZHAO PENGCHENG ET AL.: "Handwritten digit recognition based on a deep residual network with high-magnification features" (基于高倍特征深度残差网络的手写数字识别), Electronic Measurement Technology (《电子测量技术》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598596A (en) * 2020-12-25 2021-04-02 北京大学 Image rain removing method based on dynamic network routing and electronic device
CN112906829A (en) * 2021-04-13 2021-06-04 成都四方伟业软件股份有限公司 Digital recognition model construction method and device based on Mnist data set
CN113281997A (en) * 2021-04-14 2021-08-20 山东师范大学 Control method and system for cascade chemical reactor
CN113281997B (en) * 2021-04-14 2022-08-09 山东师范大学 Control method and system for cascade chemical reactor


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200218)