CN110533043B - SVD-based acceleration method of recurrent neural network for handwritten Chinese character recognition - Google Patents

SVD-based acceleration method of recurrent neural network for handwritten Chinese character recognition

Info

Publication number
CN110533043B
CN110533043B
Authority
CN
China
Prior art keywords
network
svd
parameter matrix
recurrent neural
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810502952.1A
Other languages
Chinese (zh)
Other versions
CN110533043A (en)
Inventor
梁凯焕
杨亚锋
肖学锋
金连文
孙俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Fujitsu Research Development Centre Co Ltd
Original Assignee
South China University of Technology SCUT
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT, Fujitsu Ltd filed Critical South China University of Technology SCUT
Priority to CN201810502952.1A priority Critical patent/CN110533043B/en
Publication of CN110533043A publication Critical patent/CN110533043A/en
Application granted granted Critical
Publication of CN110533043B publication Critical patent/CN110533043B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Character Discrimination (AREA)

Abstract

The invention relates to an SVD-based acceleration method of a recurrent neural network for handwritten Chinese character recognition, which comprises the following steps. S1: designing and training a recurrent neural network for online handwritten Chinese characters; S2: performing SVD on the parameter matrices and computing the decomposed parameter matrices according to the required acceleration factor; S3: initializing the network with the parameter matrices obtained by the decomposition; S4: retraining the whole network on the online handwriting recognition task to fine-tune it; S5: optimizing the forward implementation and dynamically setting the time node length of the network in the forward process. The invention uses a recurrent neural network to recognize online handwritten Chinese characters and applies SVD to the trained network, which significantly reduces the computational complexity of the network; in addition, the time node length of the network is set dynamically according to the input data in the forward process, which shortens the forward running time while preserving the recognition accuracy of the network.

Description

SVD-based acceleration method of recurrent neural network for handwritten Chinese character recognition
Technical Field
The invention relates to the technical field of pattern recognition and artificial intelligence, and in particular to an SVD-based acceleration method of a recurrent neural network for handwritten Chinese character recognition.
Background
Because of the large number of Chinese character categories, the inconsistent styles of different writers, and the existence of similar characters, handwritten Chinese character recognition has long been an important pattern recognition problem that academia has paid close attention to and worked to solve. Handwritten Chinese characters can be classified by input data into offline and online handwritten Chinese characters. For offline handwritten Chinese characters, the input is a static image that records the character shape; this setting is mainly applied to the recognition of bills, scene text, and ancient documents. For online handwritten Chinese characters, the temporal information of the strokes is also recorded; this setting is mainly applied to mobile systems such as mobile phones and tablet computers.
In recent years, the emergence of recurrent neural networks, and in particular the LSTM layer, has made it possible to exploit the temporal information of the input data effectively while avoiding tedious manual feature extraction, which has greatly improved the recognition performance for online handwritten Chinese characters. In general, however, recurrent neural networks have high computational complexity and long forward computation times, and are therefore difficult to embed into mobile devices. It is thus important to accelerate recurrent neural networks while preserving recognition accuracy.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an SVD-based acceleration method of a recurrent neural network for handwritten Chinese character recognition, which reduces the network's computation while preserving its recognition rate.
The technical scheme of the invention is realized as follows:
An acceleration method of a recurrent neural network for handwritten Chinese character recognition based on SVD (Singular Value Decomposition) comprises the following steps:
s1: designing and training a recurrent neural network for online handwritten Chinese characters;
s2: performing SVD on the parameter matrices and computing the decomposed parameter matrices according to the required acceleration factor;
s3: initializing the network with the parameter matrices obtained by the decomposition;
s4: retraining the whole network on the online handwriting recognition task to fine-tune it;
s5: optimizing the forward implementation and dynamically setting the time node length of the network in the forward process.
Further, the specific steps of S1 are as follows:
s11: designing the structure of the recurrent neural network, setting parameters of an LSTM layer and a full connection layer, and selecting the length of a time node;
s12: taking the training set data as the input of the recurrent neural network, training it with an adaptive gradient descent method, terminating the training when the error on the training set has fully converged, and saving the network parameters.
Further, the specific steps of S2 are as follows:
s21: performing SVD on the parameter matrices of the LSTM layers and computing the retained parameter matrices according to the required acceleration factor;
s22: performing SVD on the parameter matrix of the fully connected layer and computing the retained parameter matrix according to the required acceleration factor.
Further, the specific steps of S3 are as follows:
s31: initializing the LSTM layers according to the parameter matrices computed in step S2;
s32: initializing the fully connected layer according to the parameter matrix computed in step S2.
Further, in step S4, the network after SVD decomposition is retrained on the handwritten Chinese character recognition task at a learning rate reduced by a factor of 10, so as to fine-tune it.
Further, the specific steps of S5 are as follows:
s51: calculating the time node length of each input character;
s52: dynamically setting the time node length of the recurrent neural network in the forward process according to the time node length of the input character.
Compared with the prior art, the principle and the advantages of the scheme are as follows:
1. The method applies a recurrent neural network to online handwritten character recognition, effectively exploiting the temporal order among the strokes of online Chinese characters, and thereby significantly improves the recognition rate of the network.
2. The SVD-based acceleration method decomposes the parameter matrices of the LSTM layers and the fully connected layer, which greatly reduces the redundancy of the matrices, so that the recognition performance of the network is preserved while the computational complexity is reduced; in the forward process, the time node length of the recurrent neural network is set dynamically according to the input characters, which effectively shortens the forward time.
Drawings
FIG. 1 is a flow chart of the acceleration method of the recurrent neural network for handwritten Chinese character recognition based on SVD of the present invention;
FIG. 2 is a schematic diagram of the SVD decomposition in step S2 according to the present invention;
FIG. 3 is a schematic diagram of the LSTM layer in step S2 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention mainly addresses the problem that online handwritten Chinese character recognition based on a recurrent neural network is too slow. It analyzes the computational characteristics of the LSTM layers and the fully connected layer and proposes a corresponding strategy: each parameter matrix is decomposed by SVD into two smaller matrices, the implementation code is modified accordingly, and the computation is then carried out as two matrix multiplications. The whole flow is shown in FIG. 1.
the embodiment of the invention comprises the following steps: s1: designing and training a recurrent neural network for online handwritten Chinese characters; s2: SVD decomposition is carried out on the parameter matrix, and the decomposed parameter matrix is calculated according to the multiple of required acceleration; s3: initializing the network according to the parameter matrix obtained by decomposition; s4: the whole network is retrained aiming at the online handwriting recognition task so as to achieve the fine adjustment effect; s5: optimizing forward implementation and dynamically setting the time node length of the forward process network. Specifically, a network is designed for training to obtain an initial model, then parameters reserved after SVD are calculated according to the size of each parameter matrix and the acceleration multiple, the network is initialized according to the parameters after SVD, retraining is performed on a handwriting recognition task, and finally forward implementation is optimized, the time node length of a forward process network is dynamically set, and forward calculation time is shortened.
The main steps of the embodiments of the present invention are described in detail below.
Step S1: designing and training a recurrent neural network for online handwritten Chinese characters, comprising the following steps.
S11: designing the structure of the recurrent neural network, setting parameters of an LSTM layer and a full connection layer, and selecting the length of a time node;
In an embodiment of the present invention, each input online handwritten Chinese character is preprocessed into a sequence of 150 coordinate points, each point represented by a 6-dimensional vector. The deep recurrent neural network model comprises 2 LSTM layers with output dimensions of 100 and 512 respectively and a time node length of 150, followed by a fully connected layer with 512 output neurons; notably, the fully connected layer is followed by a ReLU activation function. The last layer is the output layer with 3755 classes. The overall structure of the initial network is represented as:
Input--100LSTM--512LSTM--512FC--Output
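For reference, this structure can be sketched in PyTorch as follows (a minimal sketch, not the patented implementation; the class name HCCRNet, the batch-first layout, and classifying from the last time node's hidden state are illustrative assumptions, while the layer sizes, the ReLU after the fully connected layer, and the 3755-way output follow the description above):

```python
import torch
import torch.nn as nn

class HCCRNet(nn.Module):
    """Sketch of the described structure: Input--100LSTM--512LSTM--512FC--Output."""
    def __init__(self, in_dim=6, num_classes=3755):
        super().__init__()
        self.lstm1 = nn.LSTM(in_dim, 100, batch_first=True)  # first LSTM layer, 100-dim output
        self.lstm2 = nn.LSTM(100, 512, batch_first=True)     # second LSTM layer, 512-dim output
        self.fc = nn.Linear(512, 512)                        # fully connected layer, 512 neurons
        self.relu = nn.ReLU()                                # ReLU activation after the FC layer
        self.out = nn.Linear(512, num_classes)               # output layer, 3755 classes

    def forward(self, x):                  # x: (batch, T=150, 6) coordinate-point sequence
        h, _ = self.lstm1(x)
        h, _ = self.lstm2(h)
        h = self.relu(self.fc(h[:, -1]))   # hidden state of the last time node (assumption)
        return self.out(h)
```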
s12: training the designed network;
During training, the network is trained with a gradient descent method with adaptive momentum. Training consists of two steps, forward propagation and backward propagation: the forward pass propagates the input through the network and computes its error, and the backward pass updates the parameters of each layer, so that the network parameters are continuously optimized. Every ten thousand iterations, the current model is evaluated on the full test set, and the model that achieves the highest test result is kept.
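A minimal training-loop sketch of this procedure (Adam stands in for the "gradient descent method with adaptive momentum"; train_loader and evaluate_and_checkpoint are hypothetical helpers, not part of the patent):

```python
import torch.nn as nn
import torch.optim as optim

model = HCCRNet()  # the network sketched above
optimizer = optim.Adam(model.parameters(), lr=1e-3)  # adaptive-momentum gradient descent
criterion = nn.CrossEntropyLoss()

for step, (seqs, labels) in enumerate(train_loader):  # train_loader: hypothetical data loader
    logits = model(seqs)                  # forward propagation computes the network output
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()                       # backward propagation updates each layer's parameters
    optimizer.step()
    if step % 10000 == 0:                 # every ten thousand iterations, test on the test set
        evaluate_and_checkpoint(model)    # hypothetical helper: keep the best-scoring model
```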
Step S2: performing SVD on the parameter matrices and computing the decomposed parameter matrices according to the required acceleration factor, comprising the following steps.
S21: performing SVD on the parameter matrices of the LSTM layers and computing the decomposed parameter matrices according to the required acceleration factor.
A schematic diagram of the SVD decomposition is shown in FIG. 2. Assume an input vector I ∈ R^m, an output vector O ∈ R^n, and a parameter matrix W ∈ R^(m×n). SVD decomposition of W gives W = U S V^T. Sorting the singular values from large to small and retaining the first r singular values and the corresponding singular vectors yields

W ≈ U_r S_r V_r^T.

Let P = U_r S_r and Q = V_r^T; then W ≈ PQ. Thus one parameter matrix can be decomposed by SVD into two smaller matrices, with the computational complexity reduced from O(mn) to O(r(m + n)).
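A minimal NumPy sketch of this truncated decomposition (the matrix shape and rank are illustrative; the singular values S_r are absorbed into the left factor P, matching W ≈ PQ above):

```python
import numpy as np

def svd_decompose(W, r):
    """Decompose W (m x n) into P (m x r) and Q (r x n) so that W ≈ P @ Q."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)  # singular values come sorted descending
    P = U[:, :r] * S[:r]   # P = U_r S_r: absorb the top-r singular values into U_r
    Q = Vt[:r, :]          # Q = V_r^T: top-r right singular vectors
    return P, Q

W = np.random.randn(512, 2048)
P, Q = svd_decompose(W, r=64)
print(np.linalg.norm(W - P @ Q) / np.linalg.norm(W))  # relative approximation error
```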
A schematic diagram of the LSTM layer is shown in FIG. 3. The layer is formed by 4 gates: an input gate i_t ∈ R^N, a forget gate f_t ∈ R^N, an output gate o_t ∈ R^N, and an input modulation gate g_t ∈ R^N. The input at each time node comprises the input vector x_t ∈ R^C of the current time node and the hidden state h_(t-1) ∈ R^N of the previous time node. The forward calculation formulas are as follows:
i_t = σ(W_xi x_t + W_hi h_(t-1) + b_i)    (1)
f_t = σ(W_xf x_t + W_hf h_(t-1) + b_f)    (2)
o_t = σ(W_xo x_t + W_ho h_(t-1) + b_o)    (3)
g_t = tanh(W_xc x_t + W_hc h_(t-1) + b_c)    (4)
c_t = f_t ⊙ c_(t-1) + i_t ⊙ g_t    (5)
h_t = o_t ⊙ tanh(c_t)    (6)
According to the above formulas, the 4 matrices W_xi, W_xf, W_xo, W_xc used for the computation on the input vector x_t can be viewed as one large matrix W_x ∈ R^(C×4N), and the 4 matrices W_hi, W_hf, W_ho, W_hc used for the computation on the hidden state h_(t-1) can be viewed as one large matrix W_h ∈ R^(N×4N). The entire LSTM layer can thus be seen as comprising two parameter matrices W_x ∈ R^(C×4N) and W_h ∈ R^(N×4N), and the computational complexity of each time node is O(4(N² + NC)). SVD is performed on the two parameter matrices respectively, retaining the first r singular values and singular vectors, so that W_x ≈ P_x Q_x and W_h ≈ P_h Q_h. Each parameter matrix is decomposed into two smaller matrices, and the total computational complexity is reduced to O(4(3N + C)r). Therefore, to accelerate the LSTM by a factor of d, the value of r should be set to:

r = N(N + C) / (d(3N + C))    (7)
S22: performing SVD on the parameter matrix of the fully connected layer and computing the decomposed parameter matrix according to the required acceleration factor.
Assume the input of the fully connected layer is x ∈ R^M and the output is f ∈ R^N. The calculation formula is:

f = W_f x + b,    (8)

where the parameter matrix is W_f ∈ R^(N×M) and the bias is b ∈ R^N; the computational complexity is O(MN).

SVD is performed on the parameter matrix W_f, retaining the first r singular values and the corresponding singular vectors, which gives W_f ≈ P_f Q_f. The parameter matrix is decomposed into two smaller matrices with a computational complexity of O(r(M + N)). Therefore, to accelerate the fully connected layer by a factor of d, the value of r should be set to:

r = MN / (d(M + N))    (9)
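Formulas (7) and (9) can be sketched as two small helpers (a sketch under the complexity counts stated above; the function names and the example sizes are illustrative assumptions):

```python
def lstm_rank(N, C, d):
    """Formula (7): rank r cutting the LSTM cost from 4(N^2 + NC) to 4(3N + C)r per time node."""
    return max(1, int(N * (N + C) / (d * (3 * N + C))))

def fc_rank(M, N, d):
    """Formula (9): rank r cutting the fully connected cost from MN to r(M + N)."""
    return max(1, int(M * N / (d * (M + N))))

# e.g. to accelerate the 512-dim LSTM fed by 512-dim inputs and the 512x512 FC layer by d = 2:
print(lstm_rank(512, 512, 2))  # -> 128
print(fc_rank(512, 512, 2))    # -> 128
```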
Step S3: initializing the network with the parameter matrices obtained by the decomposition, comprising the following steps.
S31: initializing the LSTM layers according to the parameter matrices computed in step S2.
As shown in FIG. 2, the SVD decomposition turns one parameter matrix into two smaller parameter matrices. In the LSTM implementation, the matrix multiplication W_x x_t is therefore decomposed into two matrix multiplications P_x (Q_x x_t), initialized according to the parameter matrices computed in step S2. The computation W_h h_(t-1) is modified and initialized in the same way.
S32: initializing the full connection layer according to the parameter matrix calculated in step S2.
For the fully-connected layer, its implementation is modified so that the first matrix multiplication Wfx is decomposed into two matrix multiplications PfQfx and is initialized according to the parameter matrix calculated in step S2.
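A sketch of this modification for the fully connected layer in PyTorch (decompose_linear is an illustrative helper, not the patent's code; the first factor Q_f carries no bias and the original bias b stays with the second factor P_f, so that W_f x + b ≈ P_f (Q_f x) + b):

```python
import torch
import torch.nn as nn

def decompose_linear(fc: nn.Linear, r: int) -> nn.Sequential:
    """Replace one Linear layer (weight N x M) with Q_f (r x M, no bias) then P_f (N x r, bias b)."""
    U, S, Vt = torch.linalg.svd(fc.weight.data, full_matrices=False)
    first = nn.Linear(fc.in_features, r, bias=False)
    second = nn.Linear(r, fc.out_features)
    first.weight.data = Vt[:r, :].clone()            # Q_f = V_r^T
    second.weight.data = (U[:, :r] * S[:r]).clone()  # P_f = U_r S_r
    second.bias.data = fc.bias.data.clone()          # original bias is kept unchanged
    return nn.Sequential(first, second)

fc = nn.Linear(512, 512)
fc_svd = decompose_linear(fc, r=128)  # two smaller matrix multiplications, then fine-tune
```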
Step S4: retraining the whole network on the online handwriting recognition task to fine-tune it.
After SVD, only the first r largest singular values and their singular vectors are retained, so some parameter precision is lost, which affects the overall performance of the network to a certain extent. The whole recognition network is therefore fine-tuned at a low learning rate on the handwriting recognition task, which restores the recognition accuracy.
Step S5: optimizing the forward implementation and dynamically setting the time node length of the network in the forward process, comprising the following steps.
S51: calculating the time node length of each input character;
the time node length of each input character can be obtained by counting the raw input data.
S52: dynamically setting the time node length of a recurrent neural network in the forward process according to the time node length of the input character;
During training, the time node length of the recurrent neural network must be fixed, so a time node length T is preset for the network. According to the LSTM forward calculation formulas in step S2, the total computational complexity of the LSTM layer is O(T · 4(N² + NC)), which is linear in the time node length. Suppose the time node length of the i-th character in the Chinese character set is T_i; then the time node length T of the network during training is determined by:

T = max{T_1, T_2, T_3, ..., T_(N-2), T_(N-1), T_N}    (10)

During training, characters whose time node length is smaller than T are zero-padded up to length T. For most characters the network's time node length is therefore larger than the character's own length, which greatly increases the computational complexity. In the forward implementation of the present invention, the time node length of the network in the forward process is set dynamically according to the time node length of each input character. If the time node length of the input character is T_i, the time complexity of the recurrent neural network is reduced to O(T_i · 4(N² + NC)) with T_i ≤ T. Using the dynamically changing time node length effectively reduces the computational complexity and shortens the forward computation time without affecting the recognition accuracy.
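A minimal sketch of this dynamic forward pass (assuming the HCCRNet sketch above; time_node_length and dynamic_forward are illustrative names, and T_max matches the preset length T = 150 of this embodiment):

```python
def time_node_length(points, T_max=150):
    """T_i: the number of real coordinate points of one character, counted from the raw input."""
    return min(len(points), T_max)

def dynamic_forward(model, seq, T_i):
    """Run forward over only the T_i real time nodes of one character (T_i <= T)."""
    return model(seq[:, :T_i, :])  # cost drops from O(T·4(N²+NC)) to O(T_i·4(N²+NC))
```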
This embodiment applies a recurrent neural network to online handwritten character recognition, effectively exploiting the temporal order among the strokes of online Chinese characters and significantly improving the recognition rate of the network. In addition, the SVD-based acceleration method decomposes the parameter matrices of the LSTM layers and the fully connected layer, greatly reducing the redundancy of the matrices, so that the recognition performance of the network is preserved while the computational complexity is reduced; in the forward process, the time node length of the recurrent neural network is set dynamically according to the input characters, which effectively shortens the forward time.
The above-mentioned embodiments are merely preferred embodiments of the present invention, and the scope of the present invention is not limited thereto; variations based on the shape and principle of the present invention shall fall within the protection scope of the present invention.

Claims (4)

1. An SVD-based acceleration method of a recurrent neural network for handwritten Chinese character recognition, characterized in that the method comprises the following steps:
s1: designing and training a recurrent neural network for online handwritten Chinese characters;
s2: performing SVD on the parameter matrices and computing the decomposed parameter matrices according to the required acceleration factor;
s3: initializing the network with the parameter matrices obtained by the decomposition;
s4: retraining the whole network on the online handwriting recognition task to fine-tune it;
s5: optimizing the forward implementation and dynamically setting the time node length of the network in the forward process;
the specific steps of S2 are as follows:
s21: performing SVD on the parameter matrices of the LSTM layers and computing the retained parameter matrices according to the required acceleration factor;
s22: performing SVD on the parameter matrix of the fully connected layer and computing the retained parameter matrix according to the required acceleration factor;
the specific steps of S5 are as follows:
s51: calculating the time node length of each input character;
s52: dynamically setting the time node length of the recurrent neural network in the forward process according to the time node length of the input character.
2. The SVD-based acceleration method of a recurrent neural network for handwritten Chinese character recognition of claim 1, characterized in that the specific steps of S1 are as follows:
s11: designing the structure of the recurrent neural network, setting the parameters of the LSTM layers and the fully connected layer, and selecting the time node length;
s12: taking the training set data as the input of the recurrent neural network, training it with an adaptive gradient descent method, terminating the training when the error on the training set has fully converged, and saving the network parameters.
3. The SVD-based acceleration method of a recurrent neural network for handwritten Chinese character recognition of claim 1, characterized in that the specific steps of S3 are as follows:
s31: initializing the LSTM layers according to the parameter matrices computed in step S2;
s32: initializing the fully connected layer according to the parameter matrix computed in step S2.
4. The SVD-based acceleration method of a recurrent neural network for handwritten Chinese character recognition of claim 1, characterized in that: in step S4, the network after SVD decomposition is retrained on the handwritten Chinese character recognition task at a learning rate reduced by a factor of 10, so as to fine-tune it.
CN201810502952.1A 2018-05-23 2018-05-23 SVD-based acceleration method of recurrent neural network for handwritten Chinese character recognition Active CN110533043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810502952.1A CN110533043B (en) 2018-05-23 2018-05-23 SVD-based acceleration method of recurrent neural network for handwritten Chinese character recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810502952.1A CN110533043B (en) 2018-05-23 2018-05-23 SVD-based acceleration method of recurrent neural network for handwritten Chinese character recognition

Publications (2)

Publication Number Publication Date
CN110533043A CN110533043A (en) 2019-12-03
CN110533043B (en) 2022-04-08

Family

ID=68656946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810502952.1A Active CN110533043B (en) 2018-05-23 2018-05-23 SVD-based acceleration method of recurrent neural network for handwritten Chinese character recognition

Country Status (1)

Country Link
CN (1) CN110533043B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102387378B1 (en) * 2014-10-07 2022-04-15 삼성전자주식회사 Method and apparatus for recognizing gait motion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5245675A (en) * 1990-10-09 1993-09-14 Thomson-Csf Method for the recognition of objects in images and application thereof to the tracking of objects in sequences of images
CN103020657A (en) * 2012-12-28 2013-04-03 沈阳聚德视频技术有限公司 License plate Chinese character recognition method
CN106919942A (en) * 2017-01-18 2017-07-04 South China University of Technology Accelerated compression method of deep convolutional neural networks for handwritten Chinese character recognition
CN107944555A (en) * 2017-12-07 2018-04-20 Guangzhou Huaduo Network Technology Co Ltd Method, storage device and terminal for compressing and accelerating a neural network
CN108053027A (en) * 2017-12-18 2018-05-18 Sun Yat-sen University Method and device for accelerating a deep neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Accelerating a recurrent neural network to finite-time convergence for solving time-varying Sylvester equation by using a sign-bi-power activation function;Shuai Li et al.;《Springer Science》;20120912;189-204 *
On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition;Rohit Prabhavalkar et al.;《2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)》;20160325;5970-5974 *
A survey of deep learning applications in handwritten Chinese character recognition; Jin Lianwen et al.; Acta Automatica Sinica; 20160831; Vol. 42, No. 08; 1125-1141 *

Also Published As

Publication number Publication date
CN110533043A (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN106919942B (en) Accelerated compression method of deep convolution neural network for handwritten Chinese character recognition
CN107293288B (en) Acoustic model modeling method of residual long-short term memory recurrent neural network
CN108960301B (en) Ancient Yi-nationality character recognition method based on convolutional neural network
CN110334589B (en) High-time-sequence 3D neural network action identification method based on hole convolution
CN108875696A Offline handwritten Chinese character recognition method based on depthwise separable convolutional neural networks
CN108319988B (en) Acceleration method of deep neural network for handwritten Chinese character recognition
CN109919174A Character recognition method based on a gated cascade attention mechanism
CN111291696A (en) Handwritten Dongba character recognition method based on convolutional neural network
CN111126602A (en) Cyclic neural network model compression method based on convolution kernel similarity pruning
CN107784316A Image recognition method, device, system and computing device
CN108985442B (en) Handwriting model training method, handwritten character recognition method, device, equipment and medium
AU2021100391A4 (en) Natural Scene Text Recognition Method Based on Sequence Transformation Correction and Attention Mechanism
CN111144500A (en) Differential privacy deep learning classification method based on analytic Gaussian mechanism
CN115272774A (en) Sample attack resisting method and system based on improved self-adaptive differential evolution algorithm
Inunganbi et al. Handwritten Meitei Mayek recognition using three‐channel convolution neural network of gradients and gray
CN110895933B (en) Far-field speech recognition method based on space-time residual error neural network
CN110533043B (en) SVD-based acceleration method of recurrent neural network for handwritten Chinese character recognition
CN111783688B (en) Remote sensing image scene classification method based on convolutional neural network
CN106952287A Video multi-object segmentation method based on low-rank sparse representation
Ali et al. High Accuracy Arabic Handwritten Characters Recognition Using Error Back Propagation Artificial Neural Networks
CN111552805B (en) Question and answer system question and sentence intention identification method
CN113920291A (en) Error correction method and device based on picture recognition result, electronic equipment and medium
US10909421B2 (en) Training method for phase image generator and training method of phase image classifier
Liu et al. Optimizing CNN using adaptive moment estimation for image recognition
CN112163514A (en) Method and device for identifying traditional Chinese characters and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240621

Address after: South China University of technology 381 Wushan Road Tianhe District Guangzhou City Guangdong Province

Patentee after: SOUTH CHINA University OF TECHNOLOGY

Country or region after: China

Patentee after: FUJITSU RESEARCH AND DEVELOPMENT CENTER Co.,Ltd.

Address before: 510640 South China University of technology, 381 Wushan Road, Tianhe District, Guangzhou City, Guangdong Province

Patentee before: SOUTH CHINA University OF TECHNOLOGY

Country or region before: China

Patentee before: FUJITSU Ltd.

Country or region before: Japan