CN110543888A - image classification method based on cluster recurrent neural network - Google Patents

image classification method based on cluster recurrent neural network

Info

Publication number
CN110543888A
CN110543888A (application CN201910638362.6A)
Authority
CN
China
Prior art keywords
neuron
cluster
layer
value
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910638362.6A
Other languages
Chinese (zh)
Other versions
CN110543888B (en)
Inventor
程振波
沈正圆
张雷雷
林怀迪
高晶莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201910638362.6A
Publication of CN110543888A
Application granted
Publication of CN110543888B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

An image classification method based on a cluster recurrent neural network comprises the following steps: dividing the data set to be recognized into a training set and a test set according to the labels; preprocessing the training set and the test set to form input vectors; constructing a cluster recurrent neural network based on neuron clusters to form sparsely expressed feature vectors; adjusting the weights between the recurrent layer and the output layer with a learning algorithm modulated by a reward signal to form a mature classifier; and classifying images with that classifier. The network structure and learning method are simple, give good classification results, have a degree of generality, and are easy to implement in hardware.

Description

Image classification method based on cluster recurrent neural network
Technical Field
The invention provides an image classification method based on a cluster recurrent neural network.
Background Art
Many problems in the field of artificial intelligence ultimately reduce to classification, the most typical example being image recognition. Traditional classification methods include support vector machines, decision trees and the like. These methods can be regarded as supervised learning: given data and the corresponding labels, an optimal classification surface is found by solving a given optimization objective function. Deep neural networks may also be used for data classification; they are usually trained by setting an optimization objective function and adjusting the network parameters with error back-propagation. However, back-propagation requires computing complex function gradients and passing the error gradients layer by layer, and this computational complexity makes it difficult to implement in the biological brain. Methods such as support vector machines and deep neural networks that achieve classification through supervised learning are therefore considered physiologically implausible; that is, the biological brain probably performs classification by methods other than supervised learning.
A class of reinforcement learning methods that achieve classification through reward signals is considered more physiologically plausible, mainly for two reasons. First, reinforcement learning does not require the classification result itself (the label corresponding to the data) but only a feedback signal indicating whether the classification result is correct or incorrect (hereinafter referred to as the reward). Second, neurophysiological experiments indicate that reward signals are closely related to midbrain dopamine neurons. Although reinforcement learning has succeeded on complex decision-making tasks such as Go, such approaches tend to focus only on specific engineering problems and neglect their physiological plausibility.
Disclosure of the Invention
The invention provides a cluster recurrent neural network and a method that uses reward signals to modulate the network's synapses for image classification, drawing on recent findings in neuroscience. The cluster recurrent neural network consists of a number of clusters; each cluster is composed of neurons that are recurrently connected to one another, and the activity of the neurons within a cluster is computed in a winner-take-all manner. In addition, the expression patterns of neurons across different clusters are combined through a learning rule based on reward-signal-modulated synapses, thereby achieving image classification. The network structure and learning method are not only simple to compute and implement but also physiologically plausible. Test results on a handwritten digit recognition data set show that the classification method has a degree of generality and is easy to implement in hardware.
The invention constructs a cluster recurrent neural network based on neuron clusters, maps low-dimensional image data into a high-dimensional space, extracts image features in that space in a winner-take-all manner, and learns the mapping from image features to image categories through reward signals, thereby achieving recognition.
The technical solution adopted by the invention to solve the above technical problem is as follows:
an image classification method based on a cluster recurrent neural network specifically comprises the following steps:
1. Preprocess the image to be recognized to form an input vector.
2. Extract image features with a neuron-cluster recurrent neural network: a high-dimensional recursive layer containing neuron clusters is constructed, and feature extraction is completed within each neuron cluster in a winner-take-all manner. The method comprises the following steps:
2.1. Take the input vector obtained by preprocessing the image to be recognized as the input of the network, recorded as a one-dimensional column vector I = [i1, i2, ..., im]^T corresponding to m input neurons.
2.2. Construct a recursive layer M containing n neurons and form neuron clusters. All neurons are randomly divided into k equal-sized clusters, i.e. each cluster contains n/k neurons; the clusters are denoted Cluster_i, i = 1, 2, ..., k. Neurons within the same cluster are connected to one another, while neurons in different clusters are not connected.
2.3. Determine the connection matrix W between the input layer and the recursive layer M. Each input neuron is connected to each recurrent neuron with probability p = 0.1, i.e. 90% of the entries of W are 0; the non-zero entries are drawn from a standard Gaussian distribution (mean 0, variance 1).
2.4. Calculate the activity of each neuron in every neuron cluster. Each neuron in a cluster determines whether it fires according to its input value: the neuron with the largest input value in the cluster is activated, and the other neurons in the cluster remain at rest. A neuron in the activated state has an output value of 1, and a neuron in the resting state has an output value of 0. The input value Y of the neurons in the recursive layer M is:
Y = WI    (1)
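For illustration, the following sketch (an assumption-laden Python/NumPy rendering, not part of the patent; the function names, the random seed and the use of NumPy are choices of this sketch) shows how steps 2.2 to 2.4 could be realized: a sparse input-to-recursive-layer matrix W and a winner-take-all computation of cluster activity.

```python
import numpy as np

def build_input_weights(m, n, p=0.1, rng=None):
    """Step 2.3: connection matrix W between the m input neurons and the
    n recursive-layer neurons. Each entry is non-zero with probability p
    (so ~90% of entries are 0 for p = 0.1); non-zero values are drawn
    from a standard Gaussian distribution (mean 0, variance 1)."""
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random((n, m)) < p
    return rng.standard_normal((n, m)) * mask

def cluster_winner_take_all(W, I, k):
    """Steps 2.2 and 2.4: compute Y = W @ I (equation (1)), then within each
    of the k equal-sized clusters let only the neuron with the largest input
    fire (output 1); all other neurons stay at rest (output 0)."""
    Y = W @ I
    n = Y.shape[0]
    M = np.zeros(n)
    for cluster in np.split(np.arange(n), k):   # requires n divisible by k
        winner = cluster[np.argmax(Y[cluster])]
        M[winner] = 1.0
    return M                                    # sparse activity of layer M
```

Because exactly one neuron per cluster is active, the resulting vector has k ones among n entries, which is the sparse high-dimensional feature referred to in step 4.2.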
3. Image features are mapped to image classes based on reward signal learning, forming a mature classifier. The method comprises the following steps:
3.1. Define the output layer that represents the categories. If the images to be processed have l different classes in total, the output layer has l neurons, whose output is denoted Z = [z1, z2, ..., zl]^T. The recursive layer M is connected to the output layer by a matrix H, each element of which is randomly assigned according to a uniform distribution on [0, 1]. The output is then:
Z = HY    (2)
Only one neuron in the output layer is active, and that neuron corresponds to a specific class. To compute the output value of each output-layer neuron, each element of the output-layer column vector Z is first transformed as in equation (3), yielding a column vector P = [p1, p2, ..., pl]^T; the index of the maximum value of P is then taken, K = argmax(P), where K is the category number of the input image.
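Equation (3), which converts Z into the vector P, is not reproduced in the text above; the sketch below therefore substitutes a softmax purely as a placeholder, and it feeds the sparse winner-take-all activity vector (the recursive-layer vector M of equation (5)) into equation (2). Both choices are assumptions of this illustration.

```python
import numpy as np

def predict(H, M):
    """Step 3.1: output-layer computation. Z = H @ M per equation (2); the
    mapping from Z to P (equation (3)) is not reproduced in the source text,
    so a softmax is used here only as a stand-in. Returns the predicted
    class index K and the vector P."""
    Z = H @ M
    P = np.exp(Z - Z.max())
    P = P / P.sum()            # placeholder for equation (3)
    K = int(np.argmax(P))      # K = argmax(P), the predicted category
    return K, P
```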
3.2. Modulate the connection matrix H between the recursive layer M and the output layer with the reward signal, thereby learning the mapping from features to classes. The steps are as follows:
3.2.1. Calculate the reward value R from the category label T (with l classes, T ∈ [1, l]) and the network's prediction K, namely:
3.2.2. Adjust the weights of the connection matrix H between the output layer and the recursive layer according to the column vector P computed in step 3.1:
Ht+1[K,:] = Ht[K,:] + η*(R - P)*M^T    (5)
where η is the learning rate, M^T is the transpose of the recursive-layer column vector, and t is the iteration number. Ht[K,:] denotes the K-th row vector of the connection matrix H between the recursive layer and the output layer at iteration t.
3.2.3. Normalize the connection matrix H: if an element of the matrix is greater than the threshold c, the connection is considered to exist and is set to 1; otherwise the connection is considered absent and is set to 0. This yields the updated connection matrix H:
3.2.4. Repeat until the number of iterations reaches the preset target, then store the connection matrix of the network, forming the mature classifier.
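A minimal training sketch for steps 3.2.1 to 3.2.4 follows. It rests on explicit assumptions: the reward definition of equation (4) is not reproduced above, so R is taken as 1 for a correct prediction and 0 otherwise; the term (R - P) in equation (5) is applied to the K-th entry of P; and the binarisation of step 3.2.3 is applied once, after the final iteration, since the text leaves the iteration granularity open. None of these details are confirmed by the source.

```python
import numpy as np

def train_readout(features, labels, l, n, eta=0.01, c=0.5, epochs=10, rng=None):
    """Steps 3.2.1-3.2.4: learn the recursive-to-output matrix H by
    reward-modulated updates of its K-th row, then binarise H against the
    threshold c. `features` are sparse cluster-activity vectors M of length n,
    `labels` are class indices in [0, l-1] (a zero-based convention chosen
    for this sketch)."""
    rng = np.random.default_rng(0) if rng is None else rng
    H = rng.random((l, n))                     # uniform [0, 1] initialisation
    for _ in range(epochs):
        for M, T in zip(features, labels):
            K, P = predict(H, M)               # equations (2)-(3)
            R = 1.0 if K == T else 0.0         # assumed reward (equation (4))
            H[K, :] += eta * (R - P[K]) * M    # equation (5), K-th row only
    return (H > c).astype(float)               # step 3.2.3: binarise against c
```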
4. The classification of images is carried out with the mature classifier.
4.1. Apply the same preprocessing as in step 1 to the images to be classified, forming the input vectors.
4.2. Extract the features of the input vector as in equation (1). Feature extraction with the recursive cluster neural network not only maps the features from a low-dimensional to a high-dimensional space, but also makes the features sparse, owing to the winner-take-all activity computation within the recursive network.
4.3. Compute the output value of the feature vector according to formula (2), using in formula (2) the connection matrix H stored after the mature classifier was formed in step 3.2.3.
4.4. Obtain the classification result of the image according to formula (3).
The invention provides a method that extracts image features with a cluster recurrent neural network, learns the mapping from image features to image categories through a reward signal to form a mature classifier, and then completes image classification with that classifier. The method mainly comprises the following steps: preprocessing the image to be recognized to form an input vector; extracting image features with a neuron-cluster recurrent neural network; mapping the image features to image classes through reward-signal learning, thereby forming a mature classifier; and classifying images with the mature classifier.
The advantages of the invention are: the network structure and learning method are simple, give good classification results, have a degree of generality, and are easy to implement in hardware.
Drawings
FIG. 1 is a schematic diagram of a neuron cluster-based clustered recurrent neural network architecture of the present invention;
FIG. 2 is a flow chart of the training process of the mature classifier based on the cluster recurrent neural network of the present invention.
Detailed Description
The classification method based on neuron clusters can be applied to general classification tasks, including but not limited to image classification, character classification, video data classification and the like. The following description of embodiments of the present invention is provided for illustration, and it is to be understood that the invention is not limited to the specific embodiments shown. The invention is further described below with reference to the drawings, taking handwritten digit recognition as an example.
The invention discloses an image classification method based on a cluster recurrent neural network, which specifically comprises the following steps:
1. Preprocess the image to be recognized to form an input vector.
1.1. Initialize the neural network parameters (reference numeral 1 in FIG. 2). The hyper-parameters are configured; their specific values after model tuning are listed in the following table.
1.2. Preprocess the image data (reference numeral 2 in FIG. 2), finally converting the input picture data into a one-dimensional feature vector.
1.2.1. In this example the handwritten digit data set MNIST is converted from its binary format into PNG pictures. The pictures are divided into a training set and a test set and sorted into folders according to their digit labels for later use. Each picture is processed as a grayscale image and finally converted into a two-dimensional matrix of shape 28 × 28.
1.2.2. Perform feature extraction on the two-dimensional picture matrix with Gabor filters. The input matrix is scaled to 8 matrices of different sizes, representing 8 scaled pictures. Pictures of adjacent scales are paired, finally yielding 4 groups. The invention uses Gabor filters in 8 directions in total for the filtering operation (θ = 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°). σ is the bandwidth of the filter, λ the wavelength and γ the spatial aspect ratio; their specific values were set when the neural network parameters were initialized in step 1. Through the filtering operation, an input 28 × 28 picture is converted into the S1-layer feature matrices. The filter is constructed according to the following formulas:
x' = x cosθ + y sinθ    (8)
y' = -x sinθ + y cosθ    (9)
1.2.3. Perform a max-pooling operation over the same-orientation filter matrices within each group output by the S1 layer in step 1.2.2, forming the C1-layer feature matrices.
1.2.4. Apply a dimension-squeeze (flattening) operation to all matrices output by the C1 layer in step 1.2.3, forming a 2560 × 1 feature vector.
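A rough sketch of the preprocessing in steps 1.2.1 to 1.2.4 is given below. The patent's own Gabor kernel formula, the eight image scales, the pooling windows and the tuned values of σ, λ and γ are not reproduced in this text, so conventional stand-ins are used; consequently the resulting feature length will generally differ from the 2560 stated above.

```python
import numpy as np
from PIL import Image
from scipy.signal import convolve2d

def gabor_kernel(theta, sigma=2.0, lam=4.0, gamma=0.5, size=7):
    """A conventional real-valued Gabor kernel used as a stand-in for the
    filter of step 1.2.2; sigma, lam and gamma are placeholder values."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # equation (8)
    yr = -x * np.sin(theta) + y * np.cos(theta)      # equation (9)
    return np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / lam)

def preprocess(png_path,
               scales=(28, 26, 24, 22, 20, 18, 16, 14),   # 8 assumed sizes
               thetas=np.deg2rad(np.arange(0, 360, 45))):  # 8 orientations
    """Steps 1.2.1-1.2.4: grayscale picture -> 8 scaled copies ->
    8-orientation Gabor responses (S1) -> max over each adjacent-scale pair
    per orientation (C1) -> flattened one-dimensional feature vector."""
    img = Image.open(png_path).convert("L")               # step 1.2.1
    parts = []
    for s_a, s_b in zip(scales[0::2], scales[1::2]):      # 4 groups of 2 scales
        a = np.asarray(img.resize((s_a, s_a)), dtype=np.float32) / 255.0
        b = np.asarray(img.resize((s_b, s_b)), dtype=np.float32) / 255.0
        for theta in thetas:
            g = gabor_kernel(theta)
            ra = convolve2d(a, g, mode="same")            # S1 responses
            rb = convolve2d(b, g, mode="same")
            rb_up = np.asarray(Image.fromarray(rb.astype(np.float32))
                               .resize((s_a, s_a)))
            parts.append(np.maximum(ra, rb_up).ravel())   # C1: max over scales
    return np.concatenate(parts)                          # step 1.2.4: flatten
```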
2. Image feature extraction based on the neuron-cluster recurrent neural network. The structure of the constructed neural network is shown in FIG. 1.
2.1. The one-dimensional feature vector obtained from the preprocessing in step 1 is taken as the input layer and denoted I = [i1, i2, ..., im]^T, where m = 2560.
2.2. A recursive layer M is constructed with n = 100000 neurons. All neurons are randomly divided into k = 10000 equal-sized clusters, i.e. each cluster contains n/k = 10 neurons; the clusters are denoted Cluster_i, i = 1, 2, ..., k. Neurons within the same cluster are connected to one another, while neurons in different clusters are not connected.
2.3. The connection matrix W between the input layer and the recursive layer M is determined. Each input neuron is connected to each recurrent neuron with probability p = 0.1, i.e. 90% of the entries of W are 0; the non-zero entries are drawn from a standard Gaussian distribution (mean 0, variance 1).
2.4. Calculate the activity of each neuron in every neuron cluster. Each neuron in a cluster determines whether it fires according to its input value: the neuron with the largest input value in the cluster is activated, and the other neurons in the cluster remain at rest. A neuron in the activated state has an output value of 1, and a neuron in the resting state has an output value of 0. The input value Y of the neurons in the recursive layer M is (reference numeral 4 in FIG. 2):
Y = WI    (1)
3. Image features are mapped to image classes through reward-signal learning, forming a mature classifier. The steps are as follows:
3.1. Define the output layer that represents the categories. The images have 10 different classes in total (the 10 digits from 0 to 9), so the output layer has 10 neurons, whose output is denoted Z = [z1, z2, ..., z10]^T. The recursive layer M is connected to the output layer by a matrix H, each element of which is randomly assigned according to a uniform distribution on [0, 1]. The output is then:
Z = HY    (2)
Only one neuron in the output layer is active, and that neuron corresponds to a specific class. To compute the output value of each output-layer neuron, each element of the output-layer column vector Z is first transformed as in equation (3), yielding a column vector P = [p1, p2, ..., pl]^T; the index of the maximum value of P is then taken, K = argmax(P), where K is the category number of the input image.
3.2. Modulate the connection matrix H between the recursive layer M and the output layer with the reward signal, thereby learning the mapping from features to classes. The steps are as follows:
3.2.1. Calculate the reward value R from the category label T (with l classes, T ∈ [1, l]) and the network's prediction K, namely:
3.2.2. Adjust the weights of the connection matrix H between the output layer and the recursive layer according to the column vector P computed in step 3.1 (reference numeral 5 in FIG. 2):
Ht+1[K,:] = Ht[K,:] + η*(R - P)*M^T    (5)
where η is the learning rate, M^T is the transpose of the recursive-layer column vector, and t is the iteration number. Ht[K,:] denotes the K-th row vector of the connection matrix H between the recursive layer and the output layer at iteration t.
3.2.3. Normalize the connection matrix H: if an element of the matrix is greater than the threshold c, the connection is considered to exist and is set to 1; otherwise the connection is considered absent and is set to 0. This yields the updated connection matrix H:
3.2.4. Repeat until the number of iterations reaches the preset target, then store the connection matrix of the network (reference numeral 6 in FIG. 2).
4. Classification of images with the mature classifier.
4.1. Apply the same preprocessing as in step 1 to the images to be classified, forming the input vectors.
4.2. Extract the features of the input vector as in equation (1). Feature extraction with the recursive cluster neural network not only maps the features from a low-dimensional to a high-dimensional space, but also makes the features sparse, owing to the winner-take-all activity computation within the recursive network.
4.3. Compute the output value of the feature vector according to formula (2), using in formula (2) the connection matrix H stored after the mature classifier was formed in step 3.2.3.
4.4. The classification result of the image is obtained according to formula (3); the test accuracy on the MNIST data set is 98.3%.
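Tying the sketches above together, a hypothetical end-to-end run of the embodiment might look as follows. The file paths, the tiny data set and the reduced layer sizes are all placeholders (the embodiment itself uses n = 100000, k = 10000 and the full MNIST set), and the function names refer to the illustrative sketches above, not to anything in the patent.

```python
import numpy as np

# Hypothetical file layout produced by step 1.2.1; paths are placeholders.
train_paths, train_labels = ["mnist/train/3/000001.png"], [3]

n, k, l = 1000, 100, 10                  # reduced sizes for a quick test
I0 = preprocess(train_paths[0])          # step 1: input feature vector
W = build_input_weights(I0.size, n)      # step 2.3: sparse input weights
train_M = [cluster_winner_take_all(W, preprocess(p), k) for p in train_paths]
H = train_readout(train_M, train_labels, l, n)   # steps 3.1-3.2.4

# Step 4: classify a new image with the mature classifier H.
M = cluster_winner_take_all(W, preprocess("mnist/test/7/000001.png"), k)
K, _ = predict(H, M)                     # steps 4.2-4.4
print("predicted digit:", K)
```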

Claims (2)

1. An image classification method based on a cluster recurrent neural network specifically comprises the following steps:
step 1, preprocessing an image to be recognized to form an input vector;
step 2, image feature extraction based on a neuron-cluster recurrent neural network: constructing a high-dimensional recursive layer with neuron clusters, and completing feature extraction within the neuron clusters in a winner-take-all manner; the method comprises the following steps:
2.1. taking the input vector obtained by preprocessing the image to be recognized as the input of the network, denoted as a one-dimensional column vector I = [i1, i2, ..., im]^T corresponding to m input neurons;
2.2. constructing a recursive layer M containing n neurons and forming neuron clusters; randomly dividing all neurons into k clusters of equal size, i.e. each cluster contains n/k neurons, the clusters being denoted Cluster_i, i = 1, 2, ..., k; neurons within the same cluster are connected to one another, while neurons in different clusters are not connected;
2.3. determining a connection matrix W between the input layer and the recursive layer M; each input neuron is connected to each recurrent neuron with probability p = 0.1, i.e. 90% of the entries of W are 0, and the non-zero entries are randomly drawn from a standard Gaussian distribution (mean 0, variance 1);
2.4 calculating the activity of each neuron in every neuron cluster; each neuron in a cluster determines whether it fires according to its input value: the neuron with the largest input value in the cluster is activated, while the other neurons in the cluster remain at rest; a neuron in the activated state has an output value of 1, and a neuron in the resting state has an output value of 0; the input value Y of the neurons in the recursive layer M is:
Y = WI    (1)
step 3, mapping the image features to image categories based on reward-signal learning, thereby forming a mature classifier; the method comprises the following steps:
3.1 defining an output layer representing the categories; if the processed images have l different classes in total, the output layer has l neurons, whose output is denoted Z = [z1, z2, ..., zl]^T; a connection matrix H connects the recursive layer M to the output layer, each element of the matrix being randomly assigned according to a uniform distribution on [0, 1]; the output is then:
Z = HY    (2)
only one neuron in the output layer is active, and that neuron corresponds to a specific class;
in order to compute the output value of each output-layer neuron, each element of the output-layer column vector Z is first transformed as in equation (3), yielding a column vector P = [p1, p2, ..., pl]^T; the index of the maximum value of P is then taken, K = argmax(P), where K is the category number of the input image;
3.2 modulating the connection matrix H between the recursive layer M and the output layer with the reward signal, thereby learning the mapping from features to classes, comprising the following steps:
3.2.1. calculating the reward value R from the category label T (with l classes, T ∈ [1, l]) and the network's prediction K, namely:
3.2.2. adjusting the weights of the connection matrix H between the output layer and the recursive layer according to the column vector P computed in step 3.1:
Ht+1[K,:] = Ht[K,:] + η*(R - P)*M^T    (5)
wherein η is the learning rate, M^T is the transpose of the recursive-layer column vector, and t is the iteration number; Ht[K,:] denotes the K-th row vector of the connection matrix H between the recursive layer and the output layer at iteration t;
3.2.3. normalizing the connection matrix H: if an element of the matrix is greater than the threshold c, the connection is considered to exist and is set to 1; otherwise the connection is considered absent and is set to 0; this yields the updated connection matrix H:
3.2.4, repeating until the number of iterations reaches the preset target, then storing the connection matrix of the network to form the mature classifier;
Step 4, classifying the images based on a mature classifier;
4.1, applying the same preprocessing as in step 1 to the images to be classified and forming the input vectors;
4.2 extracting the features of the input vector according to formula (1); feature extraction with the recursive cluster neural network not only completes the mapping of the features from low to high dimensionality, but also makes the features sparse, owing to the winner-take-all activity computation within the recursive network;
4.3 calculating the output value of the feature vector according to formula (2), wherein the connection matrix H stored after the mature classifier was formed in step 3.2.3 is used in formula (2);
4.4 obtaining the classification result of the image according to the formula (3).
2. The cluster-recurrent-neural-network-based image classification method of claim 1, wherein the winning neuron in each cluster of the recurrent neural network is determined according to its activity and probability.
CN201910638362.6A 2019-07-16 2019-07-16 Image classification method based on cluster recurrent neural network Active CN110543888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910638362.6A CN110543888B (en) 2019-07-16 2019-07-16 Image classification method based on cluster recurrent neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910638362.6A CN110543888B (en) 2019-07-16 2019-07-16 Image classification method based on cluster recurrent neural network

Publications (2)

Publication Number Publication Date
CN110543888A true CN110543888A (en) 2019-12-06
CN110543888B CN110543888B (en) 2020-12-25

Family

ID=68709625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910638362.6A Active CN110543888B (en) 2019-07-16 2019-07-16 Image classification method based on cluster recurrent neural network

Country Status (1)

Country Link
CN (1) CN110543888B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012113A (en) * 2021-03-01 2021-06-22 和远智能科技股份有限公司 Automatic detection method for bolt looseness of high-speed rail contact network power supply equipment
CN116797851A (en) * 2023-07-28 2023-09-22 中国科学院自动化研究所 Brain-like continuous learning method of image classification model, image classification method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060132525A1 (en) * 2004-05-27 2006-06-22 Silverbrook Research Pty Ltd Printer controller for at least partially compensating for erroneous rotational displacement
CN102262198A (en) * 2011-04-20 2011-11-30 哈尔滨工业大学 Method for diagnosing faults of analog circuit based on synchronous optimization of echo state network
US20120011089A1 (en) * 2010-07-08 2012-01-12 Qualcomm Incorporated Methods and systems for neural processor training by encouragement of correct output
CN107122375A (en) * 2016-12-12 2017-09-01 南京理工大学 The recognition methods of image subject based on characteristics of image
CN108629401A (en) * 2018-04-28 2018-10-09 河海大学 Character level language model prediction method based on local sensing recurrent neural network
CN109118014A (en) * 2018-08-30 2019-01-01 浙江工业大学 A kind of traffic flow speed prediction technique based on time recurrent neural network
CN109240280A (en) * 2018-07-05 2019-01-18 上海交通大学 Anchoring auxiliary power positioning system control method based on intensified learning
CN109784196A (en) * 2018-12-20 2019-05-21 哈尔滨工业大学深圳研究生院 Visual information discrimination method, apparatus, device and storage medium
CN109829541A (en) * 2019-01-18 2019-05-31 上海交通大学 Deep neural network incremental training method and system based on learning automaton

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060132525A1 (en) * 2004-05-27 2006-06-22 Silverbrook Research Pty Ltd Printer controller for at least partially compensating for erroneous rotational displacement
US20120011089A1 (en) * 2010-07-08 2012-01-12 Qualcomm Incorporated Methods and systems for neural processor training by encouragement of correct output
CN102262198A (en) * 2011-04-20 2011-11-30 哈尔滨工业大学 Method for diagnosing faults of analog circuit based on synchronous optimization of echo state network
CN107122375A (en) * 2016-12-12 2017-09-01 南京理工大学 The recognition methods of image subject based on characteristics of image
CN108629401A (en) * 2018-04-28 2018-10-09 河海大学 Character level language model prediction method based on local sensing recurrent neural network
CN109240280A (en) * 2018-07-05 2019-01-18 上海交通大学 Anchoring auxiliary power positioning system control method based on intensified learning
CN109118014A (en) * 2018-08-30 2019-01-01 浙江工业大学 A kind of traffic flow speed prediction technique based on time recurrent neural network
CN109784196A (en) * 2018-12-20 2019-05-21 哈尔滨工业大学深圳研究生院 Visual information discrimination method, apparatus, device and storage medium
CN109829541A (en) * 2019-01-18 2019-05-31 上海交通大学 Deep neural network incremental training method and system based on learning automaton

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhewei Zhang et al., "A neural network model for the orbitofrontal cortex and task space acquisition during reinforcement learning", Computational Biology *
居琰, "Research on handwritten Chinese character recognition based on multi-level information fusion", China Doctoral Dissertations Full-text Database (Information Science and Technology) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012113A (en) * 2021-03-01 2021-06-22 和远智能科技股份有限公司 Automatic detection method for bolt looseness of high-speed rail contact network power supply equipment
CN113012113B (en) * 2021-03-01 2023-04-07 和远智能科技股份有限公司 Automatic detection method for bolt looseness of high-speed rail contact network power supply equipment
CN116797851A (en) * 2023-07-28 2023-09-22 中国科学院自动化研究所 Brain-like continuous learning method of image classification model, image classification method and device
CN116797851B (en) * 2023-07-28 2024-02-13 中国科学院自动化研究所 Brain-like continuous learning method of image classification model, image classification method and device

Also Published As

Publication number Publication date
CN110543888B (en) 2020-12-25

Similar Documents

Publication Publication Date Title
Jogin et al. Feature extraction using convolution neural networks (CNN) and deep learning
Guo et al. Simple convolutional neural network on image classification
Schmidhuber et al. Multi-column deep neural networks for image classification
CN108304826A (en) Facial expression recognizing method based on convolutional neural networks
CN110309856A (en) Image classification method, the training method of neural network and device
CN112488205B (en) Neural network image classification and identification method based on optimized KPCA algorithm
Meftah et al. Novel approach using echo state networks for microscopic cellular image segmentation
EP2724297A1 (en) Method and apparatus for a local competitive learning rule that leads to sparse connectivity
Fu et al. An ensemble unsupervised spiking neural network for objective recognition
CN110674774A (en) Improved deep learning facial expression recognition method and system
Malinowski et al. Learning smooth pooling regions for visual recognition
Lagani et al. Comparing the performance of Hebbian against backpropagation learning using convolutional neural networks
Alom et al. Object recognition using cellular simultaneous recurrent networks and convolutional neural network
CN110543888B (en) Image classification method based on cluster recurrent neural network
Da et al. Brain CT image classification with deep neural networks
CN110188621B (en) Three-dimensional facial expression recognition method based on SSF-IL-CNN
Lagani et al. Training convolutional neural networks with competitive hebbian learning approaches
Chauhan et al. Empirical study on convergence of capsule networks with various hyperparameters
Shi et al. Sparse CapsNet with explicit regularizer
Ahmed et al. Branchconnect: Image categorization with learned branch connections
Dong et al. GrCAN: gradient boost convolutional autoencoder with neural decision forest
Wadhwa et al. Learning sparse, distributed representations using the hebbian principle
Marini Artificial neural networks
Dolgikh Sparsity Constraint in Unsupervised Concept Learning.
Hu et al. Tree species identification based on the fusion of multiple deep learning models transfer learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant