CN105631476A - Identification method for matrix variable RBM - Google Patents

Identification method for matrix variable RBM

Info

Publication number
CN105631476A
CN105631476A
Authority
CN
China
Prior art keywords
matrix
training
rbm
sample
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510994184.2A
Other languages
Chinese (zh)
Other versions
CN105631476B (en)
Inventor
齐光磊 (Qi Guanglei)
孙艳丰 (Sun Yanfeng)
胡永利 (Hu Yongli)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201510994184.2A
Publication of CN105631476A
Application granted
Publication of CN105631476B
Legal status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The invention discloses a recognition method based on a matrix-variate RBM. The method greatly reduces the computational complexity of training and inference, preserves the spatial information of 2D matrix data during training and testing, and achieves good results in reconstruction, so it can be applied to more complex data structures. The method comprises the following steps: (1) training stage: perform sample training according to the matrix-variate RBM of formula (4), where X ∈ {0,1}^(I×J) is the binary visible-layer matrix variable, Y ∈ {0,1}^(K×L) is the binary hidden-layer matrix variable, Θ denotes all the model parameters U, V, B and C, and the normalization constant Z(Θ) sums exp{−E(X, Y; Θ)} over the binary value spaces of X and Y; (2) classification stage: vectorize the hidden-layer matrix variable, train with the K-NN method, and classify test images according to the minimum residual.

Description

A recognition method based on a matrix-variate RBM
Technical field
The invention belongs to the technical field of pattern recognition, and specifically relates to a recognition method based on a matrix-variate restricted Boltzmann machine (RBM).
Background technology
The Boltzmann machine (BM) is an important probabilistic neural network, proposed by Hinton and Sejnowski in 1985. However, because the traditional Boltzmann machine places no constraints on the connections between its variable units, it cannot be applied effectively in machine learning. To obtain a model that can be applied in practice, Hinton proposed a model structure called the restricted Boltzmann machine, in which connections exist only between visible-layer units and hidden-layer units.
With the connections restricted to those between the hidden layer and the visible layer, the RBM (Restricted Boltzmann Machine) can be viewed as a probabilistic model over binary variables. In recent years, RBMs have been widely applied in pattern recognition and machine learning owing to their powerful feature-extraction and representation capabilities.
Given training data, the goal of training an RBM is to learn the weights between the visible layer and the hidden layer so that the probability distribution represented by the RBM fits all the training samples as well as possible. A trained RBM model can provide an effective representation of input data according to the probability distribution learned from the training data.
The classical RBM model mainly describes input data or variables in vector form. However, data arising in modern science and technology more commonly have richer structure. For example, a digital image is a 2D matrix, and the matrix contains spatial information. To apply the classical RBM to data such as 2D images, the traditional approach is to vectorize the 2D data. Unfortunately, such processing not only destroys the internal structure of the image, losing the interaction information hidden in that structure, but also, because of the full connection between the visible and hidden layers, greatly increases the number of model parameters.
Summary of the invention
The technical problem addressed by the present invention is: to overcome the deficiencies of the prior art by providing a recognition method based on a matrix-variate RBM, which greatly reduces the computational complexity of training and inference, preserves the spatial information of 2D matrix data during training and testing, achieves good results in reconstruction, and can be applied to more complex data structures.
The technical solution of the present invention is: a recognition method based on a matrix-variate RBM, comprising the following steps:
(1) Training stage: perform sample training according to the matrix-variate RBM of formula (4):
p(X, Y; Θ) = (1/Z(Θ)) exp{−E(X, Y; Θ)}   (4)
where X ∈ {0,1}^(I×J) is the binary visible-layer matrix variable, Y ∈ {0,1}^(K×L) is the binary hidden-layer matrix variable, Θ denotes all model parameters U, V, B and C, and the normalization constant Z(Θ) is defined as
Z(Θ) = Σ_{X∈𝒳} Σ_{Y∈𝒴} exp{−E(X, Y; Θ)}   (5)
where 𝒳 and 𝒴 denote the binary value spaces of X and Y; U ∈ R^(K×I) and V ∈ R^(L×J) are the model weight matrices, and B ∈ R^(I×J) and C ∈ R^(K×L) are the bias matrices corresponding to the visible and hidden layers;
(2) Classification stage: vectorize the hidden-layer matrix variable, train with the K-NN method, and classify test images according to the minimum residual.
Compared with classical RBMs, the present invention has fewer model parameters to learn, so the computational complexity of training and inference is markedly reduced; both the visible and hidden layers are in matrix form, so the spatial information of 2D matrix data is preserved during training and testing while good reconstruction results are obtained; and the present invention extends easily to tensor data of any order, so it can be applied to more complex data structures.
Brief description of the drawings
Fig. 1 shows the classical RBM model.
Fig. 2 shows the RBM model of the present invention.
Fig. 3 shows the classification error rate when the number of iterations is fixed and the number of training samples varies, and when the number of training samples is fixed and the number of iterations varies.
Fig. 4 shows the classification error rates of different methods for different numbers of training samples.
Detailed description of the embodiments
The recognition method based on a matrix-variate RBM comprises the following steps:
(1) Training stage: perform sample training according to the matrix-variate RBM of formula (4):
p(X, Y; Θ) = (1/Z(Θ)) exp{−E(X, Y; Θ)}   (4)
where X ∈ {0,1}^(I×J) is the binary visible-layer matrix variable, Y ∈ {0,1}^(K×L) is the binary hidden-layer matrix variable, Θ denotes all model parameters U, V, B and C, and the normalization constant Z(Θ) is defined as
Z(Θ) = Σ_{X∈𝒳} Σ_{Y∈𝒴} exp{−E(X, Y; Θ)}   (5)
where 𝒳 and 𝒴 denote the binary value spaces of X and Y; U ∈ R^(K×I) and V ∈ R^(L×J) are the model weight matrices, and B ∈ R^(I×J) and C ∈ R^(K×L) are the bias matrices corresponding to the visible and hidden layers;
(2) Classification stage: vectorize the hidden-layer matrix variable, train with the K-NN method, and classify test images according to the minimum residual.
Compared with classical RBMs, the present invention has fewer model parameters to learn, so the computational complexity of training and inference is markedly reduced; both the visible and hidden layers are in matrix form, so the spatial information of 2D matrix data is preserved during training and testing while good reconstruction results are obtained; and the present invention extends easily to tensor data of any order, so it can be applied to more complex data structures.
Preferably, step (1) comprises the following sub-steps:
(1.1) define the matrix-valued training sample set D = {X_n}_{n=1}^N, the maximum number of iterations T, the learning rate, the weight regularization term, the number of training samples per batch, and the number of CD steps K;
(1.2) randomly initialize U and V; set B = C = 0 and the stochastic gradients ΔU = ΔV = ΔB = ΔC = 0;
(1.3) for iteration t = 1 → T do;
(1.4) randomly partition D into M batches {D_1, …, D_M}, each of size b;
(1.5) for batch m = 1 → M do;
(1.6) perform Gibbs sampling on all data under the current model parameters;
(1.7) for k = 0 → K−1 do;
(1.8) sample Y^(k) according to formula (9):
P(Y = 1 | X; Θ) = σ(U X V^T + C)   (9);
(1.9) sample X^(k) according to formula (8):
P(X = 1 | Y; Θ) = σ(U^T Y V + B)   (8);
(1.10) update the gradients according to formula (18);
(1.11) update each model parameter θ ∈ Θ according to θ = θ + η·Δθ;
(1.12) end (a sketch of the whole procedure follows below).
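For illustration only, the sub-steps above can be rendered as the following minimal NumPy sketch. It is not the patent's reference implementation: the function and variable names are ours, the shape conventions (U ∈ R^(K×I), V ∈ R^(L×J)) follow the formulas above, and the weight-regularization term of sub-step (1.1) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mvrbm(D, K_h, L_h, T=10000, eta=0.05, b=100, K=1):
    """CD-K training sketch. D: binary samples of shape (N, I, J)."""
    N, I, J = D.shape
    # (1.2) random initialization of U, V; zero biases
    U = 0.01 * rng.standard_normal((K_h, I))
    V = 0.01 * rng.standard_normal((L_h, J))
    B = np.zeros((I, J))
    C = np.zeros((K_h, L_h))
    for t in range(T):                                                    # (1.3)
        for batch in np.array_split(rng.permutation(N), max(N // b, 1)):  # (1.4)-(1.5)
            dU = dV = dB = dC = 0.0
            for X0 in D[batch]:                           # (1.6) Gibbs sampling
                Xk = X0
                for k in range(K):                        # (1.7)
                    Py = sigmoid(U @ Xk @ V.T + C)        # (1.8), formula (9)
                    Yk = (rng.random(Py.shape) < Py).astype(float)
                    Px = sigmoid(U.T @ Yk @ V + B)        # (1.9), formula (8)
                    Xk = (rng.random(Px.shape) < Px).astype(float)
                Y0m = sigmoid(U @ X0 @ V.T + C)           # data-side hidden mean
                Ykm = sigmoid(U @ Xk @ V.T + C)           # model-side hidden mean
                dU += Y0m @ V @ X0.T - Ykm @ V @ Xk.T     # (1.10) gradient update
                dV += Y0m.T @ U @ X0 - Ykm.T @ U @ Xk
                dB += X0 - Xk
                dC += Y0m - Ykm
            for P, dP in ((U, dU), (V, dV), (B, dB), (C, dC)):
                P += eta * dP / len(batch)                # (1.11)
    return U, V, B, C
```

A call such as train_mvrbm(D, K_h=25, L_h=25, T=3000) would match the hidden-layer size and iteration count used in the experiments below.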
Preferably, the maximum number of iterations T is 10000, the learning rate is 0.05, the weight regularization term is 0.01, the number of training samples per batch is 100, and the CD algorithm uses K = 1 step.
The present invention is now described in greater detail.
1 The model
The classical RBM [8,13] is a binary-valued vector model whose input and hidden layer are both in vector form. The model is shown in Fig. 1: the visible-layer units (cubes) and the hidden-layer units (cylinders) are fully connected.
The energy function of the RBM is:
E(x, y; Θ) = −x^T W y − b^T x − c^T y   (1)
where x and y are the binary visible-layer and hidden-layer vectors, b and c are the corresponding biases, and W represents the connection weights between the visible and hidden layers of the neural network. Θ = {b, c, W} are the model parameters.
To introduce the MVRBM of the present invention, the following notation is defined. Let X ∈ {0,1}^(I×J) be the binary visible-layer matrix variable and Y ∈ {0,1}^(K×L) the binary hidden-layer matrix variable, and assume the independent random variables x_ij and y_kl take values in {0, 1}. Let W be a fourth-order tensor parameter, and let B ∈ R^(I×J) and C ∈ R^(K×L) be the bias matrices. Define the following energy function:
E(X, Y; Θ) = −Σ_{i=1}^I Σ_{j=1}^J Σ_{k=1}^K Σ_{l=1}^L x_ij y_kl w_ijkl − Σ_{i=1}^I Σ_{j=1}^J x_ij b_ij − Σ_{k=1}^K Σ_{l=1}^L y_kl c_kl   (2)
where Θ = {W, B, C} are the model parameters. There are I·J·K·L + I·J + K·L free parameters in Θ in total. Even when I, J, K and L are small, this is a very large number, so a large number of training samples and a very long training time would be needed. To reduce the number of free parameters and save computational complexity, it is assumed that the connection weights between hidden-layer and visible-layer units have the following relation: w_ijkl = u_ki v_lj. By defining two new matrices U = [u_ki] ∈ R^(K×I) and V = [v_lj] ∈ R^(L×J), the energy function (2) can be rewritten in the following form:
E(X, Y) = −tr(U^T Y V X^T) − tr(X^T B) − tr(Y^T C)   (3)
The matrices U and V jointly define the connection weights between the input matrix X and the hidden matrix Y; in this way, the number of free parameters in Θ is reduced from I·J·K·L + I·J + K·L in formula (2) to I·K + L·J + I·J + K·L in formula (3). For example, for a 28×28 input (I = J = 28) with a 25×25 hidden layer (K = L = 25), this is a reduction from 491,409 to 2,809 free parameters.
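As an illustration of formula (3) and of this parameter reduction, a short NumPy sketch follows; the function name and the shapes are our assumptions, matching the dimensions stated above.

```python
import numpy as np

def mvrbm_energy(X, Y, U, V, B, C):
    # E(X, Y) = -tr(U^T Y V X^T) - tr(X^T B) - tr(Y^T C), formula (3)
    return (-np.trace(U.T @ Y @ V @ X.T)
            - np.trace(X.T @ B)
            - np.trace(Y.T @ C))

# Free-parameter counts for I = J = 28 and K = L = 25:
I = J = 28
K = L = 25
print(I*K + L*J + I*J + K*L)  # 2809: the factorized model of formula (3)
print(I*J*K*L + I*J + K*L)    # 491409: the unfactorized model of formula (2)
```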
Based on formula (3), the following distribution is defined:
p(X, Y; Θ) = (1/Z(Θ)) exp{−E(X, Y; Θ)}   (4)
where Θ denotes all model parameters U, V, B and C, and the normalization constant Z(Θ) is defined as
Z(Θ) = Σ_{X∈𝒳} Σ_{Y∈𝒴} exp{−E(X, Y; Θ)}   (5)
where 𝒳 and 𝒴 denote the binary value spaces of X and Y.
The probability model in formula (4) is the matrix-variate RBM (MVRBM). The model is shown in Fig. 2.
To facilitate the explanation of the MVRBM learning algorithm, the following lemma gives the conditional probability densities of the visible and hidden units.
Lemma 1. Let the MVRBM model be defined by formulas (3) and (4). The conditional probability density of each visible-layer unit is
p(x_ij = 1 | Y; Θ) = σ(b_ij + Σ_{k=1}^K Σ_{l=1}^L y_kl u_ki v_lj)   (6)
and the conditional probability density of each hidden-layer unit is
p(y_kl = 1 | X; Θ) = σ(c_kl + Σ_{i=1}^I Σ_{j=1}^J x_ij u_ki v_lj)   (7)
where σ is the sigmoid function σ(x) = 1/(1 + e^(−x)).
In matrix notation, the two conditional probabilities can be written as:
P(X = 1 | Y; Θ) = σ(U^T Y V + B)   (8)
P(Y = 1 | X; Θ) = σ(U X V^T + C)   (9)
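A minimal sketch of the two matrix-form conditionals as sampling routines follows; the helper names and the Bernoulli sampling via a uniform draw are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def sample_hidden(X, U, V, C):
    P = sigmoid(U @ X @ V.T + C)          # formula (9): P(Y = 1 | X)
    return (rng.random(P.shape) < P).astype(float), P

def sample_visible(Y, U, V, B):
    P = sigmoid(U.T @ Y @ V + B)          # formula (8): P(X = 1 | Y)
    return (rng.random(P.shape) < P).astype(float), P
```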
2 The maximum likelihood function and the CD algorithm for the MVRBM
For a given sample set D = {X_n}_{n=1}^N, under the joint distribution of formula (4) the log-likelihood function of D is defined as L(Θ) = Σ_{n=1}^N log p(X_n; Θ).
For any element θ of Θ, it can be shown that the gradient takes the form (10): ∂L/∂θ = E_data[−∂E/∂θ] − E_model[−∂E/∂θ].
The first term on the right-hand side of (10) is called the data expectation term, and the second the model expectation term.
The main difficulty in computing the gradient of the likelihood is computing the model expectation term, because it sums over all states of the visible and hidden layers. However, the CD algorithm achieves an approximation by running a short Markov chain. The main idea of the CD algorithm is to use a sample from the training set as the initial value X^(0) of a Gibbs chain; the CD-k algorithm then uses the sample X^(k) obtained at the k-th step as the approximation (11) of the model expectation term.
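The CD-k chain just described can be sketched with the sampling helpers from the previous listing; this composition is our reading of the text.

```python
def cd_chain(X0, U, V, B, C, k=1):
    # Start the Gibbs chain at a training sample X^(0) and alternate the
    # conditionals (9) and (8) for k steps; X^(k) approximates the model term.
    Xk = X0
    for _ in range(k):
        Yk, _ = sample_hidden(Xk, U, V, C)
        Xk, _ = sample_visible(Yk, U, V, B)
    return Xk
```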
Substituting (11) into (10), we obtain the CD-based approximation (12) to the gradient.
Of the four classes of MVRBM parameters, we take the calculation of the gradient with respect to U as an example; the other parameters are computed analogously. From formula (3), we obtain
∂E(X, Y; Θ)/∂U = −Y V X^T
Thus formula (12) becomes an expression (13) involving the hidden variable Y.
For the binary variables Y (and Y'), since E[Y | X; Θ] = P(Y = 1 | X; Θ),
formula (13) yields the gradient update ΔU = Σ_{n=1}^N [P(Y = 1 | X_n; Θ) V X_n^T − P(Y = 1 | X_n^(k); Θ) V (X_n^(k))^T]   (14).
By the same reasoning, the updates for the other parameters V, B and C can be obtained (formulas (15)–(17)).
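Taking derivatives of formula (3) with respect to all four parameters (∂E/∂U = −Y V X^T, ∂E/∂V = −Y^T U X, ∂E/∂B = −X, ∂E/∂C = −Y) gives the per-sample CD-k gradient estimates sketched below; the pairing with formula numbers (14)–(17) and the use of conditional means for Y are our reading of the text, not the patent's exact listing.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cd_gradients(X0, Xk, U, V, C):
    # Per-sample CD-k gradient estimates, with the binary Y replaced by its
    # conditional mean E[Y | X] = P(Y = 1 | X), as in the text.
    Y0 = sigmoid(U @ X0 @ V.T + C)        # E[Y | X^(0)], data side
    Yk = sigmoid(U @ Xk @ V.T + C)        # E[Y | X^(k)], model side
    dU = Y0 @ V @ X0.T - Yk @ V @ Xk.T    # from dE/dU = -Y V X^T
    dV = Y0.T @ U @ X0 - Yk.T @ U @ Xk    # from dE/dV = -Y^T U X
    dB = X0 - Xk                          # from dE/dB = -X
    dC = Y0 - Yk                          # from dE/dC = -Y
    return dU, dV, dB, dC
```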
To verify the effectiveness of the proposed MVRBM algorithm, denoising, reconstruction and recognition experiments are carried out on the MNIST database. These experiments show that the MVRBM has better feature-extraction and reconstruction capability than the classical RBM.
The MNIST handwritten digit database contains 70,000 images of handwritten digits; conventionally, 60,000 are used as training samples and 10,000 as test samples. Each image is a 28×28-pixel grayscale image, and the database can be downloaded from http://yann.lecun.com/exdb/mnist/.
1.1 Denoising and reconstruction
In the first experiment, we show that a trained MVRBM can be used for denoising data and for dimensionality reduction with reconstruction.
First, we show that the MVRBM model can learn information from the data. For this purpose, 5000 images of the digit 9 are randomly selected from the training samples, the hidden-layer matrix variable is set to size 15×15, and the parameter settings of Algorithm 1 are used. The training process runs for T = 3000 iterations in total. A denoising experiment is also carried out, adding 10% salt-and-pepper noise at random to the test images of the digit 9; the denoising results are very good.
In another experiment, an MVRBM model is trained on 20,000 training samples, again for T = 3000 iterations, but with the hidden-layer size set to 25×25. For the binary-valued MVRBM, the trained model parameters U and V can be treated as filters, or the mapping X ↦ U X V^T can be used as a feature extractor. The filters learned by the model are very close to the Haar filters used in image processing. The experiment also tests the dimensionality-reduction and reconstruction capability of the trained MVRBM: some original samples are shown, with the images reconstructed from their low-dimensional representations below them. The average reconstruction error is 10.8488.
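The reconstruction test can be sketched as follows: map an image to the hidden mean via formula (9) and back via formula (8). Using conditional means rather than binary samples, and the Frobenius norm as the per-image error, are our assumptions; the patent does not specify its error measure.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def reconstruct(X, U, V, B, C):
    Y = sigmoid(U @ X @ V.T + C)       # low-dimensional hidden feature, K x L
    return sigmoid(U.T @ Y @ V + B)    # reconstructed image, I x J

def avg_reconstruction_error(images, U, V, B, C):
    errs = [np.linalg.norm(X - reconstruct(X, U, V, B, C)) for X in images]
    return float(np.mean(errs))
```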
The experiments show that the model has good denoising, dimensionality-reduction and reconstruction capability and can effectively learn features of the data.
2 Handwritten digit recognition
This experiment assesses whether the MVRBM can serve as a feature extractor. In fact, the hidden layer can be regarded as a new feature representation of the visible layer, and these new features can be used to train a classifier. As with most classifiers, we flatten the MVRBM hidden-layer matrix features into vectors and then classify with a k-nearest-neighbor classifier (K = 1), as sketched below.
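A minimal sketch of this classification stage follows. The patent specifies only vectorized hidden features and K-NN with K = 1; the use of scikit-learn's KNeighborsClassifier and of conditional means as features are our assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def extract_features(images, U, V, C):
    # Flatten each hidden-layer conditional mean (formula (9)) into a vector.
    return np.stack([sigmoid(U @ X @ V.T + C).ravel() for X in images])

def knn_classify(train_imgs, train_labels, test_imgs, U, V, C):
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(extract_features(train_imgs, U, V, C), train_labels)
    return knn.predict(extract_features(test_imgs, U, V, C))
```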
First, the hidden layer is fixed at 25×25 units and the number of iterations at T = 2000, and tests are run with different numbers of training samples, from 100 to 20000. Fig. 3(a) shows the classification error rate.
The experiments show that the more sufficient the training samples, the better the recognition performance.
In another experiment, 10000 training samples are selected at random and the number of training iterations is varied from 10 to 3000. Fig. 3(b) shows the classification error rate under different numbers of iterations. It can be observed that the MVRBM stabilizes once the number of iterations reaches 70; as the number of iterations increases from 300 to 3000, the classification error rate further decreases from 0.0571 to 0.0520.
Based on these experimental observations, the parameters N = 20000 and T = 3000 are selected for the comparison experiments with other models. In the experiments it can be seen that the accuracy is higher when the MVRBM uses 50000 training samples, with an error rate of only 0.0359; meanwhile, with only 600 training samples an error rate of 0.1387 can already be reached.
Finally, comparisons are made with some of the currently most popular machine learning methods, including the drop-out-based Deep Neural Network (DNN), Deep Belief Networks (DBN), Convolutional Neural Networks (CNN) and the Sparse Autoencoder (SAE). The code for these models can be downloaded from https://github.com/rasmusbergpalm/DeepLearnToolbox, and each model uses its default parameter settings. In Fig. 4(a) and (b) the MVRBM uses T = 3000 iterations. The experiments show the results of the MVRBM and the other methods when the number of training samples is sufficient and when it is insufficient (fewer than 10000), respectively. Because the MVRBM has far fewer parameters than the other models, it is less prone to overfitting.
Handwritten digit recognition with the MVRBM proceeds as follows:
1. Training stage:
Algorithm 1: the CD-K algorithm for the MVRBM
Input: matrix-valued training sample set D = {X_n}_{n=1}^N; maximum number of iterations T (default 10,000); learning rate (default 0.05); weight regularization term (default 0.01); number of training samples per batch (default 100); number of CD steps K (default 1)
Output: model parameters Θ = {U, V, B, C}
1. Initialize: randomly initialize U and V; set B = C = 0 and the stochastic gradients ΔU = ΔV = ΔB = ΔC = 0
2. for iteration t = 1 → T do
3. randomly partition D into M batches {D_1, …, D_M}, each of size b
4. for batch m = 1 → M do
5. perform Gibbs sampling on all data under the current model parameters
6. for k = 0 → K−1 do
7. sample Y^(k) according to formula (9)
8. sample X^(k) according to formula (8)
9. end for
10. update the gradients according to formulas (14)–(17)
11. update each model parameter θ ∈ Θ according to θ = θ + η·Δθ
12. end for
13. end for
2. Classification stage: vectorize the hidden-layer matrix variable, train with the K-NN method, and classify test images according to the minimum residual.
The above is only a preferred embodiment of the present invention and does not limit the present invention in any form; any simple modification, equivalent variation or adaptation of the above embodiment made in accordance with the technical spirit of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (3)

1. A recognition method based on a matrix-variate RBM, characterized in that the method comprises the following steps:
(1) Training stage: perform sample training according to the matrix-variate RBM of formula (4):
p(X, Y; Θ) = (1/Z(Θ)) exp{−E(X, Y; Θ)}   (4)
where X ∈ {0,1}^(I×J) is the binary visible-layer matrix variable, Y ∈ {0,1}^(K×L) is the binary hidden-layer matrix variable, Θ denotes all model parameters U, V, B and C, and the normalization constant Z(Θ) is defined as
Z(Θ) = Σ_{X∈𝒳} Σ_{Y∈𝒴} exp{−E(X, Y; Θ)}   (5)
where 𝒳 and 𝒴 denote the binary value spaces of X and Y, U ∈ R^(K×I) and V ∈ R^(L×J) are the model weight matrices, and B ∈ R^(I×J) and C ∈ R^(K×L) are the bias matrices corresponding to the visible and hidden layers;
(2) Classification stage: vectorize the hidden-layer matrix variable, train with the K-NN method, and classify test images according to the minimum residual.
2. The recognition method based on a matrix-variate RBM according to claim 1, characterized in that step (1) comprises the following sub-steps:
(1.1) define the matrix-valued training sample set D = {X_n}_{n=1}^N, the maximum number of iterations T, the learning rate, the weight regularization term, the number of training samples per batch, and the number of CD steps K;
(1.2) randomly initialize U and V; set B = C = 0 and the stochastic gradients ΔU = ΔV = ΔB = ΔC = 0;
(1.3) for iteration t = 1 → T do;
(1.4) randomly partition D into M batches {D_1, …, D_M}, each of size b;
(1.5) for batch m = 1 → M do;
(1.6) perform Gibbs sampling on all data under the current model parameters;
(1.7) for k = 0 → K−1 do;
(1.8) sample Y^(k) according to formula (9):
P(Y = 1 | X; Θ) = σ(U X V^T + C)   (9);
(1.9) sample X^(k) according to formula (8):
P(X = 1 | Y; Θ) = σ(U^T Y V + B)   (8);
(1.10) update the gradients according to formula (18);
(1.11) update each model parameter θ ∈ Θ according to θ = θ + η·Δθ;
(1.12) end.
3. The recognition method based on a matrix-variate RBM according to claim 2, characterized in that the maximum number of iterations T is 10000, the learning rate is 0.05, the weight regularization term is 0.01, the number of training samples per batch is 100, and the CD algorithm uses K = 1 step, where U ∈ R^(K×I) and V ∈ R^(L×J) are the model weight matrices, and B ∈ R^(I×J) and C ∈ R^(K×L) are the bias matrices corresponding to the visible and hidden layers.
CN201510994184.2A 2015-12-25 2015-12-25 Recognition method based on a matrix-variate RBM Active CN105631476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510994184.2A CN105631476B (en) 2015-12-25 2015-12-25 Recognition method based on a matrix-variate RBM


Publications (2)

Publication Number Publication Date
CN105631476A 2016-06-01
CN105631476B CN105631476B (en) 2019-06-21

Family

ID=56046388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510994184.2A Active CN105631476B (en) 2015-12-25 2015-12-25 A kind of recognition methods of matrix variables RBM

Country Status (1)

Country Link
CN (1) CN105631476B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446117A (en) * 2016-09-18 2017-02-22 西安电子科技大学 Text analysis method based on poisson-gamma belief network
CN106886798A (en) * 2017-03-10 2017-06-23 北京工业大学 The image-recognizing method of the limited Boltzmann machine of the Gaussian Profile based on matrix variables

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101814160A (en) * 2010-03-08 2010-08-25 清华大学 RBF neural network modeling method based on feature clustering
US20100318482A1 (en) * 2001-05-07 2010-12-16 Health Discovery Corporation Kernels for Identifying Patterns in Datasets Containing Noise or Transformation Invariances
CN104361393A (en) * 2014-09-06 2015-02-18 华北电力大学 Method for using improved neural network model based on particle swarm optimization for data prediction
CN104880945A (en) * 2015-03-31 2015-09-02 成都市优艾维机器人科技有限公司 Self-adaptive inverse control method for unmanned rotorcraft based on neural networks


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨钊 (Yang Zhao): "Research on visual feature representation and learning for image classification and recognition" (面向图像分类和识别的视觉特征表达与学习的研究), China Doctoral Dissertations Full-text Database, Information Science and Technology Series *


Also Published As

Publication number Publication date
CN105631476B (en) 2019-06-21


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant