CN110084303A - A high-dimensional multi-granularity feature selection method based on CNN and RF - Google Patents

A high-dimensional multi-granularity feature selection method based on CNN and RF

Info

Publication number
CN110084303A
Authority
CN
China
Prior art keywords
feature
data
model
fselcnn
granularity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910347785.2A
Other languages
Chinese (zh)
Other versions
CN110084303B (en
Inventor
刘磊
孙应红
陈圣
侯良文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201910347785.2A priority Critical patent/CN110084303B/en
Publication of CN110084303A publication Critical patent/CN110084303A/en
Application granted granted Critical
Publication of CN110084303B publication Critical patent/CN110084303B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods


Abstract

The present invention relates to a high-dimensional multi-granularity feature selection method based on CNN and RF, and belongs to the technical field of information processing. Starting from high-dimensional multi-granularity feature datasets, the invention combines a deep learning algorithm with a machine learning algorithm to solve the problem of selecting high-dimensional multi-granularity features. First, an FSelCNN model is constructed from the deep learning CNN model; this model converts the original data from multi-granularity to single-granularity form, so that the data can serve as input to a machine learning algorithm. The machine learning algorithm RF is then used to select, from the high-dimensional data, the effective features that influence the practical problem. Starting from the individual-feature level of high-dimensional multi-granularity data, the invention converts each feature from a multi-granularity dimension to a single-granularity dimension, effectively reducing computational complexity; the model has a small number of parameters and can be trained in a short time; it is applicable to all kinds of high-dimensional multi-granularity data, has strong adaptive ability, and achieves good results.

Description

A high-dimensional multi-granularity feature selection method based on CNN and RF
Technical field
The invention belongs to the technical field of information processing and relates to a high-dimensional multi-granularity feature selection method based on CNN and RF.
Background technique
With the explosive growth of data in the Internet era, data features appear in many different forms, and efficient methods are urgently needed to solve the problems these forms bring, so as to provide efficient data support for machine learning models and respond effectively to what the data reveal. Moreover, in practical applications of machine learning, feature engineering plays an irreplaceable role. It is widely held in the machine learning community that the upper bound of a machine learning algorithm's performance is determined by the data and the feature engineering, and that the final model merely approaches this upper bound through linear and nonlinear means.
Therefore, how to carry out feature engineering has become a significant part of machine learning. Feature engineering consists of two parts: feature selection and feature extraction. Various algorithms exist for these tasks, such as SVD (Singular Value Decomposition), PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis) and deep learning algorithms. These algorithms map high-dimensional data features into a low-dimensional feature space and thereby mitigate the curse of dimensionality. PCA is an unsupervised dimensionality-reduction algorithm based on principal component analysis; it does not depend on sample labels and can effectively map unlabeled high-dimensional data features to a low-dimensional space of a given dimension. LDA, by contrast, is a supervised algorithm that uses sample labels to map high-dimensional data features to a low-dimensional feature space, continually minimizing a loss function over the labels in order to select the optimal features.
Although the above algorithms address the curse of dimensionality, they ultimately merge features through linear combinations and extract the most influential combined features, without singling out any specific original feature. This cannot satisfy certain concrete problems, for example selecting 25 positions out of the 42 positions of the human body and assessing the balance ability of the elderly through those 25 positions. For such problems, the traditional PCA and LDA methods cannot meet the demand.
To address the above problems, this patent proposes a feature selection method based on CNN (Convolutional Neural Networks) and random forest RF (Random Forest). The method first uses the properties of CNN to reduce the dimensionality of the original data, and then uses RF to select effective features, thereby identifying the main feature factors that influence the practical problem. This patent tests the validity of the method on an elderly balance-ability dataset.
Summary of the invention
The invention proposes a selection method for high-dimensional multi-granularity features based on CNN and RF: starting from a high-dimensional multi-granularity feature dataset, it combines a deep learning algorithm with a machine learning algorithm to select high-dimensional multi-granularity features. First, an FSelCNN (Feature Select Convolutional Neural Networks) model is constructed from the deep learning CNN model; this model converts the original data from multi-granularity to single-granularity form, so that the data can serve as input to a machine learning algorithm. The machine learning algorithm RF is then used to select, from the high-dimensional data, the effective features that influence the practical problem. An experimental analysis of the data features influencing elderly balance ability illustrates the validity of the method.
To achieve the above object, the present invention adopts the following technical scheme:
A high-dimensional multi-granularity feature selection method based on CNN (Convolutional Neural Networks) and RF (Random Forest), comprising the following steps:
Step 1: data representation
Assume the dataset is denoted D = {X1, X2, …, Xn}, where each data point
Xi = (x1, x2, …, xm) (i = 1, 2, …, n) and each feature xj = (xj1, xj2, …, xjl) (j = 1, 2, …, m).
Each data point X ∈ D can then be expressed as an m × l matrix A whose entry in row j, column k is xjk.
Each row of the matrix A represents one feature of the data point X, and that feature is distributed over l different dimensions; each such feature is referred to herein as a multi-granularity feature.
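As a concrete sketch of this representation, a dataset of n points, each with m features of l granularities, can be held as an n × m × l array; the sizes below are made-up illustration values, not the patent's:

```python
import numpy as np

# Illustrative sizes only (the patent's elderly example uses
# n = 13500, m = 42, l = 3).
n, m, l = 5, 4, 3
rng = np.random.default_rng(0)

# Dataset D: n data points, each one an m x l matrix A whose rows are
# features and whose columns are the l granularities of that feature.
D = rng.random((n, m, l))

A = D[0]             # one data point expressed as matrix A
row_feature = A[1]   # row j of A: a single multi-granularity feature
```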
Step 2: construct the FSelCNN (Feature Select CNN) model based on CNN and use it to reduce the dimensionality of the multi-granularity features
2.1 Data preprocessing
The matrix A corresponding to the data point X is treated as an m × l picture whose pixels are the individual granularities, i.e. the values of each feature in its different dimensions.
2.2 Constructing the function f based on CNN
We seek a function f = f(A), f(A) ∈ Rm, i.e. a function f that maps each feature from its l granularities into a one-dimensional space, so that each feature receives a single attribute value. In deep learning, the convolution operation of a convolutional neural network (CNN) can extract from raw data features that are significantly more effective for machine learning algorithms. The convolution operation of a CNN is therefore used here as the basis for constructing f, i.e. the model FSelCNN, whose schematic diagram is shown in Fig. 2.
In Fig. 2, the FSelCNN model first convolves the matrix A ∈ Rm×l with M kernels of size 1 × l at stride step = 1; this convolution operation is denoted Φ1. Next, a single 1 × 1 kernel further convolves the result over the depth dimension, again with step = 1; this operation is denoted Φ2. Finally a fully connected layer outputs the result. In mathematical notation, the first operation is
Φij = σ( Σ_{k=1}^{l} wik xjk + bi ) (formula 1)
where σ is the activation function and wik, bi are trainable parameters.
Each of the M convolution kernels performs m convolution operations Φ1 and yields an intermediate variable h ∈ Rm×1, which can be written as the column vector
hi = (Φi1, Φi2, …, Φim)T, i = 1, 2, …, M (formula 2)
The 1 × 1 convolution over depth then gives
Φ2j = σ( Σ_{i=1}^{M} ui hij + bj ) (formula 3)
where ui and bj are trainable parameters.
The final constructed function f is therefore
f = (Φ21, Φ22, …, Φ2m) (formula 4)
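The mapping defined by formulas 1 to 4 can be sketched in plain numpy; this is a minimal illustration with random stand-in weights rather than trained parameters, and all names and sizes are assumptions:

```python
import numpy as np

def sigmoid(x):
    # Assumed activation; the patent names sigma without fixing its form.
    return 1.0 / (1.0 + np.exp(-x))

def fsel_cnn_forward(A, W, b, u, c):
    """A: m x l data matrix; W: M x l kernels; b: M biases;
    u: M depth weights; c: scalar bias.  Returns f(A) in R^m."""
    # Phi1: each 1 x l kernel reduces every feature row of A to one
    # value, giving the intermediate variable h of shape (m, M).
    h = sigmoid(A @ W.T + b)
    # Phi2: the 1 x 1 convolution over depth merges the M channels,
    # giving one value per feature, i.e. f(A) = (Phi21, ..., Phi2m).
    return sigmoid(h @ u + c)

m, l, M = 6, 3, 4
rng = np.random.default_rng(1)
f_A = fsel_cnn_forward(rng.random((m, l)), rng.random((M, l)),
                       rng.random(M), rng.random(M), 0.1)
```

Each feature row of A is thus collapsed to a single value, which is the single-granularity form fed to RF in step 3.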
2.3 Training the FSelCNN model to determine f
The entire dataset is first denoted D = {(X1, y1), (X2, y2), …, (Xn, yn)}, where yi (i = 1, 2, …, n) is the class label. The dataset D is then split into a training set and a test set, and the model is trained with a deep learning framework. When the accuracy of the model on the test set satisfies Acc ∈ (α, 1], where α is a constant, the Flatten layer of the network is extracted, which yields the function
f = (Φ21, Φ22, …, Φ2m)
where the accuracy Acc = Nright / N is the number of correctly classified samples divided by the total number of samples.
Step 3: construct a dataset and perform feature selection with RF
3.1 Construct the dataset D' = ((f1, y1), (f2, y2), …, (fn, yn)), where the data fi ∈ Rm (i = 1, 2, …, n) are obtained from formula 4 and yi (i = 1, 2, …, n) is the class label.
3.2 Obtain feature importances by training an RF model
The dataset D' is first split into a training set and a test set, and an RF model is trained on the training set until the accuracy of the model on the test set satisfies Acc ∈ (β, 1], where β is a constant. The trained RF model then outputs m feature pairs (F1, I1), (F2, I2), …, (Fm, Im), where Ii is the importance of the feature Fi.
3.3 Select features by importance
The pairs (F1, I1), (F2, I2), …, (Fm, Im) are sorted by the feature importance Ii in descending order, and the first N required features are selected.
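Assuming scikit-learn's RandomForestClassifier stands in for the RF model, steps 3.1 to 3.3 can be sketched as follows; the data, sizes and names are toy values for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n, m, N = 200, 10, 5
X = rng.random((n, m))                      # D': the f_i vectors from FSelCNN
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)   # toy labels driven by features 3, 7

# 3.2: train RF and read the importance I_i of each feature F_i.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 3.3: sort the (F_i, I_i) pairs by importance, descending, keep the top N.
pairs = sorted(enumerate(rf.feature_importances_),
               key=lambda p: p[1], reverse=True)
selected = [i for i, _ in pairs[:N]]        # indices of the N chosen features
```

With the toy labels above, the two informative features should dominate the importance ranking.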
Step 4: experimental analysis
Beneficial effects
(1) Starting from the individual-feature level of high-dimensional multi-granularity feature data, the present invention converts each feature from a multi-granularity dimension to a single-granularity dimension, effectively reducing computational complexity;
(2) the model proposed by the present invention has a small number of parameters and can be trained in a short time;
(3) the present invention is applicable to all kinds of high-dimensional multi-granularity data, has strong adaptive ability, achieves good results, and can visualize the finally selected features.
Brief description of the drawings
Fig. 1: flow chart of the method of the present invention;
Fig. 2: schematic diagram of the FSelCNN model structure.
Specific embodiment
A specific embodiment of the invention is described in further detail with reference to Fig. 2. The following embodiment serves to illustrate the invention but is not intended to limit its scope.
The specific implementation steps are as follows:
Step 1: data representation
Taking the elderly dataset as an example, the dataset is denoted D = {X1, X2, …, Xn} with 13500 records, i.e. n = 13500. Each data point Xi = (x1, x2, …, xm) = (x1, x2, …, x42) (i = 1, 2, …, n), i.e. m = 42, and each feature xj = (xj1, xj2, …, xjl) = (xj1, xj2, xj3) (j = 1, 2, …, m), i.e. l = 3. Each data point X ∈ D can then be expressed as a 42 × 3 matrix A.
Each row of the matrix A represents one feature of the elderly data point X, and that feature is distributed over 3 different dimensions; each such feature is referred to herein as a multi-granularity feature. The original data format is shown in Table 1 below:
Table 1: example of the original data
Step 2: construct the FSelCNN (Feature Select CNN) model based on CNN and use it to reduce the dimensionality of the multi-granularity features
2.1 Data preprocessing
The matrix A corresponding to each data point X is treated as a 42 × 3 picture whose pixels are the individual granularities xij (i = 1, 2, …, 42; j = 1, 2, 3), i.e. the values of each feature in its different dimensions.
2.2 Constructing the function f based on CNN (Convolutional Neural Networks)
We seek a function f = f(A), f(A) ∈ R42, i.e. a function f that maps each feature from its 3 granularities into a one-dimensional space, so that each feature receives a single value. In deep learning, the convolution operation of a convolutional neural network (CNN) can extract from raw data features that are significantly more effective for machine learning algorithms; it is therefore used as the basis for constructing f, i.e. the model FSelCNN, whose schematic is as in Fig. 2.
In Fig. 2, the FSelCNN model first convolves the matrix A ∈ R42×3 with M = 32 kernels of size 1 × 3 at moving stride step = 1; this convolution operation is denoted Φ1 and proceeds top-down, each kernel producing one 42 × 1 column vector, so 32 column vectors of size 42 × 1 are generated after this layer. Next, a single 1 × 1 kernel further convolves the result over the depth dimension with step = 1; this operation is denoted Φ2 and finally yields a single 42 × 1 column vector. The result is then output through a fully connected layer. As in formula 1, Φij = σ( Σ_{k=1}^{3} wik xjk + bi ), where σ is the activation function and wik, bi are trainable parameters.
Each convolution kernel performs 42 convolution operations Φ1, yielding a 42 × 1 intermediate variable h ∈ R42×1, which can be written as the column vector
hi = (Φi1, Φi2, …, Φi,42)T, i = 1, 2, …, 32 (formula 2)
The 1 × 1 convolution over depth then gives
Φ2j = σ( Σ_{i=1}^{32} ui hij + bj ) (formula 3)
where ui and bj are trainable parameters.
The final constructed function f is therefore
f = (Φ21, Φ22, …, Φ2,42) (formula 4)
2.3 Training the FSelCNN model to determine f
The entire dataset is first denoted D = {(X1, y1), (X2, y2), …, (X13500, y13500)}, where yi (i = 1, 2, …, 13500) is the class label label ∈ {0, 1}: 1 indicates poor balance ability and 0 indicates good balance ability. The dataset D is then split into a training set and a test set and trained with the deep learning framework Keras. When the accuracy of the model on the test set satisfies Acc ∈ (0.90, 1], the Flatten layer of the network is extracted, i.e. the network applies a Flatten (flattening) operation after the whole convolution stage, which yields the function
f = (Φ21, Φ22, …, Φ2,42)
where the accuracy Acc = Nright / N is the number of correctly classified samples divided by the total number of samples.
Step 3: construct a dataset and perform feature selection with RF
3.1 Construct the dataset D' = ((f1, y1), (f2, y2), …, (f13500, y13500)), where the data fi ∈ R42 (i = 1, 2, …, 13500) are obtained by feeding the input data X through the trained FSelCNN model (formula 4), and yi (i = 1, 2, …, 13500) is the class label label ∈ {0, 1}: 1 indicates poor balance ability and 0 indicates good balance ability.
An example of the dataset is shown in Table 2 below:
Table 2: example of the training dataset
3.2 Obtain feature importances by training an RF model
The dataset D' is first split into a training set and a test set, and part of the training set is set aside as a validation set. The parameters are tuned according to the model's precision on the validation set until the accuracy of the model on the test set satisfies Acc ∈ (0.90, 1]. The trained random forest RF model then outputs 42 feature pairs (F1, I1), (F2, I2), …, (F42, I42), where Ii is the importance of the feature Fi. The importances of the 42 features are shown in Table 3 below:
Table 3: feature importance table
3.3 Select features by importance
The pairs (F1, I1), (F2, I2), …, (F42, I42) are sorted by the feature importance Ii in descending order, and the first N = 25 required features are selected, in order: F28, F12, F18, F8, F38, F5, F36, F11, F19, F27, F29, F9, F33, F17, F3, F13, F21, F4, F39, F40, F35, F25, F7, F20, F30.
Step 4: experimental analysis
The experimental results are shown in Table 4 below:
Table 4: model comparison
Model          | PCA   | LDA   | FSelCNN | PCA+RF | LDA+RF | FSelCNN+RF
Acc (accuracy) | 0.801 | 0.834 | 0.921   | 0.734  | 0.833  | 0.937
Table 4 shows that, for the multi-granularity dimensionality reduction of step 2.3, the model FSelCNN proposed in this patent far exceeds the existing techniques PCA and LDA in precision on the elderly balance-ability dataset; FSelCNN is therefore chosen for the dimensionality reduction of multi-granularity features. When features are refined by RF training in step 3.2, FSelCNN+RF likewise outperforms the conventional model combinations.

Claims (1)

1. one kind is based on CNN and the more grain size characteristic selection methods of RF higher-dimension, comprising the following steps:
Step 1: canonically represent the data awaiting feature selection: the data are denoted D = {X1, X2, …, Xn}, where each data point
Xi = (x1, x2, …, xm) (i = 1, 2, …, n) and each feature xj = (xj1, xj2, …, xjl) (j = 1, 2, …, m); each data point X ∈ D can then be expressed as an m × l matrix A,
where each row of the matrix A represents one feature of the data point X and that feature is distributed over l different dimensions; each such feature is referred to herein as a multi-granularity feature;
Step 2: construct and train the FSelCNN model based on CNN, where f denotes the FSelCNN model function; the model is used to reduce the dimensionality of the multi-granularity features;
Step 2.1: construct the FSelCNN model
The FSelCNN model consists, in order, of two convolutional layers, one Flatten layer and one fully connected layer. The first convolutional layer contains M parallel kernels of size 1 × l whose input is the matrix A ∈ Rm×l; every kernel performs the same convolution operation Φ1 and outputs an m × 1 column vector, so the first convolutional layer outputs a matrix B ∈ Rm×M. The second convolutional layer contains a single kernel of size 1 × 1 whose input is the output of the first convolutional layer; its convolution operation is Φ2, and its output then passes through the Flatten layer, which outputs an m × 1 column vector. The Flatten layer applies what amounts to a reshape operation to the output of the second convolutional layer;
Step 2.2: train the FSelCNN model and determine the function f
First, construct the dataset D = {(X1, y1), (X2, y2), …, (Xn, yn)} from the canonically represented data, where yi (i = 1, 2, …, n) is the class label; then split the dataset D into a training set and a test set and train the FSelCNN model on the training set with a deep learning framework; when the accuracy of the model on the test set satisfies Acc ∈ (α, 1], where α is a constant, extract the output of the Flatten layer of FSelCNN to obtain the function f, and training is complete, where the accuracy Acc = Nright / N is the number of correctly classified samples Nright divided by the total number of samples N;
Step 3: perform feature selection on the data awaiting feature selection using the random forest RF technique
3.1 Construct the dataset D' = ((f1, y1), (f2, y2), …, (fn, yn)), where the data fi ∈ Rm (i = 1, 2, …, n) are obtained by feeding the canonically represented data into the function f, and yi (i = 1, 2, …, n) is the class label, identical to the label in step 2.2;
3.2 Obtain feature importances by training an RF model
First split the dataset D' into a training set and a test set and train an RF model on the training set until the accuracy of the model on the test set satisfies Acc ∈ (β, 1], where β is a constant; the trained RF model then outputs m feature pairs (F1, I1), (F2, I2), …, (Fm, Im), where Ii is the importance of the feature Fi;
3.3 Select features by importance
Sort the pairs (F1, I1), (F2, I2), …, (Fm, Im) by the feature importance Ii in descending order and select the first N features.
CN201910347785.2A 2019-04-28 2019-04-28 CNN and RF based balance ability feature selection method for old people Active CN110084303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910347785.2A CN110084303B (en) 2019-04-28 2019-04-28 CNN and RF based balance ability feature selection method for old people

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910347785.2A CN110084303B (en) 2019-04-28 2019-04-28 CNN and RF based balance ability feature selection method for old people

Publications (2)

Publication Number Publication Date
CN110084303A true CN110084303A (en) 2019-08-02
CN110084303B CN110084303B (en) 2022-02-15

Family

ID=67417165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910347785.2A Active CN110084303B (en) 2019-04-28 2019-04-28 CNN and RF based balance ability feature selection method for old people

Country Status (1)

Country Link
CN (1) CN110084303B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170169360A1 (en) * 2013-04-02 2017-06-15 Patternex, Inc. Method and system for training a big data machine to defend
CN106991374A (en) * 2017-03-07 2017-07-28 中国矿业大学 Handwritten Digit Recognition method based on convolutional neural networks and random forest
CN107480702A (en) * 2017-07-20 2017-12-15 东北大学 Towards the feature selecting and Feature fusion of the identification of HCC pathological images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170169360A1 (en) * 2013-04-02 2017-06-15 Patternex, Inc. Method and system for training a big data machine to defend
CN106991374A (en) * 2017-03-07 2017-07-28 中国矿业大学 Handwritten Digit Recognition method based on convolutional neural networks and random forest
CN107480702A (en) * 2017-07-20 2017-12-15 东北大学 Towards the feature selecting and Feature fusion of the identification of HCC pathological images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG Jundong, et al.: "Dialogue act recognition for Chinese out-of-domain utterances using hybrid CNN-RF", 2016 International Conference on Asian Language Processing *
FU Wei (付炜), et al.: "Audio classification method based on convolutional neural network and random forest" (《基于卷积神经网络和随机森林的音频分类方法》), Journal of Computer Applications (《计算机应用》) *

Also Published As

Publication number Publication date
CN110084303B (en) 2022-02-15

Similar Documents

Publication Publication Date Title
Zhu et al. High performance vegetable classification from images based on alexnet deep learning model
CN106295507B (en) A kind of gender identification method based on integrated convolutional neural networks
CN110348399B (en) Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual error network
CN111242208A (en) Point cloud classification method, point cloud segmentation method and related equipment
CN110288030A (en) Image-recognizing method, device and equipment based on lightweight network model
CN109784153A (en) Emotion identification method, apparatus, computer equipment and storage medium
CN104408483B (en) SAR texture image classification methods based on deep neural network
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN105243139A (en) Deep learning based three-dimensional model retrieval method and retrieval device thereof
CN106845528A (en) A kind of image classification algorithms based on K means Yu deep learning
CN109086886A (en) A kind of convolutional neural networks learning algorithm based on extreme learning machine
CN110399895A (en) The method and apparatus of image recognition
CN110264407B (en) Image super-resolution model training and reconstruction method, device, equipment and storage medium
CN109344898A (en) Convolutional neural networks image classification method based on sparse coding pre-training
Meng et al. Few-shot image classification algorithm based on attention mechanism and weight fusion
CN111046920A (en) Method for training food image classification model and image classification method
CN109918542A (en) A kind of convolution classification method and system for relationship diagram data
CN109816030A (en) A kind of image classification method and device based on limited Boltzmann machine
CN109711442A (en) Unsupervised layer-by-layer generation fights character representation learning method
Pathak et al. Classification of fruits using convolutional neural network and transfer learning models
Wang et al. A new Gabor based approach for wood recognition
Cui et al. Maize leaf disease classification using CBAM and lightweight Autoencoder network
CN106803105A (en) A kind of image classification method based on rarefaction representation dictionary learning
Perveen et al. Multidimensional Attention-Based CNN Model for Identifying Apple Leaf Disease.
Li et al. Common pests classification based on asymmetric convolution enhance depthwise separable neural network

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant