CN111626332B - Fast semi-supervised classification method based on a graph convolution extreme learning machine - Google Patents

Fast semi-supervised classification method based on a graph convolution extreme learning machine

Info

Publication number
CN111626332B
CN111626332B (application CN202010341727.1A)
Authority
CN
China
Prior art keywords
learning machine
graph
formula
classification data
disease classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010341727.1A
Other languages
Chinese (zh)
Other versions
CN111626332A (en)
Inventor
张子佳
蔡耀明
龚文引
刘小波
蔡之华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN202010341727.1A
Publication of CN111626332A
Application granted
Publication of CN111626332B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for simulation or modelling of medical disorders

Abstract

The invention provides a fast semi-supervised classification method based on a graph convolution extreme learning machine, which comprises the following steps: constructing a self-expression model of the disease classification data, and using the self-expression model to construct a global robustness graph of the disease classification data to obtain the adjacency matrix A of the disease classification data; calculating the random graph convolution model output H from the adjacency matrix A; calculating the output-layer weight beta of the graph convolution extreme learning machine from the random graph convolution model output H; and classifying the unlabeled disease classification data by using the calculated output-layer weight beta. The invention has the beneficial effects that a graph convolution network is introduced to replace the hidden layer of the extreme learning machine method, forming a brand-new graph convolution extreme learning machine model; the model can process non-Euclidean graph-structured data, and thus generalizes to fields such as disease classification, bioinformatics and medicinal chemistry, while retaining the fast learning speed and universal approximation capability of the extreme learning machine.

Description

Fast semi-supervised classification method based on a graph convolution extreme learning machine
Technical Field
The invention relates to the field of pattern recognition and data classification, in particular to a fast semi-supervised classification method based on a graph convolution extreme learning machine.
Background
The extreme learning machine (ELM) is an important technology that has achieved great success in fields such as medical/biological data analysis, computer vision, image processing, and system modeling and prediction. The extreme learning machine is a special case of the random vector functional-link network: it is a single-hidden-layer feedforward neural network whose hidden layer is randomly generated and whose output weights admit an analytic solution. Because the extreme learning machine avoids training the hidden layer, it has the advantages of a small computational load and a high computation speed.
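The closed-form training scheme just described can be sketched as follows. This is a minimal NumPy illustration of a classic ELM (the conventional method, not yet the patent's graph model); the data sizes and values are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: N samples, m features, C classes (one-hot labels).
N, m, L, C = 200, 10, 64, 3
X = rng.standard_normal((N, m))
Y = np.eye(C)[rng.integers(0, C, N)]

# The hidden layer is randomly generated and never trained --
# the defining property of an extreme learning machine.
W = rng.standard_normal((m, L))
b = rng.standard_normal(L)
H = np.tanh(X @ W + b)

# The output weights admit an analytic least-squares solution.
beta = np.linalg.pinv(H) @ Y
```

Because no gradient descent is involved, training reduces to a single pseudo-inverse, which is the source of the speed advantage referred to here.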
Although the extreme learning machine has many performance advantages and wide application fields, it can only operate on conventional Euclidean data such as text (one-dimensional sequence data) and pictures (two-dimensional grid data). For non-Euclidean graph-structured data (graphs), such as medical disease and biomolecule data, the traditional extreme learning machine has difficulty directly processing the neighbor relations, so graph data mining with the extreme learning machine remains an open problem to be solved.
In recent years, graph neural networks (Graph Neural Networks) have gained considerable attention from researchers by virtue of their powerful performance on graph-structured data. Unlike conventional neural networks, graph neural networks capture graph dependencies through information transfer between graph nodes. In particular, a graph neural network updates the hidden state of each node by aggregating the information of its neighboring nodes. However, this technique has a drawback: it relies on gradient-descent optimization, converges slowly, and easily falls into local optima.
Disclosure of Invention
In view of this, in order to extend the extreme learning machine to non-Euclidean graph-structured data, the invention provides a fast semi-supervised classification method based on a graph convolution extreme learning machine. A graph convolution is introduced into the extreme learning machine method to replace the hidden layer, so that the method gains the capability of processing non-Euclidean graph-structured data while retaining the fast learning speed and universal approximation capability of the extreme learning machine. Experiments prove that the method can be effectively applied to the classification of disease data.
The invention provides a fast semi-supervised classification method based on a graph convolution extreme learning machine, which specifically comprises the following steps:
S101: constructing a self-expression model of the disease classification data, and using the self-expression model to construct a global robustness graph of the disease classification data to obtain the adjacency matrix A of the input disease classification data;
S102: calculating the random graph convolution model output H from the adjacency matrix A;
S103: combining with the extreme learning machine, calculating the output-layer weight beta of the graph convolution extreme learning machine from the random graph convolution model output H;
S104: classifying the unlabeled disease classification data by using the output-layer weight beta of the graph convolution extreme learning machine.
Further, in step S101, a self-expression model of the input disease classification data is constructed, as shown in formula (1):
X^T Z = X^T, s.t. diag(Z) = 0 (1)
In formula (1), X ∈ R^{N×m} represents the input disease classification data feature set, where N is the number of samples in the feature set and m is the feature dimension; Z ∈ R^{N×N} represents the self-expression coefficient matrix, and diag(Z) = 0 means that the diagonal elements of Z are zero.
Further, in step S101, the global robustness graph of the disease classification data is constructed by using the self-expression model to obtain the adjacency matrix A of the disease classification data, specifically:
S201: minimizing formula (1) by the Lagrange multiplier method gives formula (2):
min_Z ||X^T Z - X^T||_F^2 + α||Z||_F^2 (2)
In formula (2), ||·||_F denotes the Frobenius norm of a matrix, and α is a regularization coefficient with a preset value;
S202: taking the partial derivative of formula (2) with respect to Z and solving analytically gives formula (3):
Z = (XX^T + αI_N)^{-1} XX^T (3)
In formula (3), I_N denotes the N-dimensional identity matrix;
S203: taking Z as an estimate of the edge weights of the global robustness graph, the adjacency matrix A of the disease classification data is constructed from formula (3), as in formula (4):
A = (|Z| + |Z^T|) / 2 (4)
Further, in step S102, the random graph convolution model output H is calculated from the adjacency matrix A, as shown in formula (5):
H = σ(ΛXW) (5)
In formula (5), H ∈ R^{N×L} represents the output of the random graph convolution, and σ represents the nonlinear activation function; W ∈ R^{m×L} is a randomly generated hidden-layer convolution kernel, and L is the number of randomly generated hidden-layer neurons; Λ ∈ R^{N×N} represents the normalized graph Laplacian matrix with self-loops, Λ = D^{-1/2}(A + I_N)D^{-1/2}, where D is the degree matrix of A + I_N, with D_ii = 1 + Σ_j A_ij; A_ij is the element in the i-th row and j-th column of the matrix A, i, j = 1, 2, 3, ..., N.
Further, the nonlinear activation function is any one of the Sigmoid function, the Tanh function, the ReLU function, and the Leaky ReLU function.
Further, in step S103, in combination with the extreme learning machine, calculating an output layer weight β of the graph convolution extreme learning machine according to the random graph convolution model output H, specifically:
the random graph convolution model output H is divided into two types, namely graph convolution hidden layer output H of marked samplesΓOutput H of the convolution of unlabeled samplesu
Figure GDA0002928550670000039
A category matrix representing the labeled sample; n is a radical ofΓRepresents the number of labeled samples in the disease data set, C is the number of categories of the disease data set, and constructs an objective function as shown in equation (6):
HΓβ=YΓ (6)
and (3) solving equation (6) by ridge regression optimization, wherein equation (7) shows that:
Figure GDA00029285506700000310
the output layer weight β in equation (7) is subjected to partial derivation to obtain equation (8):
Figure GDA00029285506700000311
in the formula (8), λ is a nonnegative regularization coefficient, which is a preset value;
and (3) making the formula (8) equal to 0, and solving to obtain the output layer weight beta of the graph volume positive limit learning machine as the formula (9):
Figure GDA0002928550670000041
in the formula (9), ILIs an L-dimensional identity matrix.
Further, in step S104, the unlabeled disease classification data are classified by using the output-layer weight β of the graph convolution extreme learning machine, as shown in formula (10):
Y_u = H_u β (10)
Y_u is the label matrix to be predicted, obtained by applying the fast semi-supervised classification method based on the graph convolution extreme learning machine to the unlabeled disease classification data;
The index corresponding to the maximum activation value of each disease classification data sample in Y_u is taken as the final predicted label.
The invention has the beneficial effects that a graph convolution network is introduced to replace the hidden layer of the extreme learning machine method, forming a brand-new graph convolution extreme learning machine model; the model can process non-Euclidean graph-structured data, and thus generalizes to fields such as disease classification, bioinformatics and medicinal chemistry, while retaining the fast learning speed and universal approximation capability of the extreme learning machine.
Drawings
FIG. 1 is a flow chart of the fast semi-supervised classification method based on a graph convolution extreme learning machine according to the present invention;
FIG. 2 is a schematic diagram of the graph convolution extreme learning machine model of the fast semi-supervised classification method based on the graph convolution extreme learning machine of the present invention;
FIG. 3 is a graph showing the results of an experiment according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present invention provides a fast semi-supervised classification method based on a graph convolution extreme learning machine, which specifically includes:
S101: constructing a self-expression model of the disease classification data, and using the self-expression model to construct a global robustness graph of the disease classification data to obtain the adjacency matrix A of the disease classification data;
S102: calculating the random graph convolution model output H from the adjacency matrix A;
S103: combining with the extreme learning machine, calculating the output-layer weight beta of the graph convolution extreme learning machine from the random graph convolution model output H;
S104: classifying the unlabeled disease classification data by using the output-layer weight beta of the graph convolution extreme learning machine.
In step S101, a self-expression model of the disease classification data is constructed, as shown in formula (1):
X^T Z = X^T, s.t. diag(Z) = 0 (1)
In formula (1), X ∈ R^{N×m} represents the input disease classification data feature set, where N is the number of samples in the feature set and m is the feature dimension; Z ∈ R^{N×N} represents the self-expression coefficient matrix, and diag(Z) = 0 means that the diagonal elements of Z are zero.
In step S101, the global robustness graph of the disease classification data is constructed by using the self-expression model to obtain the adjacency matrix A of the disease classification data, specifically:
S201: minimizing formula (1) by the Lagrange multiplier method gives formula (2):
min_Z ||X^T Z - X^T||_F^2 + α||Z||_F^2 (2)
In formula (2), ||·||_F denotes the Frobenius norm of a matrix, and α is a regularization coefficient with a preset value;
S202: taking the partial derivative of formula (2) with respect to Z and solving analytically gives formula (3):
Z = (XX^T + αI_N)^{-1} XX^T (3)
In formula (3), I_N denotes the N-dimensional identity matrix;
S203: taking Z as an estimate of the edge weights of the global robustness graph, the adjacency matrix A of the disease classification data is constructed from formula (3), as in formula (4):
A = (|Z| + |Z^T|) / 2 (4)
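Steps S201 to S203 can be sketched in NumPy as follows. This is a minimal illustration: the data, the value of alpha, and the symmetrization used for formula (4) are assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical disease-classification features: N samples, m features.
N, m, alpha = 150, 20, 1.0
X = rng.standard_normal((N, m))

# Formula (3): analytic solution of the self-expression model,
# Z = (X X^T + alpha I_N)^{-1} X X^T (solve() is preferred over an explicit inverse).
G = X @ X.T
Z = np.linalg.solve(G + alpha * np.eye(N), G)

# Formula (4) (assumed symmetrization): A = (|Z| + |Z^T|) / 2,
# using Z as the estimate of the global robustness graph's edge weights.
A = (np.abs(Z) + np.abs(Z.T)) / 2
```

The resulting A is symmetric and non-negative, as required of an adjacency matrix.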
In step S102, the random graph convolution model output H is calculated from the adjacency matrix A, as shown in formula (5):
H = σ(ΛXW) (5)
In formula (5), H ∈ R^{N×L} represents the output of the random graph convolution, and σ represents the nonlinear activation function; W ∈ R^{m×L} is a randomly generated hidden-layer convolution kernel, and L is the number of randomly generated hidden-layer neurons; Λ ∈ R^{N×N} represents the normalized graph Laplacian matrix with self-loops, Λ = D^{-1/2}(A + I_N)D^{-1/2}, where D is the degree matrix of A + I_N, with D_ii = 1 + Σ_j A_ij; A_ij is the element in the i-th row and j-th column of the matrix A, i, j = 1, 2, 3, ..., N.
The nonlinear activation function is any one of the Sigmoid function, the Tanh function, the ReLU function, and the Leaky ReLU function.
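Step S102 can be sketched as follows, assuming the GCN-style renormalization Λ = D^{-1/2}(A + I)D^{-1/2}; the adjacency matrix here is a random stand-in for the output of step S101, and Tanh is chosen from the listed activations:

```python
import numpy as np

rng = np.random.default_rng(0)

N, m, L = 150, 20, 100            # samples, features, hidden neurons
X = rng.standard_normal((N, m))

# Stand-in symmetric non-negative adjacency matrix (produced by step S101 in practice).
A = np.abs(rng.standard_normal((N, N)))
A = (A + A.T) / 2

# Add self-loops, then apply symmetric degree normalization:
# Lam[i, j] = A_hat[i, j] / sqrt(d_i * d_j), i.e. D^{-1/2} A_hat D^{-1/2}.
A_hat = A + np.eye(N)
d = A_hat.sum(axis=1)
Lam = A_hat / np.sqrt(np.outer(d, d))

# Formula (5): random (untrained) graph convolution H = sigma(Lam X W).
W = rng.standard_normal((m, L))
H = np.tanh(Lam @ X @ W)
```

Since W is never trained, this layer costs only two matrix products per evaluation, which is what preserves the ELM-style speed.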
In step S103, combining with the extreme learning machine, the output-layer weight β of the graph convolution extreme learning machine is calculated from the random graph convolution model output H, specifically:
The random graph convolution model output H is divided into two parts: the graph convolution hidden-layer output H_Γ of the labeled samples and the graph convolution hidden-layer output H_u of the unlabeled samples. Y_Γ ∈ R^{N_Γ×C} represents the category matrix of the labeled samples, N_Γ represents the number of labeled samples in the disease data set, and C is the number of categories of the disease data set. The objective function is constructed as shown in formula (6):
H_Γ β = Y_Γ (6)
Formula (6) is solved by ridge regression optimization, as shown in formula (7):
min_β J(β) = ||H_Γ β - Y_Γ||_F^2 + λ||β||_F^2 (7)
Taking the partial derivative of formula (7) with respect to the output-layer weight β gives formula (8):
∂J/∂β = 2H_Γ^T (H_Γ β - Y_Γ) + 2λβ (8)
In formula (8), λ is a non-negative regularization coefficient with a preset value;
Setting formula (8) equal to 0 and solving gives the output-layer weight β of the graph convolution extreme learning machine as formula (9):
β = (H_Γ^T H_Γ + λI_L)^{-1} H_Γ^T Y_Γ (9)
In formula (9), I_L is the L-dimensional identity matrix.
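The closed-form solution of formula (9) can be sketched as follows; the shapes are hypothetical toy values, and np.linalg.solve is used instead of an explicit matrix inverse for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(0)

N_lab, L, C, lam = 15, 100, 3, 0.1
H_lab = rng.standard_normal((N_lab, L))       # graph-conv output of labeled samples
Y_lab = np.eye(C)[rng.integers(0, C, N_lab)]  # one-hot category matrix

# Formula (9): beta = (H^T H + lam * I_L)^{-1} H^T Y.
beta = np.linalg.solve(H_lab.T @ H_lab + lam * np.eye(L), H_lab.T @ Y_lab)
```

This single linear solve replaces the iterative gradient-descent training a graph neural network would normally require.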
In step S104, the unlabeled disease classification data are classified by using the output-layer weight β of the graph convolution extreme learning machine, as shown in formula (10):
Y_u = H_u β (10)
Y_u is the label matrix to be predicted, obtained by applying the fast semi-supervised classification method based on the graph convolution extreme learning machine to the unlabeled disease classification data;
The index corresponding to the maximum activation value of each disease classification data sample in Y_u is taken as the final predicted label.
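The prediction step S104 then reduces to one matrix product and an argmax; a minimal sketch with hypothetical shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

N_u, L, C = 50, 100, 3
H_u = rng.standard_normal((N_u, L))   # graph-conv output of unlabeled samples
beta = rng.standard_normal((L, C))    # output-layer weights from step S103

Y_u = H_u @ beta                      # formula (10)
labels = Y_u.argmax(axis=1)           # index of the maximum activation per sample
```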
In the embodiment of the invention, in order to verify the effectiveness and superiority of the proposed graph convolution extreme learning machine, classification accuracy is evaluated on 5 disease data sets provided by the University of California, Irvine (UCI). Table 1 gives a detailed description of the 5 disease data sets. For fairness, all comparison algorithms use the same parameter settings: the number L of hidden-layer neurons is 100; 5 samples are selected from each category of each data set as labeled samples, with the remaining samples as unlabeled samples to be predicted; and the main hyper-parameters λ and α are each chosen as the best value from {2^{-6}, 2^{-4}, 2^{-2}, 2^{-1}, 2^0, 2^1, 2^2, 2^4, 2^6, 2^8}.
Table 1: attribute description of disease data sets
Referring to fig. 3, the average classification accuracy (Average) of the present invention and the other comparison algorithms over 30 independent runs is shown in fig. 3, where a pair of symbols indicates, respectively, that GCELM (i.e., the graph convolution extreme learning machine model of the present invention, the same below) is better or worse than a comparison algorithm at the given confidence level. W/T/L indicates that, compared with the other algorithms, GCELM is better on W classification data sets, equal on average on T classification data sets, and worse on L classification data sets. Average represents the accuracy of an algorithm, with the corresponding variance, on the tested classification data sets; the larger the Average value, the higher the classification accuracy on the disease data and the better the classification performance. The experimental results show that GCELM has the following advantages:
GCELM generally achieves better semi-supervised classification performance than traditional ELM, GCN, and similar methods;
GCELM requires no iteration, so its training speed is high;
GCELM generalizes the ELM from the traditional Euclidean space to structured data domains and is therefore a more versatile semi-supervised learning framework.
The invention has the beneficial effects that a graph convolution network is introduced to replace the hidden layer of the extreme learning machine method, forming a brand-new graph convolution extreme learning machine model; the model can process non-Euclidean graph-structured data, and thus generalizes to fields such as disease classification, bioinformatics and medicinal chemistry, while retaining the fast learning speed and universal approximation capability of the extreme learning machine.
The features of the embodiments described hereinabove may be combined with each other without conflict.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (3)

1. A fast semi-supervised classification method based on a graph convolution extreme learning machine, characterized in that the method specifically comprises the following steps:
S101: constructing a self-expression model of the disease classification data, and using the self-expression model to construct a global robustness graph of the disease classification data to obtain the adjacency matrix A of the input disease classification data;
S102: calculating the random graph convolution model output H from the adjacency matrix A;
S103: combining with the extreme learning machine, calculating the output-layer weight beta of the graph convolution extreme learning machine from the random graph convolution model output H;
S104: classifying the unlabeled disease classification data by using the output-layer weight beta of the graph convolution extreme learning machine;
In step S101, a self-expression model of the input disease classification data is constructed, as shown in formula (1):
X^T Z = X^T, s.t. diag(Z) = 0 (1)
In formula (1), X ∈ R^{N×m} represents the input disease classification data feature set, where N is the number of samples in the feature set and m is the feature dimension; Z ∈ R^{N×N} represents the self-expression coefficient matrix, and diag(Z) = 0 means that the diagonal elements of Z are zero;
In step S101, the global robustness graph of the disease classification data is constructed by using the self-expression model to obtain the adjacency matrix A of the disease classification data, specifically:
S201: minimizing formula (1) by the Lagrange multiplier method gives formula (2):
min_Z ||X^T Z - X^T||_F^2 + α||Z||_F^2 (2)
In formula (2), ||·||_F denotes the Frobenius norm of a matrix, and α is a regularization coefficient with a preset value;
S202: taking the partial derivative of formula (2) with respect to Z and solving analytically gives formula (3):
Z = (XX^T + αI_N)^{-1} XX^T (3)
In formula (3), I_N denotes the N-dimensional identity matrix;
S203: taking Z as an estimate of the edge weights of the global robustness graph, the adjacency matrix A of the disease classification data is constructed from formula (3), as in formula (4):
A = (|Z| + |Z^T|) / 2 (4)
In step S102, the random graph convolution model output H is calculated from the adjacency matrix A, as shown in formula (5):
H = σ(ΛXW) (5)
In formula (5), H ∈ R^{N×L} represents the output of the random graph convolution, and σ represents the nonlinear activation function; W ∈ R^{m×L} is a randomly generated hidden-layer convolution kernel, and L is the number of randomly generated hidden-layer neurons; Λ ∈ R^{N×N} represents the normalized graph Laplacian matrix with self-loops, Λ = D^{-1/2}(A + I_N)D^{-1/2}, where D is the degree matrix of A + I_N, with D_ii = 1 + Σ_j A_ij; A_ij is the element in the i-th row and j-th column of the matrix A, i, j = 1, 2, 3, ..., N;
In step S103, combining with the extreme learning machine, the output-layer weight β of the graph convolution extreme learning machine is calculated from the random graph convolution model output H, specifically:
The random graph convolution model output H is divided into two parts: the graph convolution hidden-layer output H_Γ of the labeled samples and the graph convolution hidden-layer output H_u of the unlabeled samples. Y_Γ ∈ R^{N_Γ×C} represents the category matrix of the labeled samples, N_Γ represents the number of labeled samples in the disease data set, and C is the number of categories of the disease data set. The objective function is constructed as shown in formula (6):
H_Γ β = Y_Γ (6)
Formula (6) is solved by ridge regression optimization, as shown in formula (7):
min_β J(β) = ||H_Γ β - Y_Γ||_F^2 + λ||β||_F^2 (7)
Taking the partial derivative of formula (7) with respect to the output-layer weight β gives formula (8):
∂J/∂β = 2H_Γ^T (H_Γ β - Y_Γ) + 2λβ (8)
In formula (8), λ is a non-negative regularization coefficient with a preset value;
Setting formula (8) equal to 0 and solving gives the output-layer weight β of the graph convolution extreme learning machine as formula (9):
β = (H_Γ^T H_Γ + λI_L)^{-1} H_Γ^T Y_Γ (9)
In formula (9), I_L is the L-dimensional identity matrix.
2. The fast semi-supervised classification method based on the graph convolution extreme learning machine according to claim 1, characterized in that: the nonlinear activation function is any one of the Sigmoid function, the Tanh function, the ReLU function, and the Leaky ReLU function.
3. The fast semi-supervised classification method based on the graph convolution extreme learning machine according to claim 1, characterized in that: in step S104, the unlabeled disease classification data are classified by using the output-layer weight β of the graph convolution extreme learning machine, as shown in formula (10):
Y_u = H_u β (10)
Y_u is the label matrix to be predicted, obtained by applying the fast semi-supervised classification method based on the graph convolution extreme learning machine to the unlabeled disease classification data;
The index corresponding to the maximum activation value of each disease classification data sample in Y_u is taken as the final predicted label.
CN202010341727.1A 2020-04-27 2020-04-27 Fast semi-supervised classification method based on a graph convolution extreme learning machine Active CN111626332B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010341727.1A CN111626332B (en) 2020-04-27 2020-04-27 Fast semi-supervised classification method based on a graph convolution extreme learning machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010341727.1A CN111626332B (en) 2020-04-27 2020-04-27 Fast semi-supervised classification method based on a graph convolution extreme learning machine

Publications (2)

Publication Number Publication Date
CN111626332A CN111626332A (en) 2020-09-04
CN111626332B true CN111626332B (en) 2021-03-30

Family

ID=72271773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010341727.1A Active CN111626332B (en) 2020-04-27 2020-04-27 Fast semi-supervised classification method based on a graph convolution extreme learning machine

Country Status (1)

Country Link
CN (1) CN111626332B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104992184A (en) * 2015-07-02 2015-10-21 东南大学 Multiclass image classification method based on semi-supervised extreme learning machine
CN110045015A (en) * 2019-04-18 2019-07-23 河海大学 A kind of concrete structure Inner Defect Testing method based on deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10126141B2 (en) * 2016-05-02 2018-11-13 Google Llc Systems and methods for using real-time imagery in navigation
AU2019267454A1 (en) * 2018-05-06 2021-01-07 Strong Force TX Portfolio 2018, LLC Methods and systems for improving machines and systems that automate execution of distributed ledger and other transactions in spot and forward markets for energy, compute, storage and other resources
US11164038B2 (en) * 2018-08-16 2021-11-02 Uber Technologies, Inc. Imagery evidence matching system
CN110288088A (en) * 2019-06-28 2019-09-27 中国民航大学 Semi-supervised width study classification method based on manifold regularization and broadband network
CN110717390A (en) * 2019-09-05 2020-01-21 杭州电子科技大学 Electroencephalogram signal classification method based on graph semi-supervised width learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104992184A (en) * 2015-07-02 2015-10-21 东南大学 Multiclass image classification method based on semi-supervised extreme learning machine
CN110045015A (en) * 2019-04-18 2019-07-23 河海大学 A kind of concrete structure Inner Defect Testing method based on deep learning

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"Extreme Learning Machine Based on Evolutionary Multi-objective Optimization";Yaoming Cai等;《Singapore: Springer Singapore》;20171109;第420-435页 *
"Hierarchical ensemble of Extreme Learning Machine";Yaoming Cai等;《Pattern Recognition Letters》;20181201;第116卷;第101-106页 *
"一种卷积神经网络和极限学习机相结合的人脸识别方法";余丹等;《数据采集与处理》;20160930;第31卷(第5期);第996-1003页 *
"基于CNN与RoELM的图像分类算法研究";王攀;《计算机与数字工程》;20190331;第47卷(第3期);第666-671页 *
"基于TL1范数约束的子空间聚类方法";李海洋等;《电子与信息学报》;20171031;第39卷(第10期);第2428-2436页 *
"极限学习机前沿进展与趋势";徐睿等;《计算机学报》;20190731;第42卷(第7期);第1640-1670页 *

Also Published As

Publication number Publication date
CN111626332A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
Alom et al. The history began from alexnet: A comprehensive survey on deep learning approaches
JP2021524099A (en) Systems and methods for integrating statistical models of different data modality
CN113705772A (en) Model training method, device and equipment and readable storage medium
CN112966114B (en) Literature classification method and device based on symmetrical graph convolutional neural network
Long et al. Predicting protein phosphorylation sites based on deep learning
Makantasis et al. Rank-r fnn: A tensor-based learning model for high-order data classification
Tripathi et al. Image classification using small convolutional neural network
Ling et al. An intelligent sampling framework for multi-objective optimization in high dimensional design space
Günen et al. Analyzing the contribution of training algorithms on deep neural networks for hyperspectral image classification
CN110263808B (en) Image emotion classification method based on LSTM network and attention mechanism
Du et al. Model-based trajectory inference for single-cell rna sequencing using deep learning with a mixture prior
Schupbach et al. Quantifying uncertainty in neural network ensembles using u-statistics
Fan et al. Surrogate-assisted evolutionary neural architecture search with network embedding
Lebbah et al. BeSOM: Bernoulli on self-organizing map
CN111626332B (en) Fast semi-supervised classification method based on a graph convolution extreme learning machine
Vo-Ho et al. Meta-Learning of NAS for Few-shot Learning in Medical Image Applications
CN109614581A (en) The Non-negative Matrix Factorization clustering method locally learnt based on antithesis
Heidari et al. Graph convolutional networks
Veerabadran et al. Adaptive recurrent vision performs zero-shot computation scaling to unseen difficulty levels
Farokhmanesh et al. Deep neural networks regularization using a combination of sparsity inducing feature selection methods
Konstantinidis et al. Kernel learning with tensor networks
Horn et al. Predicting pairwise relations with neural similarity encoders
Murata et al. Modularity optimization as a training criterion for graph neural networks
Jain Optimization of regularization and early stopping to reduce overfitting in recognition of handwritten characters
Narayanan et al. Overview of Recent Advancements in Deep Learning and Artificial Intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant