CN113705635A - Semi-supervised width learning classification method and equipment based on adaptive graph - Google Patents

Semi-supervised width learning classification method and equipment based on adaptive graph

Info

Publication number
CN113705635A
Authority
CN
China
Prior art keywords
input data
semi-supervised
nodes
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110921323.4A
Other languages
Chinese (zh)
Inventor
郭宇
熊钰
姜沛林
张玉龙
王飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202110921323.4A priority Critical patent/CN113705635A/en
Publication of CN113705635A publication Critical patent/CN113705635A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

A semi-supervised width learning classification method and equipment based on an adaptive graph are disclosed, wherein the classification method comprises the following steps: firstly, performing random weight mapping on input data and storing the mapped features in feature nodes, then expanding the feature nodes to enhancement nodes through a similar nonlinear feature mapping, and finally combining the feature nodes and the enhancement nodes to form a feature mapping matrix of the input data; learning a similarity matrix from the input data and its feature mapping matrix through semi-supervised learning based on the manifold regularization framework, while inferring the unknown labels, to obtain a loss function; and, for the proposed loss function, solving a locally optimal solution for each variable and iterating the optimization to complete the semi-supervised classification. The method performs classification by jointly optimizing the sparse-autoencoder-based feature extraction process and the adaptive graph structure learning process, improving the stability and performance of the algorithm.

Description

Semi-supervised width learning classification method and equipment based on adaptive graph
Technical Field
The invention belongs to the field of machine learning, and relates to a semi-supervised width learning classification method and equipment based on an adaptive graph.
Background
Graph-based semi-supervised learning is a research hotspot within semi-supervised learning. Its central idea is to explore the pairwise affinities between data points in order to infer the labels of unlabeled data. It models the whole dataset as a graph in which vertices represent data points and edges represent pairwise similarities; a larger edge weight indicates a higher probability that two data points share the same label. The traditional graph-based semi-supervised learning method is divided into two stages: 1) constructing an affinity matrix; 2) inferring the labels of the unknown data. Its essence is label propagation based on the manifold regularization framework, which passes the information of labeled samples to unlabeled samples through the geometric distribution of the graph.
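For illustration of this two-stage scheme only (the example below is not part of the invention), the classic local-and-global-consistency label propagation computes the unknown labels in closed form once the affinity matrix has been fixed. The sketch assumes a precomputed affinity matrix W and a one-hot label matrix Y whose unlabeled rows are zero; the function name and the parameter alpha are illustrative assumptions.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.99):
    """Stage 1 (done elsewhere): build the affinity matrix W.
    Stage 2: propagate labels with F = (I - alpha * S)^(-1) Y,
    where S = D^(-1/2) W D^(-1/2) is the normalized affinity."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    return np.linalg.solve(np.eye(len(W)) - alpha * S, Y)
```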
The width learning system is an efficient incremental learning system based on the random vector functional-link neural network. Its structure is simple, consisting of feature nodes, enhancement nodes and output weights. The feature nodes and enhancement nodes effectively extract features from the data and preserve the system's validity on the data, while the output weights associate each node with the target matrix. To ensure the modeling efficiency of the width learning system, the output weights are obtained by a pseudo-inverse method; apart from the output weights, all other weights and biases in the system are randomly generated. Moreover, an incremental learning algorithm is built into the width learning system, so that the network can be quickly reconstructed when it is expanded on a large scale, without any retraining process. The width learning system architecture is therefore well suited to modeling and learning in time-varying big-data environments.
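As a minimal sketch of how such output weights can be computed (illustrative only, assuming a ridge-regularized pseudo-inverse; the function name and the parameter reg are hypothetical, not prescribed by the patent):

```python
import numpy as np

def output_weights(A, Y, reg=1e-3):
    """Ridge-regularized pseudo-inverse W = (A^T A + reg*I)^(-1) A^T Y,
    associating the combined feature/enhancement node matrix A with the
    target matrix Y."""
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
```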
Currently, traditional graph-based semi-supervised learning approaches simply introduce a manifold regularization framework for label propagation. They treat the construction of the similarity matrix and the inference of unknown labels as two separate stages, so the correlation between the similarity matrix and the label information cannot be exploited, and the similarity matrix is never updated after initialization. Moreover, classification learning is performed directly on the input data, without deep feature extraction. As a result, the processing is inaccurate, the results are unstable, and such approaches are not conducive to constructing an efficient iterative algorithm.
Disclosure of Invention
The invention aims to provide a semi-supervised width learning classification method and equipment based on an adaptive graph, so as to solve the problems in the prior art. Label inference is performed on unknown-label data by exploiting the correlation between the similarity matrix and the label information; the data similarity matrix is learned by assigning each data point an adaptive, optimal neighborhood according to local distances; and the learned similarity matrix is continuously updated according to the input data, the feature matrix and the label information, which improves the stability and performance of the algorithm.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a semi-supervised width learning classification method based on an adaptive graph comprises the following steps:
firstly, performing random weight mapping on input data and storing the mapped features in feature nodes, then expanding the feature nodes to enhancement nodes through a similar nonlinear feature mapping, and finally combining the feature nodes and the enhancement nodes to form a feature mapping matrix of the input data;
learning a similarity matrix from the input data and its feature mapping matrix through semi-supervised learning based on the manifold regularization framework, while inferring the unknown labels, to obtain a loss function;
and, for the proposed loss function, solving a locally optimal solution for each variable and iterating the optimization to complete the semi-supervised classification.
As a preferred scheme of the present invention, the expression for performing random weight mapping on the input data and storing the mapped features in the feature nodes is as follows:

$$Z_i = \phi_i(XW_{e_i} + \beta_{e_i}),\quad i = 1, 2, \dots, n$$

where $X$ is the input data, the weight $W_{e_i}$ and bias $\beta_{e_i}$ are random weight matrices, and $\phi_i$ is a linear transformation;

the expression for expanding the feature nodes to the enhancement nodes through a similar nonlinear feature mapping is as follows:

$$H_j = \xi_j(Z^n W_{h_j} + \beta_{h_j}),\quad j = 1, 2, \dots, m$$

where $Z^n$ is the random linear mapping of the input data, the weight $W_{h_j}$ and bias $\beta_{h_j}$ are random weight matrices, and $\xi_j$ is a nonlinear transformation;

the expression for combining the feature nodes and the enhancement nodes into the feature mapping matrix of the input data is as follows:

$$A = [Z^n \mid H^m]$$

where $Z^n$ is the random linear mapping of the input data and $H^m$ is the nonlinear feature mapping expanded from the feature nodes.
As a preferred embodiment of the invention, the input data is taken from a given semi-supervised data set $\{X \in R^{(l+u) \times d}, Y \in R^{(l+u) \times c}\}$. $X$ is divided into two parts, a labeled data set $X_l = \{x_i\}_{i=1}^{l}$ and an unlabeled data set $X_u = \{x_i\}_{i=l+1}^{l+u}$; the first $l$ rows of $Y$ are $Y_l$ and the remaining rows are all 0, where $l$ and $u$ are the numbers of labeled and unlabeled data respectively, $d$ is the dimension of the input data, and $c$ is the number of classes of the input data.
As a preferred embodiment of the present invention, the expression of the loss function is as follows:

$$\min_{\beta, F, S}\ \frac{\theta}{2}\|\beta\|^2 + \frac{1}{2}\sum_{i=1}^{l} C_i \|e_i\|^2 + \frac{\lambda}{2}\operatorname{Tr}(F^{T} L_S F) + \sum_{i,j=1}^{l+u}\left(\|x_i - x_j\|_2^2\, s_{ij} + \gamma s_{ij}^2\right)$$

$$\text{s.t.}\quad a(x_i)\beta = y_i - e_i,\ i = 1, \dots, l;\qquad f_i = a(x_i)\beta,\ i = 1, \dots, l+u;\qquad S\mathbf{1} = \mathbf{1},\ S \geq 0$$

where $a(x_i)$ is the feature mapping vector of $x_i$, $\beta$ is the output coefficient, $F$ is the label matrix, $f_i$ is the label vector of $x_i$, $\lambda$ is a trade-off parameter, $\theta$ represents a further constraint on $\|\beta\|^2$, and $e_i$ is the training error vector of training sample $x_i$; $C_i$ is a regularization parameter used to represent a trade-off between minimization of the training error and maximization of the margin distance; $\gamma$ represents a further constraint on $\|s_i\|^2$; $S$ is the similarity matrix and $L_S$ is the graph Laplacian matrix, $L_S = D - (S^{T} + S)/2$, where $D$ is the diagonal degree matrix.
As a preferred aspect of the present invention, when $S$ is fixed, the loss function with respect to $\beta$ is:

$$\min_{\beta}\ \frac{\theta}{2}\|\beta\|^2 + \frac{1}{2}\sum_{i=1}^{l} C_i \|e_i\|^2 + \frac{\lambda}{2}\operatorname{Tr}(F^{T} L_S F)$$

$$\text{s.t.}\quad a(x_i)\beta = y_i - e_i,\ i = 1, \dots, l;\qquad f_i = a(x_i)\beta,\ i = 1, \dots, l+u$$

its unconstrained Lagrangian form is:

$$\min_{\beta}\ \frac{\theta}{2}\|\beta\|^2 + \frac{1}{2}(Y - A\beta)^{T} C (Y - A\beta) + \frac{\lambda}{2}\operatorname{Tr}\!\left(\beta^{T} A^{T} L_S A \beta\right)$$

where $Y$ is the enhanced training target, whose first $l$ rows are the labels of the labeled data and whose remaining rows are all 0, and $C$ is a diagonal matrix whose first $l$ diagonal elements are $C_i$, $i = 1, \dots, l$, with the remaining diagonal elements all 0.
As a preferred aspect of the present invention, when $\beta$ is fixed, the loss function with respect to $S$ is:

$$\min_{S}\ \frac{\lambda}{2}\operatorname{Tr}(F^{T} L_S F) + \sum_{i,j=1}^{l+u}\left(\|x_i - x_j\|_2^2\, s_{ij} + \gamma s_{ij}^2\right)$$

$$\text{s.t.}\quad f_i = a(x_i)\beta,\ i = 1, \dots, l+u;\qquad S\mathbf{1} = \mathbf{1},\ S \geq 0$$

for each $i$, its unconstrained Lagrangian form is:

$$\mathcal{L}(s_i, \eta, \beta_i) = \frac{1}{2}\left\|s_i + \frac{d_i}{2\gamma}\right\|_2^2 - \eta\,(s_i^{T}\mathbf{1} - 1) - \beta_i^{T} s_i$$

where $\eta$ and $\beta_i$ are Lagrange multipliers and $d_i$ is the distance vector with entries

$$d_{ij} = d_{ij}^{x} + \frac{\lambda}{2}\, d_{ij}^{f},\qquad d_{ij}^{x} = \|x_i - x_j\|_2^2,\qquad d_{ij}^{f} = \|f_i - f_j\|_2^2.$$
the invention also provides a semi-supervised width learning classification system based on the self-adaptive graph, which comprises the following steps:
the characteristic mapping matrix acquisition module is used for mapping the random weight of the input data, storing the mapped characteristics in characteristic nodes, expanding the characteristic nodes to enhanced nodes through similar nonlinear characteristic mapping, and finally combining the characteristic nodes and the enhanced nodes to form a characteristic mapping matrix of the input data;
the loss function acquisition module is used for learning a similarity matrix from the input data and its feature mapping matrix through semi-supervised learning based on the manifold regularization framework, while inferring the unknown labels, to obtain a loss function;
and the iterative optimization module is used for solving a local optimal solution for each variable according to the proposed loss function, performing iterative optimization and finishing semi-supervised classification.
The invention also provides a terminal device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of the semi-supervised width learning classification method based on the adaptive graph when executing the computer program.
The invention also proposes a computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method for semi-supervised width learning classification based on adaptive maps.
Compared with the prior art, the invention has at least the following beneficial effects. First, feature extraction is performed on the input data through the sparse autoencoder on which the width learning system is based: random weight mapping is applied to the input data, the mapped features are stored in feature nodes, the feature nodes are then expanded to enhancement nodes through a similar nonlinear feature mapping, and finally the feature nodes and the enhancement nodes are combined to form the feature mapping matrix of the input data, ensuring that the extracted features are sparse and representative. In learning the similarity matrix, semi-supervised learning based on the manifold regularization framework learns the similarity matrix from the input data and its feature mapping matrix while inferring the unknown labels, yielding the loss function; the data similarity matrix is learned by assigning each data point an adaptive, optimal neighborhood according to local distances, and label inference on unknown-label data is carried out while the similarity matrix is learned, which guarantees the correlation between the similarity matrix and the label information and ensures that the learned similarity matrix is continuously updated according to the input data, the feature matrix and the label information. In the semi-supervised classification learning, the similarity matrix and the output coefficient matrix of width learning are updated iteratively according to the locally optimal solutions, which improves the stability and performance of the algorithm.
Detailed Description
The present invention will be described in further detail with reference to examples.
The invention provides a semi-supervised width learning classification method based on an adaptive graph, which comprises the following steps:
given a semi-supervised dataset { X ∈ R(l+u)*d,Y∈R(l+u)*cX is divided into two parts, respectively labeled data sets
Figure BDA0003207511970000051
And unlabeled data set
Figure BDA0003207511970000052
First l behavior of YlAnd the other rows are all 0, wherein l and u are the number of marked data and unmarked data respectively, d is the dimension of the input data, and c is the number of types of the input data.
S1, firstly, performing random weight mapping on the input data and storing the mapped features in feature nodes, then expanding the feature nodes to enhancement nodes through a similar nonlinear feature mapping, and finally combining the feature nodes and the enhancement nodes to form the feature mapping matrix of the input data;
the expression for mapping the random weight of the input data and storing the mapped features in the feature nodes is as follows:
Figure BDA0003207511970000061
wherein X is input data and weight WeiAnd offset betaeiIs a matrix of random weights of appropriate dimensions,
Figure BDA0003207511970000062
usually a linear transformation;
the expression for extending feature nodes to enhancement nodes by similar nonlinear feature mapping is as follows:
Hj=ξj(ZnWhjhj),j=1,2,...,m
in the formula, ZnFor random linear mapping of input data, weight WhjAnd offset betahjIs a random weight matrix of appropriate dimension, ξjTypically a non-linear transformation;
the expression of the feature mapping matrix for forming the input data by combining the feature nodes and the enhanced nodes is as follows:
A=[Zn|Hm]
in the formula, ZnFor random linear mapping of input data, HmA non-linear feature map extended by feature nodes.
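A minimal sketch of this feature-mapping construction is given below, assuming identity maps for $\phi_i$ and tanh for $\xi_j$; the function names and hyper-parameters (group counts, node counts) are illustrative assumptions, not values prescribed by the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_nodes(X, n_groups=10, nodes_per_group=10):
    # Z_i = phi_i(X W_ei + beta_ei): random linear mappings stored in feature nodes
    Zs = []
    for _ in range(n_groups):
        W = rng.standard_normal((X.shape[1], nodes_per_group))
        b = rng.standard_normal(nodes_per_group)
        Zs.append(X @ W + b)      # phi_i taken here as the identity (linear) map
    return np.hstack(Zs)          # Z^n = [Z_1, ..., Z_n]

def enhancement_nodes(Zn, m=100):
    # H_j = xi_j(Z^n W_hj + beta_hj): nonlinear expansion of the feature nodes
    W = rng.standard_normal((Zn.shape[1], m))
    b = rng.standard_normal(m)
    return np.tanh(Zn @ W + b)    # xi_j taken here as tanh

def feature_mapping_matrix(X):
    Zn = feature_nodes(X)
    Hm = enhancement_nodes(Zn)
    return np.hstack([Zn, Hm])    # A = [Z^n | H^m]
```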
S2, in the semi-supervised width learning classification based on the adaptive graph, learning a similarity matrix from the input data and its feature mapping matrix through semi-supervised learning based on the manifold regularization framework, while inferring the unknown labels, to obtain the loss function;
the expression of the loss function is as follows:
Figure BDA0003207511970000063
Figure BDA0003207511970000064
fi=a(xi)β,i=1,...l+u
S1=1,S≥0
where β is the output coefficient and θ represents a correlation | β |2To a further constraint of CiIs a regularization parameter representing a trade-off between minimization of training errors and maximization of marginal distance, λ is a trade-off parameter, F is a label matrix, F is a regularization parameteriIs about xiThe label vector of, a (x)i) Is about xiFeature mapping vector of eiIs about the training sample xiOf the training error vector, gamma denotes a pair
Figure BDA0003207511970000071
In (1) isAnd (5) one-step constraint. S is a similarity matrix, LSIs defined as the graph Laplacian matrix, LS=D-(ST+ S)/2, D is the diagonal matrix.
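The graph Laplacian defined above can be computed directly from S; a minimal sketch (the function name is hypothetical):

```python
import numpy as np

def graph_laplacian(S):
    # L_S = D - (S^T + S)/2, with D the diagonal degree matrix
    # of the symmetrized similarity matrix.
    W = (S.T + S) / 2.0
    return np.diag(W.sum(axis=1)) - W
```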
When $S$ is fixed, the loss function with respect to $\beta$ is:

$$\min_{\beta}\ \frac{\theta}{2}\|\beta\|^2 + \frac{1}{2}\sum_{i=1}^{l} C_i \|e_i\|^2 + \frac{\lambda}{2}\operatorname{Tr}(F^{T} L_S F)$$

$$\text{s.t.}\quad a(x_i)\beta = y_i - e_i,\ i = 1, \dots, l;\qquad f_i = a(x_i)\beta,\ i = 1, \dots, l+u$$

further, the unconstrained Lagrangian form is:

$$\min_{\beta}\ \frac{\theta}{2}\|\beta\|^2 + \frac{1}{2}(Y - A\beta)^{T} C (Y - A\beta) + \frac{\lambda}{2}\operatorname{Tr}\!\left(\beta^{T} A^{T} L_S A \beta\right)$$

where $Y$ is the enhanced training target, whose first $l$ rows are the labels of the labeled data and whose remaining rows are all 0, and $C$ is a diagonal matrix whose first $l$ diagonal elements are $C_i$, $i = 1, \dots, l$, with the remaining diagonal elements all 0.
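Setting the gradient of this Lagrangian to zero gives the closed-form update $\beta = (\theta I + A^{T} C A + \lambda A^{T} L_S A)^{-1} A^{T} C Y$. The sketch below implements that formula under the reconstruction above (an assumption consistent with the semi-supervised ELM/BLS literature; the function name is hypothetical):

```python
import numpy as np

def update_beta(A, Y, L_S, C_diag, theta=1.0, lam=1.0):
    # beta = (theta*I + A^T C A + lam * A^T L_S A)^(-1) A^T C Y,
    # where C_diag holds C_i for the first l samples and 0 for the rest.
    C = np.diag(C_diag)
    M = theta * np.eye(A.shape[1]) + A.T @ C @ A + lam * A.T @ L_S @ A
    return np.linalg.solve(M, A.T @ C @ Y)
```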
When $\beta$ is fixed, the loss function with respect to $S$ is:

$$\min_{S}\ \frac{\lambda}{2}\operatorname{Tr}(F^{T} L_S F) + \sum_{i,j=1}^{l+u}\left(\|x_i - x_j\|_2^2\, s_{ij} + \gamma s_{ij}^2\right)$$

$$\text{s.t.}\quad f_i = a(x_i)\beta,\ i = 1, \dots, l+u;\qquad S\mathbf{1} = \mathbf{1},\ S \geq 0$$

further, for each $i$, the unconstrained Lagrangian form is:

$$\mathcal{L}(s_i, \eta, \beta_i) = \frac{1}{2}\left\|s_i + \frac{d_i}{2\gamma}\right\|_2^2 - \eta\,(s_i^{T}\mathbf{1} - 1) - \beta_i^{T} s_i$$

where $\eta$ and $\beta_i$ are Lagrange multipliers and $d_i$ is the distance vector with entries

$$d_{ij} = d_{ij}^{x} + \frac{\lambda}{2}\, d_{ij}^{f},\qquad d_{ij}^{x} = \|x_i - x_j\|_2^2,\qquad d_{ij}^{f} = \|f_i - f_j\|_2^2.$$
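Under this Lagrangian, each row $s_i$ is the Euclidean projection of $-d_i/(2\gamma)$ onto the probability simplex, as in the CAN algorithm. A sketch of the row-wise update follows; the helper names and the diagonal handling are illustrative assumptions.

```python
import numpy as np

def simplex_projection(v):
    # Euclidean projection of v onto {s : s >= 0, sum(s) = 1}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    eta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + eta, 0.0)

def update_similarity(X, F, gamma=1.0, lam=1.0):
    # d_ij = ||x_i - x_j||^2 + (lam/2) * ||f_i - f_j||^2, then s_i is the
    # projection of -d_i/(2*gamma) onto the simplex, row by row.
    dx = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    df = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    d = dx + 0.5 * lam * df
    np.fill_diagonal(d, 1e12)  # discourage self-similarity in this sketch
    return np.vstack([simplex_projection(-d[i] / (2.0 * gamma))
                      for i in range(len(d))])
```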
and S3, optimizing the proposed objective function in an iterative optimization mode, solving a local optimal solution for each variable of the proposed loss function, and further performing iterative optimization, wherein the semi-supervised classification problem can be efficiently completed by designing an iterative algorithm.
The invention also provides a semi-supervised width learning classification system based on the adaptive graph, which comprises:
the characteristic mapping matrix acquisition module is used for mapping the random weight of the input data, storing the mapped characteristics in characteristic nodes, expanding the characteristic nodes to enhanced nodes through similar nonlinear characteristic mapping, and finally combining the characteristic nodes and the enhanced nodes to form a characteristic mapping matrix of the input data;
the loss function acquisition module is used for learning a similarity matrix from the input data and its feature mapping matrix through semi-supervised learning based on the manifold regularization framework, while inferring the unknown labels, to obtain a loss function;
and the iterative optimization module is used for solving a local optimal solution for each variable according to the proposed loss function, performing iterative optimization and finishing semi-supervised classification.
The invention also provides a terminal device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of the semi-supervised width learning classification method based on the adaptive graph when executing the computer program.
The invention also proposes a computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method for semi-supervised width learning classification based on adaptive maps.
The computer program may be partitioned into one or more modules/units stored in the memory and executed by the processor to perform the adaptive graph-based semi-supervised width learning classification method of the present invention.
The terminal can be a desktop computer, a notebook computer, a palmtop computer, a cloud server or other computing equipment, or simply a processor and a memory. The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the adaptive graph-based semi-supervised width learning classification system by running the computer programs and/or modules stored in the memory and invoking the data stored in the memory.
Examples
The semi-supervised width learning classification method based on the adaptive graph is used to perform classification and verify the accuracy:
the method comprises the following steps: and loading a data set { X, Y }, and initializing the feature node and the enhancement node.
Step two: s is initialized by the closed-form solution of the similarity matrix with X.
Step three: according to L ═ D- (S + S)T) The laplacian matrix L is initialized by/2.
Step four: the feature mapping matrix a for X is calculated.
Step five: and optimizing beta according to the Laplace matrix L and the feature mapping matrix A.
Step six: and (4) performing iterative updating on the S and the L by adopting a CAN algorithm.
Step seven: and repeating the fifth step to the sixth step until the convergence of the S, L and beta.
Step eight: according to fi=a(xi) β, i ═ 1.. l + u the classification results were calculated.
Step nine: and calculating the classification Accuracy (ACC) according to the classification result.
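Putting steps one to nine together, a minimal end-to-end sketch might look as follows; it reuses the hypothetical helpers sketched earlier in this description, and the hyper-parameter values are illustrative assumptions rather than values prescribed by the invention:

```python
import numpy as np

def classify(X, Y, l, n_iter=20, theta=1.0, lam=1.0, gamma=1.0):
    A = feature_mapping_matrix(X)              # steps one and four
    F = Y.copy()
    S = update_similarity(X, F, gamma, lam)    # step two
    C_diag = np.r_[np.ones(l), np.zeros(len(X) - l)]
    for _ in range(n_iter):                    # steps five to seven
        L_S = graph_laplacian(S)               # steps three and six
        beta = update_beta(A, Y, L_S, C_diag, theta, lam)
        F = A @ beta                           # f_i = a(x_i) * beta
        S = update_similarity(X, F, gamma, lam)
    return F.argmax(axis=1)                    # step eight: predicted classes
```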
Tables 1 and 2 show the experimental results of the adaptive-graph-based semi-supervised width learning classification method of the present invention on public data sets. In tables 1 and 2, the last column gives the result of the inventive semi-supervised classification on each data set, the third column the result of the SS-ELM algorithm, the fourth column the result of the SS-HELM algorithm, and the fifth column the result of the SS-BLS algorithm. In both tables, the best result for each data set is shown in bold.
The algorithm is tested on 5 public data sets and compared with other excellent semi-supervised classification algorithms; the results verify the effectiveness of the adaptive-graph-based semi-supervised width learning classification method.
TABLE 1
[Table 1 is presented as an image in the original publication; its contents are not reproduced here.]
TABLE 2
[Table 2 is presented as an image in the original publication; its contents are not reproduced here.]
The semi-supervised width learning classification method based on the adaptive graph performs semi-supervised classification of data through adaptive graph learning and width learning. First, feature extraction is performed on the input data based on the sparse self-encoding principle of width learning, ensuring that the extracted features are sparse and representative. Second, in learning the similarity matrix, the data similarity matrix is learned by assigning each data point an adaptive, optimal neighborhood according to local distances, and label inference on unknown-label data is performed while the similarity matrix is learned, ensuring the correlation between the similarity matrix and the label information. Finally, the learned similarity matrix is continuously updated according to the input data, the feature matrix and the label information. In the semi-supervised classification learning, the similarity matrix and the output coefficient matrix of the width learning system are updated iteratively according to the locally optimal solutions, which improves the stability and performance of the algorithm.
The above-mentioned embodiments are only preferred embodiments of the present invention and are not intended to limit its technical solution. It should be understood by those skilled in the art that the technical solution can be modified and substituted in a number of simple ways without departing from the spirit and principle of the present invention, and such modifications and substitutions also fall within the protection scope covered by the claims.

Claims (9)

1. A semi-supervised width learning classification method based on an adaptive graph is characterized by comprising the following steps:
firstly, performing random weight mapping on input data and storing the mapped features in feature nodes, then expanding the feature nodes to enhancement nodes through a similar nonlinear feature mapping, and finally combining the feature nodes and the enhancement nodes to form a feature mapping matrix of the input data;
learning a similarity matrix from the input data and its feature mapping matrix through semi-supervised learning based on the manifold regularization framework, while inferring the unknown labels, to obtain a loss function;
and for the proposed loss function, solving a local optimal solution for each variable, and performing iterative optimization to complete semi-supervised classification.
2. The adaptive graph-based semi-supervised width learning classification method according to claim 1, wherein the expression for performing random weight mapping on the input data and storing the mapped features in feature nodes is as follows:

$$Z_i = \phi_i(XW_{e_i} + \beta_{e_i}),\quad i = 1, 2, \dots, n$$

where $X$ is the input data, the weight $W_{e_i}$ and bias $\beta_{e_i}$ are random weight matrices, and $\phi_i$ is a linear transformation;

the expression for expanding the feature nodes to the enhancement nodes through a similar nonlinear feature mapping is as follows:

$$H_j = \xi_j(Z^n W_{h_j} + \beta_{h_j}),\quad j = 1, 2, \dots, m$$

where $Z^n$ is the random linear mapping of the input data, the weight $W_{h_j}$ and bias $\beta_{h_j}$ are random weight matrices, and $\xi_j$ is a nonlinear transformation;

the expression for combining the feature nodes and the enhancement nodes into the feature mapping matrix of the input data is as follows:

$$A = [Z^n \mid H^m]$$

where $Z^n$ is the random linear mapping of the input data and $H^m$ is the nonlinear feature mapping expanded from the feature nodes.
3. The adaptive-graph-based semi-supervised width learning classification method according to claim 2, wherein the input data is taken from a given semi-supervised data set $\{X \in R^{(l+u) \times d}, Y \in R^{(l+u) \times c}\}$, $X$ being divided into two parts, a labeled data set $X_l = \{x_i\}_{i=1}^{l}$ and an unlabeled data set $X_u = \{x_i\}_{i=l+1}^{l+u}$; the first $l$ rows of $Y$ are $Y_l$ and the remaining rows are all 0, where $l$ and $u$ are the numbers of labeled and unlabeled data respectively, $d$ is the dimension of the input data, and $c$ is the number of classes of the input data.
4. The adaptive-graph-based semi-supervised width learning classification method according to claim 1, wherein the expression of the loss function is as follows:

$$\min_{\beta, F, S}\ \frac{\theta}{2}\|\beta\|^2 + \frac{1}{2}\sum_{i=1}^{l} C_i \|e_i\|^2 + \frac{\lambda}{2}\operatorname{Tr}(F^{T} L_S F) + \sum_{i,j=1}^{l+u}\left(\|x_i - x_j\|_2^2\, s_{ij} + \gamma s_{ij}^2\right)$$

$$\text{s.t.}\quad a(x_i)\beta = y_i - e_i,\ i = 1, \dots, l;\qquad f_i = a(x_i)\beta,\ i = 1, \dots, l+u;\qquad S\mathbf{1} = \mathbf{1},\ S \geq 0$$

where $a(x_i)$ is the feature mapping vector of $x_i$, $\beta$ is the output coefficient, $F$ is the label matrix, $f_i$ is the label vector of $x_i$, $\lambda$ is a trade-off parameter, $\theta$ represents a further constraint on $\|\beta\|^2$, and $e_i$ is the training error vector of training sample $x_i$; $C_i$ is a regularization parameter used to represent a trade-off between minimization of the training error and maximization of the margin distance; $\gamma$ represents a further constraint on $\|s_i\|^2$; $S$ is the similarity matrix and $L_S$ is the graph Laplacian matrix, $L_S = D - (S^{T} + S)/2$, where $D$ is the diagonal degree matrix.
5. The adaptive-graph-based semi-supervised width learning classification method according to claim 4, wherein when $S$ is fixed, the loss function with respect to $\beta$ is:

$$\min_{\beta}\ \frac{\theta}{2}\|\beta\|^2 + \frac{1}{2}\sum_{i=1}^{l} C_i \|e_i\|^2 + \frac{\lambda}{2}\operatorname{Tr}(F^{T} L_S F)$$

$$\text{s.t.}\quad a(x_i)\beta = y_i - e_i,\ i = 1, \dots, l;\qquad f_i = a(x_i)\beta,\ i = 1, \dots, l+u$$

its unconstrained Lagrangian form is:

$$\min_{\beta}\ \frac{\theta}{2}\|\beta\|^2 + \frac{1}{2}(Y - A\beta)^{T} C (Y - A\beta) + \frac{\lambda}{2}\operatorname{Tr}\!\left(\beta^{T} A^{T} L_S A \beta\right)$$

where $Y$ is the enhanced training target, whose first $l$ rows are the labels of the labeled data and whose remaining rows are all 0, and $C$ is a diagonal matrix whose first $l$ diagonal elements are $C_i$, $i = 1, \dots, l$, with the remaining diagonal elements all 0.
6. The adaptive-graph-based semi-supervised width learning classification method according to claim 4, wherein when $\beta$ is fixed, the loss function with respect to $S$ is:

$$\min_{S}\ \frac{\lambda}{2}\operatorname{Tr}(F^{T} L_S F) + \sum_{i,j=1}^{l+u}\left(\|x_i - x_j\|_2^2\, s_{ij} + \gamma s_{ij}^2\right)$$

$$\text{s.t.}\quad f_i = a(x_i)\beta,\ i = 1, \dots, l+u;\qquad S\mathbf{1} = \mathbf{1},\ S \geq 0$$

for each $i$, its unconstrained Lagrangian form is:

$$\mathcal{L}(s_i, \eta, \beta_i) = \frac{1}{2}\left\|s_i + \frac{d_i}{2\gamma}\right\|_2^2 - \eta\,(s_i^{T}\mathbf{1} - 1) - \beta_i^{T} s_i$$

where $\eta$ and $\beta_i$ are Lagrange multipliers and $d_i$ is the distance vector with entries

$$d_{ij} = d_{ij}^{x} + \frac{\lambda}{2}\, d_{ij}^{f},\qquad d_{ij}^{x} = \|x_i - x_j\|_2^2,\qquad d_{ij}^{f} = \|f_i - f_j\|_2^2.$$
7. a semi-supervised width learning classification system based on an adaptive graph, comprising:
the characteristic mapping matrix acquisition module is used for mapping the random weight of the input data, storing the mapped characteristics in characteristic nodes, expanding the characteristic nodes to enhanced nodes through similar nonlinear characteristic mapping, and finally combining the characteristic nodes and the enhanced nodes to form a characteristic mapping matrix of the input data;
the loss function acquisition module is used for learning a similarity matrix from the input data and its feature mapping matrix through semi-supervised learning based on the manifold regularization framework, while inferring the unknown labels, to obtain a loss function;
and the iterative optimization module is used for solving a local optimal solution for each variable according to the proposed loss function, performing iterative optimization and finishing semi-supervised classification.
8. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that: the processor, when executing the computer program, implements the steps of the adaptive map-based semi-supervised width learning classification method according to any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program, characterized in that: the computer program when being executed by a processor realizes the steps of the adaptive map-based semi-supervised width learning classification method according to any one of claims 1 to 6.
CN202110921323.4A 2021-08-11 2021-08-11 Semi-supervised width learning classification method and equipment based on adaptive graph Pending CN113705635A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110921323.4A CN113705635A (en) 2021-08-11 2021-08-11 Semi-supervised width learning classification method and equipment based on adaptive graph

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110921323.4A CN113705635A (en) 2021-08-11 2021-08-11 Semi-supervised width learning classification method and equipment based on adaptive graph

Publications (1)

Publication Number Publication Date
CN113705635A true CN113705635A (en) 2021-11-26

Family

ID=78652367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110921323.4A Pending CN113705635A (en) 2021-08-11 2021-08-11 Semi-supervised width learning classification method and equipment based on adaptive graph

Country Status (1)

Country Link
CN (1) CN113705635A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115409073A * 2022-10-31 2022-11-29 Zhejiang Lab I/Q signal identification-oriented semi-supervised width learning method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009571A * 2017-11-16 2018-05-08 Soochow University A new transductive semi-supervised data classification method and system
CN110288088A * 2019-06-28 2019-09-27 Civil Aviation University of China Semi-supervised width learning classification method based on manifold regularization and broad network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination