CN112270345A - Clustering algorithm based on self-supervision dictionary learning - Google Patents

Clustering algorithm based on self-supervision dictionary learning

Info

Publication number
CN112270345A
CN112270345A
Authority
CN
China
Prior art keywords
clustering
dictionary learning
network
self
classification
Prior art date
Legal status
Granted
Application number
CN202011118690.2A
Other languages
Chinese (zh)
Other versions
CN112270345B (en)
Inventor
Yang Bo (杨博)
Liu Shiyi (刘诗仪)
Current Assignee
Xian Polytechnic University
Original Assignee
Xian Polytechnic University
Priority date
Filing date
Publication date
Application filed by Xian Polytechnic University filed Critical Xian Polytechnic University
Priority to CN202011118690.2A priority Critical patent/CN112270345B/en
Priority claimed from CN202011118690.2A external-priority patent/CN112270345B/en
Publication of CN112270345A publication Critical patent/CN112270345A/en
Application granted granted Critical
Publication of CN112270345B publication Critical patent/CN112270345B/en
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention discloses a clustering algorithm based on self-supervised dictionary learning, which applies self-supervision on the basis of deep dictionary learning. The data are first sparsely represented by a deep dictionary learning network and a similarity matrix is constructed. A clustering module linked to the sparse representation layer then labels the data using the similarity matrix to form pseudo labels, while a classification network module classifies the data; the classification results are compared with the pseudo labels obtained by clustering to construct a self-supervision loss, thereby supervising the dictionary learning network. The invention provides a solution that fully exploits the inherent characteristics of unlabeled data during deep dictionary learning training and uses the obtained results to constrain the learning process, optimizing the whole deep dictionary learning network while improving dictionary learning performance.

Description

Clustering algorithm based on self-supervision dictionary learning
Technical Field
The invention belongs to the technical field of knowledge representation, and relates to a clustering algorithm based on self-supervision dictionary learning.
Background
With the rapid development of computer technology and the Internet, complex high-dimensional data grow exponentially, and the problems of how to acquire, compress, store, transmit, and analyze such data have attracted the attention of many scholars. Dictionary learning and sparse representation are among the methods for finding potential feature representations of complex data; they have been applied in fields such as computer vision and machine learning with excellent results. Clustering, an important branch of unsupervised learning, is one of the key tasks in processing high-dimensional data, with application value in fields such as computer vision and bioinformatics.
Dictionary learning and the sparse representation (sparse coding) problem have received continuous attention from academia and industry, and many researchers have attempted to combine dictionary learning with clustering tasks. Pablo Sprechmann et al. proposed a cross-incoherence term that redefines the sparse representation criterion, defined a dictionary for each data category during clustering, and constructed a framework of continuously learned dictionaries and clustering, improving the effectiveness of both sparse representation and clustering.
As the amount of data continues to grow, so do the corresponding processing requirements. Considering the computational cost of the two problems, Sujit Kumar Sahoo et al. found that the K-SVD dictionary learning method, which combines K-means clustering with Singular Value Decomposition (SVD), cannot maintain any structural sparsity, because the SVD step interferes with sparse coding and the unit norm of the atoms, although as a sequential algorithm it requires fewer computing resources; the MOD (Method of Optimal Directions) algorithm both maintains structural sparsity and simplifies to the K-means algorithm, resembling a parallel generalization of K-means clustering. Because sequential algorithms need fewer computing resources, the SGK algorithm (Sequential Generalization of K-means) was proposed to replace MOD and improve computation speed.
Disclosure of Invention
The invention aims to provide a clustering algorithm based on self-supervised dictionary learning which compares the pseudo labels obtained by clustering with the labels obtained by classification, constructs a self-supervision loss, and thereby realizes a self-supervision effect.
The technical scheme adopted by the invention is a clustering algorithm based on self-supervised dictionary learning, implemented according to the following steps:
step 1, pre-training a deep dictionary learning network;
step 2, training a self-supervised dictionary learning network.
The invention is also characterized in that:
in step 1, the deep dictionary learning network has a linear network structure from the input data to the output dictionary; it adopts the idea of layer-by-layer training and learning, and single-layer dictionary learning consists of input nodes and an output sparse representation layer;
the self-supervised dictionary learning network in step 2 consists of the deep dictionary learning network together with a classification module and a clustering module linked to the sparse representation layer of the deep dictionary learning network.
The clustering module obtains clustering results for the data samples using a graph-theory-based spectral clustering method. The clustering results serve as pseudo labels of the data set: each clustering output is converted into a corresponding k-dimensional vector, where k is the number of clusters, matching the classification network, and the clustering module's results are used as the training target of the classification network;
the classification module consists of two fully-connected layers; linked after the sparse representation layer of the deep dictionary learning network, it classifies the data with the clustering results as the training target, thereby supervising feature extraction and the dictionary learning network;
the spectral clustering adopted by the clustering module uses the similarity matrix W obtained from the dictionary learning network to calculate the degree matrix D, i.e., the sum of the elements of each row of the similarity matrix, and then calculates the Laplacian matrix S:
S=D-W
the eigenvalues of the Laplacian matrix S are arranged from large to small, the eigenvectors corresponding to the first K eigenvalues are calculated, and these eigenvectors are clustered with the K-means algorithm to obtain K clusters, i.e., the clustering result.
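For concreteness, here is a minimal sketch of this spectral clustering step; it is illustrative only, and the function name and the use of numpy/scikit-learn are assumptions rather than the patent's implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W: np.ndarray, k: int) -> np.ndarray:
    """Spectral clustering as described above: degree matrix D from the
    similarity matrix W, Laplacian S = D - W, top-k eigenvectors, K-means."""
    D = np.diag(W.sum(axis=1))            # degree matrix (row sums of W)
    S = D - W                             # Laplacian matrix
    eigvals, eigvecs = np.linalg.eigh(S)  # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:k]   # "from large to small", first k
    V = eigvecs[:, idx]                   # N x k eigenvector matrix
    return KMeans(n_clusters=k, n_init=10).fit_predict(V)  # one label per sample
```

The returned labels can be converted into the k-dimensional vectors described above (one-hot training targets for the classification network) with, for example, np.eye(k)[labels].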
In step 1, training data are input into an untrained neural network with the goal of representing the input data as sparsely as possible; the input reconstructed from the dictionary and the sparse representation is compared with the original data to form the loss function L* of the deep dictionary learning network. Training is carried out on a GPU and the parameters of the deep dictionary learning network are stored. The step is implemented as follows (a code sketch follows the list):
step 1.1, preprocess the data, i.e., convert the images to grayscale and downsample them;
step 1.2, build the deep dictionary learning network through block-wise debugging, function encapsulation, and class integration;
step 1.3, test the deep dictionary learning network obtained in step 1.2: input test data into the network with the relevant parameters assigned and test whether the original images can be reconstructed;
step 1.4, input the training data, train the deep dictionary neural network on the GPU, adjust its specific parameters, finally obtain the corresponding dictionary-based sparse representation, and store the network parameters.
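As an illustrative sketch of steps 1.2–1.4 (the layer layout, the L1 sparsity penalty, the file name, and the use of PyTorch are all assumptions, not the patent's stated implementation):

```python
import torch
import torch.nn as nn

class DeepDictNet(nn.Module):
    """Linear multi-layer dictionary network: X is reconstructed as
    D1 D2 ... Dn Z, where Z is the sparse representation layer."""
    def __init__(self, dims):               # dims = [input_dim, ..., code_dim]
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(d_in, d_out, bias=False)
            for d_in, d_out in zip(dims[:-1], dims[1:]))

    def forward(self, x):
        z = x
        for layer in self.layers:            # encode down to the sparse code Z
            z = layer(z)
        x_hat = z
        for layer in reversed(self.layers):  # reconstruct via the dictionaries
            x_hat = x_hat @ layer.weight
        return z, x_hat

def pretrain(net, loader, epochs=50, lam=1e-3, device="cuda"):
    """Step 1.4: train on the GPU with a reconstruction loss plus an
    (assumed) L1 penalty that encourages sparse representations."""
    net.to(device)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x in loader:                     # loader yields batches of flattened images
            x = x.to(device)
            z, x_hat = net(x)
            loss = ((x - x_hat) ** 2).mean() + lam * z.abs().mean()
            opt.zero_grad(); loss.backward(); opt.step()
    torch.save(net.state_dict(), "deep_dict_pretrained.pt")  # store parameters
```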
In step 2, sparse representations of the data are obtained through the deep dictionary learning network and a similarity matrix between the samples is constructed. The result produced by the clustering module, executed on the CPU using the sparse representations of the current stage, is used as pseudo labels; meanwhile, the classification network classifies the data with the pseudo labels as its training target. By calculating the error between the classification results and the expected labels, the self-supervision loss function L_s is adjusted and the network is retrained, completing the backpropagation of the neural network, realizing self-supervision and improving dictionary learning efficiency. The step is implemented as follows:
step 2.1, the clustering module in the self-supervised dictionary learning network obtains clustering results for the samples by applying spectral clustering to the similarity matrix; the pseudo labels formed from the clustering results are used as the training target of the classification network, and clustering is performed once in each learning pass;
step 2.2, the classification network module classifies the data and constructs a classification loss from the pseudo labels obtained in step 2.1 and the resulting classifications, realizing the self-supervision effect on sparse representation learning.
Step 2.1 is specifically carried out according to the following steps:
step 2.1.1, obtain the similarity matrix W by calculating cosine similarity, arrange the eigenvalues from large to small, take the first k eigenvalues and calculate the corresponding eigenvectors to form an eigenvector matrix;
step 2.1.2, cluster the eigenvector matrix obtained in step 2.1.1 to obtain the clustering result, which is taken as the pseudo labels for the learning process;
step 2.1.3, transmit the clustering result to the classification module as the training target of the classification network.
Step 2.2 is specifically carried out according to the following steps:
step 2.2.1, the classification module classifies the data transmitted from the sparse representation layer;
step 2.2.2, while performing the classification operation, the classification module receives the pseudo labels generated by the clustering module, and supervises the feature extraction of the dictionary learning network by calculating the error between the classification results and the expected labels (a code sketch follows).
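A minimal sketch of such a two-layer classification module (the hidden width and names are assumptions):

```python
import torch.nn as nn

class ClassifierHead(nn.Module):
    """Two fully-connected layers attached after the sparse representation
    layer; outputs logits over the k clusters used as pseudo labels."""
    def __init__(self, code_dim: int, k: int, hidden: int = 128):
        super().__init__()
        self.fc1 = nn.Linear(code_dim, hidden)
        self.fc2 = nn.Linear(hidden, k)

    def forward(self, z):
        return self.fc2(nn.functional.relu(self.fc1(z)))
```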
The loss function L* of the deep dictionary learning network in step 1 is:
L* = min_{D1,…,Dn,Z} ||X − D1D2…DnZ||_F^2  (1)
in formula (1), X is the original input data, Z is the representation over the dictionary, and D1…Dn correspond to the layers of the multi-layer dictionary.
Constructing the similarity matrix between samples in the deep dictionary learning network requires calculating the cosine similarity between two sample points, defined as:
cos(x, y) = (x · y) / (||x|| ||y||)  (2)
in formula (2), x and y are two vectors; assuming N samples in total, an N×N similarity matrix W is obtained by applying formula (2).
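A sketch of this construction, vectorized over all N samples (the function name is an assumption):

```python
import numpy as np

def cosine_similarity_matrix(Z: np.ndarray) -> np.ndarray:
    """Build the N x N similarity matrix W of formula (2) from the
    rows of Z (one sparse code per sample)."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    Zn = Z / np.clip(norms, 1e-12, None)   # guard against zero-norm rows
    return Zn @ Zn.T                       # W[i, j] = cos(z_i, z_j)
```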
The self-supervision loss consists of the error between the classification results and the pseudo labels, together with the distance error between each sample point in the clustering module and the center of the cluster to which it belongs; the self-supervision loss function L_s is expressed as
L_s = Σ_{i=1}^{n} [ ℓ(y_i, q_i) + ||x_i − μ_{C(y_i)}||^2 ]  (3)
in formula (3): n is the number of samples, y_i is the output of the classification network model, q_i is the pseudo label generated by the clustering module, ℓ(y_i, q_i) is the error between the classification result and the pseudo label, C(y_i) is the index of the cluster to which the data belongs, x_i is the sample point, and μ_{C(y_i)} is the cluster center corresponding to that cluster index.
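A sketch of formula (3) as a training loss (the cross-entropy form of the classification error and the use of each sample's sparse code as the "sample point" are assumptions):

```python
import torch
import torch.nn.functional as F

def self_supervised_loss(logits, pseudo, feats, centers):
    """Formula (3) sketch: classification-vs-pseudo-label error plus the
    distance from each sample point to the center of its assigned cluster.
    logits: n x k classifier outputs y_i; pseudo: n pseudo labels q_i;
    feats: n x d sample points x_i; centers: k x d cluster centers."""
    ce = F.cross_entropy(logits, pseudo)        # error between y_i and q_i
    assign = logits.argmax(dim=1)               # C(y_i): assigned cluster index
    dist = ((feats - centers[assign]) ** 2).sum(dim=1).mean()
    return ce + dist
```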
The invention has the beneficial effects that:
the invention applies the self-supervision method on the basis of the existing deep dictionary learning algorithm, realizes the organic integration of deep dictionary learning and clustering, constructs a unified self-supervision dictionary learning network, solves the problem that the sparse representation and the clustering process are separated in the existing method for performing dictionary learning and clustering by using the deep neural network, fully utilizes the internal characteristics of data, improves the clustering effect, and optimizes the whole dictionary learning process and the representation effect.
Drawings
FIG. 1 is a flow chart of the clustering algorithm based on the self-supervised dictionary learning according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention discloses a clustering algorithm based on self-supervised dictionary learning, which fuses self-supervision into a deep dictionary learning network and makes full use of the internal characteristics of the data during training to improve the performance of the whole network. As shown in FIG. 1, the method specifically comprises the following steps:
Step 1, pre-train the deep dictionary learning network. Pre-training mainly inputs the training data, determines the number of layers of the deep dictionary learning network through layer-by-layer training, and extracts data features, ensuring that the clustering results produced by the subsequent clustering module are meaningful.
Step 2, train the self-supervised dictionary learning network. The pre-trained deep dictionary learning network is read, and a clustering module and a classification module are linked after its sparse representation layer to construct the complete self-supervised dictionary learning network. When the deep dictionary learning network is trained, the parameters obtained in pre-training determine its number of layers and sparse representation of the data is the objective: the dictionary learning network reduces the dimensionality of the data, outputs the sparse representation, and constructs the similarity matrix between samples. The clustering module performs spectral clustering with the similarity matrix, and the resulting clusters form the pseudo labels of the data; the classification network module takes the pseudo labels of the current learning pass as its training target, realizing self-supervision and optimizing the representation learning process. A code sketch of this flow is given below.
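Tying the modules together, here is a minimal end-to-end sketch of this step-2 alternation, reusing the helper sketches above (spectral_clustering, cosine_similarity_matrix, DeepDictNet, ClassifierHead, self_supervised_loss); the full-batch processing, optimizer, and schedule are assumptions rather than the patent's implementation:

```python
import torch

def train_self_supervised(net, head, X, epochs, k, device="cuda"):
    """net: pre-trained DeepDictNet; head: ClassifierHead; X: full N x d
    data tensor. Clustering runs on the CPU, training runs on the GPU."""
    net.to(device); head.to(device)
    X = X.to(device)
    opt = torch.optim.Adam(list(net.parameters()) + list(head.parameters()))
    for _ in range(epochs):
        # Clustering module: sparse codes -> similarity matrix -> pseudo labels.
        with torch.no_grad():
            Z = net(X)[0].cpu()
        W = cosine_similarity_matrix(Z.numpy())
        labels = torch.as_tensor(spectral_clustering(W, k)).long()
        # Cluster centers in code space (assumes every cluster is non-empty).
        centers = torch.stack([Z[labels == c].mean(0) for c in range(k)]).to(device)
        pseudo = labels.to(device)
        # Classification module and self-supervised backpropagation.
        z, x_hat = net(X)
        loss = (((X - x_hat) ** 2).mean()
                + self_supervised_loss(head(z), pseudo, z, centers))
        opt.zero_grad(); loss.backward(); opt.step()
```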
In step 1, the deep dictionary learning network has a linear network structure from the input data to the output dictionary; it adopts the idea of layer-by-layer training and learning, and single-layer dictionary learning consists of input nodes and an output sparse representation layer;
the self-supervised dictionary learning network in step 2 consists of the deep dictionary learning network together with a classification module and a clustering module linked to the sparse representation layer of the deep dictionary learning network.
The clustering module obtains clustering results for the data samples using a graph-theory-based spectral clustering method. The clustering results serve as pseudo labels of the data set: each clustering output is converted into a corresponding k-dimensional vector, where k is the number of clusters, matching the classification network, and the clustering module's results are used as the training target of the classification network;
the classification module consists of two fully-connected layers; linked after the sparse representation layer of the deep dictionary learning network, it classifies the data with the clustering results as the training target, thereby supervising feature extraction and the dictionary learning network;
the spectral clustering adopted by the clustering module uses the similarity matrix W obtained from the dictionary learning network to calculate the degree matrix D, i.e., the sum of the elements of each row of the similarity matrix, and then calculates the Laplacian matrix S:
S=D-W
the eigenvalues of the Laplacian matrix S are arranged from large to small, the eigenvectors corresponding to the first K eigenvalues are calculated, and these eigenvectors are clustered with the K-means algorithm to obtain K clusters, i.e., the clustering result.
In step 1, training data are input into an untrained neural network with the goal of representing the input data as sparsely as possible; the input reconstructed from the dictionary and the sparse representation is compared with the original data to form the loss function L* of the deep dictionary learning network. Training is carried out on a GPU and the parameters of the deep dictionary learning network are stored. The step is implemented as follows:
step 1.1, preprocess the data, i.e., convert the images to grayscale and downsample them;
step 1.2, build the deep dictionary learning network through block-wise debugging, function encapsulation, and class integration;
step 1.3, test the deep dictionary learning network obtained in step 1.2: input test data into the network with the relevant parameters assigned and test whether the original images can be reconstructed;
step 1.4, input the training data, train the deep dictionary neural network on the GPU, adjust its specific parameters, finally obtain the corresponding dictionary-based sparse representation, and store the network parameters.
In step 2, sparse representations of the data are obtained through the deep dictionary learning network and a similarity matrix between the samples is constructed. The result produced by the clustering module, executed on the CPU using the sparse representations of the current stage, is used as pseudo labels; meanwhile, the classification network classifies the data with the pseudo labels as its training target. By calculating the error between the classification results and the expected labels, the self-supervision loss function L_s is adjusted and the network is retrained, completing the backpropagation of the neural network, realizing self-supervision and improving dictionary learning efficiency. The step is implemented as follows:
step 2.1, the clustering module in the self-supervised dictionary learning network obtains clustering results for the samples by applying spectral clustering to the similarity matrix; the pseudo labels formed from the clustering results are used as the training target of the classification network, and clustering is performed once in each learning pass;
step 2.2, the classification network module classifies the data and constructs a classification loss from the pseudo labels obtained in step 2.1 and the resulting classifications, realizing the self-supervision effect on sparse representation learning.
Step 2.1 is specifically carried out according to the following steps:
step 2.1.1, obtain the similarity matrix W by calculating cosine similarity, arrange the eigenvalues from large to small, take the first k eigenvalues and calculate the corresponding eigenvectors to form an eigenvector matrix;
step 2.1.2, cluster the eigenvector matrix obtained in step 2.1.1 to obtain the clustering result, which is taken as the pseudo labels for the learning process;
step 2.1.3, transmit the clustering result to the classification module as the training target of the classification network.
Step 2.2 is specifically carried out according to the following steps:
step 2.2.1, the classification module classifies the data transmitted from the sparse representation layer;
step 2.2.2, while performing the classification operation, the classification module receives the pseudo labels generated by the clustering module, and supervises the feature extraction of the dictionary learning network by calculating the error between the classification results and the expected labels.
The loss function L* of the deep dictionary learning network in step 1 is:
L* = min_{D1,…,Dn,Z} ||X − D1D2…DnZ||_F^2  (1)
in formula (1), X is the original input data, Z is the representation over the dictionary, and D1…Dn correspond to the layers of the multi-layer dictionary.
Constructing the similarity matrix between samples in the deep dictionary learning network requires calculating the cosine similarity between two sample points, defined as:
cos(x, y) = (x · y) / (||x|| ||y||)  (2)
in formula (2), x and y are two vectors; assuming N samples in total, an N×N similarity matrix W is obtained by applying formula (2).
The self-supervision loss consists of the error between the classification results and the pseudo labels, together with the distance error between each sample point in the clustering module and the center of the cluster to which it belongs; the self-supervision loss function L_s is expressed as
L_s = Σ_{i=1}^{n} [ ℓ(y_i, q_i) + ||x_i − μ_{C(y_i)}||^2 ]  (3)
in formula (3): n is the number of samples, y_i is the output of the classification network model, q_i is the pseudo label generated by the clustering module, ℓ(y_i, q_i) is the error between the classification result and the pseudo label, C(y_i) is the index of the cluster to which the data belongs, x_i is the sample point, and μ_{C(y_i)} is the cluster center corresponding to that cluster index.
The advantages of the invention are as follows: the deep dictionary learning network extracts data features well, providing enough information for clustering to form pseudo labels of the data; the classification network uses the pseudo labels as training targets and constructs a classification loss, realizing the self-supervision effect, so the whole network makes full use of the internal characteristics of the data and its overall performance is improved.

Claims (10)

1. A clustering algorithm based on self-supervised dictionary learning, characterized by being implemented according to the following steps:
step 1, pre-training a deep dictionary learning network;
step 2, training a self-supervised dictionary learning network.
2. The clustering algorithm based on self-supervised dictionary learning according to claim 1, characterized in that in step 1, the deep dictionary learning network has a linear network structure from the input data to the output dictionary; it adopts the idea of layer-by-layer training and learning, and single-layer dictionary learning consists of input nodes and an output sparse representation layer;
the self-supervised dictionary learning network in step 2 consists of the deep dictionary learning network together with a classification module and a clustering module linked to the sparse representation layer of the deep dictionary learning network.
3. The clustering algorithm based on self-supervised dictionary learning according to claim 2, characterized in that the clustering module obtains clustering results for the data samples using a graph-theory-based spectral clustering method; the clustering results serve as pseudo labels of the data set, each clustering output being converted into a corresponding k-dimensional vector, where k is the number of clusters, matching the classification network, and the clustering module's results are used as the training target of the classification network;
the classification module consists of two fully-connected layers; linked after the sparse representation layer of the deep dictionary learning network, it classifies the data with the clustering results as the training target, thereby supervising feature extraction and the dictionary learning network;
the spectral clustering adopted by the clustering module uses the similarity matrix W obtained from the dictionary learning network to calculate the degree matrix D, i.e., the sum of the elements of each row of the similarity matrix, and then calculates the Laplacian matrix S:
S=D-W
the eigenvalues of the Laplacian matrix S are arranged from large to small, the eigenvectors corresponding to the first K eigenvalues are calculated, and these eigenvectors are clustered with the K-means algorithm to obtain K clusters, i.e., the clustering result.
4. The clustering algorithm based on self-supervised dictionary learning according to claim 3, characterized in that in step 1, training data are input into an untrained neural network with the goal of representing the input data as sparsely as possible; the input reconstructed from the dictionary and the sparse representation is compared with the original data to form the loss function L* of the deep dictionary learning network, training is carried out on a GPU, and the parameters of the deep dictionary learning network are stored; the step is implemented as follows:
step 1.1, preprocess the data, i.e., convert the images to grayscale and downsample them;
step 1.2, build the deep dictionary learning network through block-wise debugging, function encapsulation, and class integration;
step 1.3, test the deep dictionary learning network obtained in step 1.2: input test data into the network with the relevant parameters assigned and test whether the original images can be reconstructed;
step 1.4, input the training data, train the deep dictionary neural network on the GPU, adjust its specific parameters, finally obtain the corresponding dictionary-based sparse representation, and store the network parameters.
5. The clustering algorithm based on self-supervised dictionary learning according to claim 4, characterized in that in step 2, sparse representations of the data are obtained through the deep dictionary learning network and a similarity matrix between the samples is constructed; the result produced by the clustering module, executed on the CPU using the sparse representations of the current stage, is used as pseudo labels, while the classification network classifies the data with the pseudo labels as its training target; by calculating the error between the classification results and the expected labels, the self-supervision loss function L_s is adjusted and the network is retrained, completing the backpropagation of the neural network, realizing self-supervision, and improving dictionary learning efficiency; the step is implemented as follows:
step 2.1, the clustering module in the self-supervised dictionary learning network obtains clustering results for the samples by applying spectral clustering to the similarity matrix; the pseudo labels formed from the clustering results are used as the training target of the classification network, and clustering is performed once in each learning pass;
step 2.2, the classification network module classifies the data and constructs a classification loss from the pseudo labels obtained in step 2.1 and the resulting classifications, realizing the self-supervision effect on sparse representation learning.
6. The clustering algorithm based on self-supervised dictionary learning according to claim 5, characterized in that step 2.1 is implemented specifically according to the following steps:
step 2.1.1, obtain the similarity matrix W by calculating cosine similarity, arrange the eigenvalues from large to small, take the first k eigenvalues and calculate the corresponding eigenvectors to form an eigenvector matrix;
step 2.1.2, cluster the eigenvector matrix obtained in step 2.1.1 to obtain the clustering result, which is taken as the pseudo labels for the learning process;
step 2.1.3, transmit the clustering result to the classification module as the training target of the classification network.
7. The clustering algorithm based on self-supervised dictionary learning according to claim 4, characterized in that step 2.2 is implemented specifically according to the following steps:
step 2.2.1, the classification module classifies the data transmitted from the sparse representation layer;
step 2.2.2, while performing the classification operation, the classification module receives the pseudo labels generated by the clustering module, and supervises the feature extraction of the dictionary learning network by calculating the error between the classification results and the expected labels.
8. The clustering algorithm based on self-supervised dictionary learning according to claim 4, characterized in that the loss function L* of the deep dictionary learning network in step 1 is:
L* = min_{D1,…,Dn,Z} ||X − D1D2…DnZ||_F^2  (1)
in formula (1), X is the original input data, Z is the representation over the dictionary, and D1…Dn correspond to the layers of the multi-layer dictionary.
9. The clustering algorithm based on self-supervised dictionary learning according to claim 5, characterized in that constructing the similarity matrix between samples in the deep dictionary learning network requires calculating the cosine similarity between two sample points, defined as:
cos(x, y) = (x · y) / (||x|| ||y||)  (2)
in formula (2), x and y are two vectors; assuming N samples in total, an N×N similarity matrix W is obtained by applying formula (2).
10. The clustering algorithm based on self-supervised dictionary learning according to claim 5, characterized in that the self-supervision loss consists of the error between the classification results and the pseudo labels, together with the distance error between each sample point in the clustering module and the center of the cluster to which it belongs; the self-supervision loss function L_s is expressed as
L_s = Σ_{i=1}^{n} [ ℓ(y_i, q_i) + ||x_i − μ_{C(y_i)}||^2 ]  (3)
in formula (3): n is the number of samples, y_i is the output of the classification network model, q_i is the pseudo label generated by the clustering module, ℓ(y_i, q_i) is the error between the classification result and the pseudo label, C(y_i) is the index of the cluster to which the data belongs, x_i is the sample point, and μ_{C(y_i)} is the cluster center corresponding to that cluster index.
CN202011118690.2A 2020-10-19 Clustering algorithm based on self-supervision dictionary learning Active CN112270345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011118690.2A CN112270345B (en) 2020-10-19 Clustering algorithm based on self-supervision dictionary learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011118690.2A CN112270345B (en) 2020-10-19 Clustering algorithm based on self-supervision dictionary learning

Publications (2)

Publication Number Publication Date
CN112270345A (en) 2021-01-26
CN112270345B CN112270345B (en) 2024-05-14



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204444717U (en) * 2015-03-10 2015-07-08 西安工程大学 A kind of combined type hanger desk
CN108564012A (en) * 2018-03-29 2018-09-21 北京工业大学 A kind of pedestrian's analytic method based on characteristics of human body's distribution
WO2019226670A1 (en) * 2018-05-21 2019-11-28 Neurala, Inc. Systems and methods for deep neural networks on device learning (online and offline) with and without supervision
US20210216865A1 (en) * 2018-05-21 2021-07-15 Neurala, Inc. Systems and methods for deep neural networks on device learning (online and offline) with and without supervision
US20200257503A1 (en) * 2019-02-07 2020-08-13 Juyang Weng Auto-Programming for General Purposes and Auto-Programming Operating Systems

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GONG YONGHONG et al.: "Unsupervised feature selection algorithm based on self-paced learning", Journal of Computer Applications, pages 2858-2863 *
LIU SHIYI et al.: "Self-supervised Multi-view Subspace Clustering Based on Graph Information", Computer Systems & Applications, pages 377-381 *
XIAO CHENGLONG; ZHANG ZHONGPENG; WANG SHANSHAN; ZHANG RUI; WANG WANLI; WEI XIAN: "Deep Semi-supervised Spectral Clustering Algorithm Based on Manifold Regularization and Pairwise Constraints", Journal of Systems Science and Mathematical Sciences, no. 08, pages 3-19 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112964962A (en) * 2021-02-05 2021-06-15 国网宁夏电力有限公司 Power transmission line fault classification method
CN112964962B (en) * 2021-02-05 2022-05-20 国网宁夏电力有限公司 Power transmission line fault classification method
CN113516181A (en) * 2021-07-01 2021-10-19 北京航空航天大学 Characterization learning method of digital pathological image
CN113516181B (en) * 2021-07-01 2024-03-15 北京航空航天大学 Characterization learning method for digital pathological image
CN114596420A (en) * 2022-03-16 2022-06-07 中关村科学城城市大脑股份有限公司 Laser point cloud modeling method and system applied to urban brain
CN117314900A (en) * 2023-11-28 2023-12-29 诺比侃人工智能科技(成都)股份有限公司 Semi-self-supervision feature matching defect detection method
CN117314900B (en) * 2023-11-28 2024-03-01 诺比侃人工智能科技(成都)股份有限公司 Semi-self-supervision feature matching defect detection method
CN117611931A (en) * 2024-01-23 2024-02-27 西南科技大学 Data classification method and system based on depth self-expression local block learning
CN117611931B (en) * 2024-01-23 2024-04-05 西南科技大学 Data classification method and system based on depth self-expression local block learning

Similar Documents

Publication Publication Date Title
Cheng et al. Model compression and acceleration for deep neural networks: The principles, progress, and challenges
Ding et al. Extreme learning machine: algorithm, theory and applications
Zhou et al. Stacked extreme learning machines
CN109766277B (en) Software fault diagnosis method based on transfer learning and DNN
Hassantabar et al. SCANN: Synthesis of compact and accurate neural networks
CN113033309A (en) Fault diagnosis method based on signal downsampling and one-dimensional convolution neural network
CN113361664A (en) Image recognition system and method based on quantum convolution neural network
CN113535953B (en) Meta learning-based few-sample classification method
Zhou et al. Multiple kernel clustering with compressed subspace alignment
Aziguli et al. A robust text classifier based on denoising deep neural network in the analysis of big data
Li et al. Automatic design of machine learning via evolutionary computation: A survey
EP3847584A1 (en) System and method for synthesis of compact and accurate neural networks (scann)
Zhu et al. TCRAN: Multivariate time series classification using residual channel attention networks with time correction
Ni et al. Algorithm-hardware co-design for efficient brain-inspired hyperdimensional learning on edge
CN115062727A (en) Graph node classification method and system based on multi-order hypergraph convolutional network
CN108388918B (en) Data feature selection method with structure retention characteristics
CN117349311A (en) Database natural language query method based on improved RetNet
Wang et al. A convolutional neural network image classification based on extreme learning machine
CN112270345B (en) Clustering algorithm based on self-supervision dictionary learning
CN112270345A (en) Clustering algorithm based on self-supervision dictionary learning
Lyu et al. A survey of model compression strategies for object detection
Xia et al. Efficient synthesis of compact deep neural networks
Cheng et al. Research on feasibility of convolution neural networks for rock thin sections image retrieval
Hasan et al. Compressed neural architecture utilizing dimensionality reduction and quantization
Meddad et al. A hybrid face identification system using a compressed CNN in a big data environment for embedded devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant