CN117611931A - Data classification method and system based on depth self-expression local block learning

Data classification method and system based on depth self-expression local block learning

Info

Publication number
CN117611931A
CN117611931A · Application CN202410091515.0A · Granted as CN117611931B
Authority
CN
China
Prior art keywords
matrix
representing
block
self
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410091515.0A
Other languages
Chinese (zh)
Other versions
CN117611931B (en)
Inventor
张小乾
彭栎璠
王丽超
白克强
何有东
陈宇峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University of Science and Technology filed Critical Southwest University of Science and Technology
Priority to CN202410091515.0A priority Critical patent/CN117611931B/en
Publication of CN117611931A publication Critical patent/CN117611931A/en
Application granted granted Critical
Publication of CN117611931B publication Critical patent/CN117611931B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V10/765 — Image or video recognition using classification, using rules for classification or partitioning the feature space
    • G06N3/048 — Neural networks; activation functions
    • G06N3/0499 — Neural networks; feedforward networks
    • G06N3/082 — Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06V10/7635 — Recognition using clustering based on graphs, e.g. graph cuts or spectral clustering
    • G06V10/778 — Active pattern-learning, e.g. online learning of image or video features
    • G06V10/806 — Fusion of extracted features
    • G06V10/82 — Image or video recognition using neural networks
    • Y02T10/40 — Engine management systems (climate-change mitigation tag)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a data classification method and system based on deep self-expression local block learning, and relates to the technical field of data classification. It addresses the problem that existing deep-learning-based subspace methods do not adequately balance the detail features and abstract features of samples, so that the model cannot effectively learn a feature representation of the data and its clustering results are inaccurate. The clustering performance of the model is thereby greatly improved.

Description

Data classification method and system based on depth self-expression local block learning
Technical Field
The invention relates to the technical field of data classification, in particular to a data classification method and system based on deep self-expression local block learning.
Background
Image classification (Image Classification) is the problem of categorizing image content: a classification model quantitatively analyzes images and assigns the images, or regions within them, to a number of categories in place of human visual judgment. With the continuous development of information collection technology, the scale of collected data keeps growing. For image data of high dimensionality and complex structure, existing deep-learning-based subspace clustering methods combine deep learning with subspace clustering to adaptively extract features of the image data and perform subspace reconstruction, giving them a great advantage in processing high-dimensional, complex image data.
However, current deep-learning-based subspace methods use only the feature information of the deepest latent space and do not adequately balance the detail features and abstract features of the image samples, so useful information contained in the original image data may be lost. In addition, outliers present in the data negatively affect the generation of a clean block-diagonal self-representation coefficient matrix, harming self-expression learning. These problems prevent the model from effectively learning a feature representation of the image data, producing inaccurate image classification results.
In view of the foregoing, the present application provides a data classification method and system based on deep self-expression local block learning to solve the above problems.
Disclosure of Invention
The purpose of the application is to provide a data classification method and system based on depth self-expression local block learning, addressing two problems in image classification: the influence of outliers on self-expression learning, and the failure of existing deep-learning-based subspace methods to adequately balance the detail features and abstract features of image samples, which prevents the model from effectively learning the feature representation of the image data and makes the image classification results inaccurate. The method combines global structural information with local structural information, jointly learns and optimizes the first matrices, and performs cluster analysis based on the second matrix to obtain the final classification result.
The application firstly provides a data classification method based on depth self-expression local block learning, which comprises the following steps: s1, acquiring image input data, and sequentially passing the image input data through a plurality of encoders and a plurality of corresponding decoders to obtain a plurality of first characteristics from shallow to deep in a potential space; s2, inputting a plurality of first features into a self-expression layer to learn the similarity relation among the plurality of first features, and obtaining a first matrix of each first feature; s3, stacking the first matrixes, and performing consensus representation learning on the first matrixes by using a convolution network to obtain second matrixes; s4, dividing the second matrix into a plurality of block matrixes, generating weights of the block matrixes according to the distribution condition of similar samples in the block matrixes, constraining each block matrix through the corresponding weights, recombining the constrained block matrixes into a new second matrix for updating the second matrix, and optimizing network parameters by taking the minimum total loss function as a target, and updating the second matrix again to obtain a final second matrix; s5, constructing a third matrix through the final second matrix, and executing a spectral clustering algorithm on the third matrix to obtain a data classification result;
wherein, step S4 includes: s41, cutting the second matrix in different sizes to obtain a plurality of block matrixes; s42, carrying out full-average pooling and normalization processing on each block matrix to obtain a weight corresponding to each block matrix; s43, multiplying the obtained weight with the blocking matrixes, and updating each blocking matrix; s44, recombining each updated blocking matrix to update the second matrix, and optimizing network parameters through the overall loss function to obtain a final second matrix.
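Under stated assumptions, the S1–S5 flow above can be sketched as a plain orchestration function. Every component name below is a hypothetical stand-in introduced for illustration (the real encoders, self-expression layer, consensus fusion, block weighting and spectral clustering are learned networks or full algorithms, not simple callables):

```python
# Minimal sketch of the S1-S5 pipeline; all components are injected
# stand-ins, not the patent's actual learned modules.
def classify(X, encoders, self_expr, fuse, weight_blocks,
             build_affinity, spectral_cluster):
    Zs = [enc(X) for enc in encoders]     # S1: shallow-to-deep first features
    Cs = [self_expr(Z) for Z in Zs]       # S2: one first matrix per feature
    C = fuse(Cs)                          # S3: consensus second matrix
    C = weight_blocks(C)                  # S4: block-wise self-weighting
    A = build_affinity(C)                 # S5: third (affinity) matrix ...
    return spectral_cluster(A)            # ... then spectral clustering
```

The point of the sketch is only the order of the five steps; each stage is described in detail in the embodiments.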
By adopting the technical scheme, the first features in the image data are extracted from shallow to deep through a plurality of encoders, and the first features are fused to obtain the second matrix. The multiple encoders extract detailed features and abstract features of the image data from shallow to deep, and the obtained second matrix fully learns global structure information and local structure information of the image data, so that the original view is more accurately and comprehensively represented. In addition, the method and the device perform block weighted learning on the second matrix, perform self-weighting according to the distribution condition of the same type of samples in the block matrix, reduce the influence of abnormal data on the diagonal structure of the second matrix block, improve the clustering performance of the model and enable the image classification result to be more accurate.
In one possible embodiment, in step S2, the plurality of first features are learned with the objective of minimizing the loss function of the self-expression layer, thereby learning the similarity relations among the plurality of first features. The loss function of the self-expression layer is expressed as follows:

$$L_{se} = \sum_{i=1}^{V}\left(\|C_i\|_p + \frac{1}{2}\|Z_i - Z_iC_i\|_F^2\right), \quad \text{s.t.}\ \mathrm{diag}(C_i) = 0,$$

where $L_{se}$ denotes the loss function of the self-expression layer, $\mathrm{diag}(C_i)=0$ is the mandatory constraint, $\|\cdot\|_p$ denotes an arbitrary regularization norm, and $\|\cdot\|_F$ denotes the Frobenius norm; the first term $\|C_i\|_p$ is the regularization term, $C_i$ denotes the first matrix of the $i$-th first feature, and $V$ denotes the total number of first features; the second term $\|Z_i - Z_iC_i\|_F^2$ represents the network loss generated when the first feature $Z_i$ passes through the self-expression layer.
In one possible implementation, the overall loss function is expressed as follows:

$$L = \frac{1}{2}\|X - \hat{X}\|_F^2 + \lambda_1\sum_{i=1}^{V}\|C_i\|_p + \frac{\lambda_2}{2}\sum_{i=1}^{V}\|Z_i - Z_iC_i\|_F^2, \quad \text{s.t.}\ \mathrm{diag}(C_i) = 0,$$

where $L$ denotes the overall loss function, $\mathrm{diag}(C_i)=0$ is the mandatory constraint, $\|\cdot\|_p$ denotes an arbitrary regularization norm, $\|\cdot\|_F$ denotes the Frobenius norm, and $\lambda_1$ and $\lambda_2$ are trade-off parameters controlling the importance of the different terms; the first term represents the reconstruction loss of the image input data, and the second and third terms represent the regularization loss and the loss of the self-expression layer, respectively; $X$ denotes the image input data, $\hat{X}$ the reconstructed image input data, $C_i$ the first matrix of the $i$-th first feature, $V$ the total number of first features, and $Z_i$ the $i$-th first feature.
In one possible embodiment, the third matrix is expressed as follows:

$$A = \frac{|C| + |C|^{\mathsf{T}}}{2},$$

where $A$ denotes the third matrix and $C$ is the final second matrix.
In one possible implementation manner, S1, image input data is acquired, and the image input data sequentially passes through a plurality of encoders and a plurality of corresponding decoders, so as to obtain a plurality of first features of the image input data from shallow to deep in a potential space, wherein the number of the encoders and the decoders is three.
The application also provides a data classification system based on depth self-expression local block learning, comprising: the feature extraction module is used for acquiring image input data, and sequentially passing the image input data through a plurality of encoders and a plurality of corresponding decoders to obtain a plurality of first features from shallow to deep in a potential space; the self-expression module is used for inputting a plurality of first features into the self-expression layer to learn the similarity relation among the plurality of first features, so as to obtain a first matrix of each first feature; the consensus representation learning module is used for stacking all the first matrixes and carrying out consensus representation learning on all the first matrixes by utilizing a convolution network to obtain second matrixes; the self-expression block weighting module is used for dividing the second matrix into a plurality of block matrixes, generating the weight of the block matrixes according to the distribution condition of the similar samples in the block matrixes, restraining each block matrix by the corresponding weight, recombining the restrained block matrixes into a new second matrix, updating the second matrix, optimizing network parameters by taking the minimum total loss function as a target, and updating the second matrix again to obtain a final second matrix; the data classification module is used for constructing a third matrix through the final second matrix, and performing a spectral clustering algorithm on the third matrix to obtain a data classification result;
wherein the self-representative block weighting module comprises: the matrix segmentation module is used for cutting the second matrix in different sizes to obtain a plurality of block matrixes; the weight calculation module is used for carrying out full-average pooling and normalization processing on each block matrix to obtain a weight corresponding to each block matrix; the block matrix updating module is used for multiplying the obtained weight with the block matrix and updating each block matrix; and the matrix updating module is used for recombining each updated block matrix to update the second matrix, and optimizing network parameters through the overall loss function to obtain a final second matrix.
In one possible implementation, in the self-expression module, the plurality of first features are learned with the objective of minimizing the loss function of the self-expression layer, thereby learning the similarity relations among the plurality of first features. The loss function of the self-expression layer is expressed as follows:

$$L_{se} = \sum_{i=1}^{V}\left(\|C_i\|_p + \frac{1}{2}\|Z_i - Z_iC_i\|_F^2\right), \quad \text{s.t.}\ \mathrm{diag}(C_i) = 0,$$

where $L_{se}$ denotes the loss function of the self-expression layer, $\mathrm{diag}(C_i)=0$ is the mandatory constraint, $\|\cdot\|_p$ denotes an arbitrary regularization norm, and $\|\cdot\|_F$ denotes the Frobenius norm; the first term $\|C_i\|_p$ is the regularization term, $C_i$ denotes the first matrix of the $i$-th first feature, and $V$ denotes the total number of first features; the second term $\|Z_i - Z_iC_i\|_F^2$ represents the network loss generated when the first feature $Z_i$ passes through the self-expression layer.
In one possible implementation, the overall loss function in the self-expression block weighting module is expressed as follows:

$$L = \frac{1}{2}\|X - \hat{X}\|_F^2 + \lambda_1\sum_{i=1}^{V}\|C_i\|_p + \frac{\lambda_2}{2}\sum_{i=1}^{V}\|Z_i - Z_iC_i\|_F^2, \quad \text{s.t.}\ \mathrm{diag}(C_i) = 0,$$

where $L$ denotes the overall loss function, $\mathrm{diag}(C_i)=0$ is the mandatory constraint, $\|\cdot\|_p$ denotes an arbitrary regularization norm, $\|\cdot\|_F$ denotes the Frobenius norm, and $\lambda_1$ and $\lambda_2$ are trade-off parameters controlling the importance of the different terms; the first term represents the reconstruction loss of the image input data, and the second and third terms represent the regularization loss and the loss of the self-expression layer, respectively; $X$ denotes the image input data, $\hat{X}$ the reconstructed image input data, $C_i$ the first matrix of the $i$-th first feature, $V$ the total number of first features, and $Z_i$ the $i$-th first feature.
In one possible embodiment, the third matrix in the data classification module is expressed as follows:

$$A = \frac{|C| + |C|^{\mathsf{T}}}{2},$$

where $A$ denotes the third matrix and $C$ is the final second matrix.
In one possible implementation, the image input data is acquired, and the image input data sequentially passes through a plurality of encoders and a plurality of corresponding decoders, so as to obtain a plurality of first features of the image input data from shallow to deep in the potential space, wherein the encoders and the decoders are three.
Compared with the prior art, the application has the following beneficial effects: according to the scheme, the global structure information and the local structure information of the image data are learned through the self-representation weighting and the consensus representation learning of the block matrix, so that the similarity of the same type of samples in the global and local ranges is enhanced, and the accuracy of the same type of samples divided into the same subspace is improved. The clustering performance of the classification model is greatly improved, and the image data classification accuracy is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention. In the drawings:
FIG. 1 is a flow chart of a data classification method based on depth self-expression local block learning provided in embodiment 1 of the present invention;
fig. 2 is a network structure diagram for implementing a data classification method according to embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of learning a consensus representation provided in embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of self-representation weighting of a blocking matrix according to embodiment 1 of the present invention;
fig. 5 is a schematic structural diagram of a data classification system based on depth self-expression local block learning according to embodiment 2 of the present invention.
Description of the embodiments
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the examples and the accompanying drawings. The exemplary embodiments of the present application and their descriptions are only for explaining the present application and do not limit it.
Embodiment 1 provides a data classification method based on depth self-representation local block learning. Referring to fig. 1, fig. 1 is a flowchart of a data classification method based on depth self-expression local block learning, and the method includes: s1, acquiring image input data, and sequentially passing the image input data through a plurality of encoders and a plurality of corresponding decoders to obtain a plurality of first characteristics from shallow to deep in a potential space; s2, inputting a plurality of first features into a self-expression layer to learn the similarity relation among the plurality of first features, and obtaining a first matrix of each first feature; s3, stacking the first matrixes, and performing consensus representation learning on the first matrixes by using a convolution network to obtain second matrixes; s4, dividing the second matrix into a plurality of block matrixes, generating weights of the block matrixes according to the distribution condition of similar samples in the block matrixes, constraining each block matrix through the corresponding weights, recombining the constrained block matrixes into a new second matrix for updating the second matrix, and optimizing network parameters by taking the minimum total loss function as a target, and updating the second matrix again to obtain a final second matrix; s5, constructing a third matrix through the final second matrix, and executing a spectral clustering algorithm on the third matrix to obtain a data classification result; wherein, step S4 includes: s41, cutting the second matrix in different sizes to obtain a plurality of block matrixes; s42, carrying out full-average pooling and normalization processing on each block matrix to obtain a weight corresponding to each block matrix; s43, multiplying the obtained weight with the blocking matrixes, and updating each blocking matrix; s44, recombining each updated blocking matrix to update the second matrix, and optimizing 
network parameters through the overall loss function to obtain a final second matrix.
It should be noted that the first feature may be understood as a potential feature, the first matrix may be understood as a self-representative coefficient matrix, the second matrix may be understood as a common self-representative coefficient matrix, and the third matrix may be understood as an affinity matrix.
Specifically, referring to fig. 2, fig. 2 is a network structure diagram for implementing the data classification method. In fig. 2, $X$ represents the image input data, i.e., the matrix obtained by digitizing an input image; $\hat{X}$ represents the reconstruction matrix; $Z_i$ represents a first feature, i.e., a matrix of data points mapped nonlinearly from the high-dimensional space into the low-dimensional latent space; and $C_i$ represents the first matrix, i.e., the weight matrix of the data at the self-expression layer.
As can be seen in connection with fig. 2, step S1 is feature extraction by the encoders and decoders: the plurality of encoders perform a dimension-reduction operation on the image input data $X$ and extract its first features, i.e., the representations $Z_i$ of the dimension-reduced image input data in the latent space; in addition, in order to restrict the feature space to a reasonable range and to preserve important feature information of the image input data, each decoder adopts a structure symmetric to its encoder. Step S2 self-represents each first feature: the similarity relations among the plurality of first features are learned in an unsupervised manner to obtain a first matrix for each first feature. Step S3 is consensus representation learning (CRL): as shown in fig. 3, to perform self-expression learning on the image input data from both a global and a local perspective, the method integrates the structural information of the plurality of first matrices to learn one second matrix (CCSM, consensus coefficient self-representation matrix). Step S4 is block-matrix self-representation weighting (self-represented block weighting, SBW): as shown in fig. 4, since abnormal data inevitably affect the clustering result, the second matrix is first partitioned, and different weights are then assigned according to the distribution of same-class samples within each block matrix; the obtained second matrix is locally constrained using the self-expression block weighting and updated; finally, the network parameters are trained by minimizing the overall loss function, and iterative updating yields the final second matrix. Step S5 constructs a third matrix from the final second matrix and clusters the obtained third matrix with a spectral clustering algorithm to obtain the final image classification result.
It should be noted that the method has two main improvements. The first is consensus representation learning: first features of the image data are extracted from shallow to deep by a plurality of encoders and fused to obtain the second matrix. The encoders extract both detail features and abstract features of the image data, so the resulting second matrix fully learns the global and local structural information of the image data and represents the original view more accurately and comprehensively. The second is block-matrix self-representation weighting: the scheme performs block-wise weighted learning on the second matrix, self-weighting according to the distribution of same-class samples within each block matrix, which reduces the influence of abnormal data on the block-diagonal structure of the second matrix, improves the clustering performance of the model, and improves the accuracy of image data classification.
In one possible implementation, taking three encoders as an example, consensus representation learning proceeds as follows. First, the image input data $X$ passes through the three encoders in turn to obtain the first features of $X$ in the latent space. Next, taking a fully connected layer as an example of the self-expression layer, each first feature $Z_i$ is linearly mapped to obtain the first matrix of each first feature. Each first matrix is then stacked; note that during stacking the extra dimensions are zero-padded so that the multiple first matrices $C_i$ can be stacked in order, ensuring that the structural information of the reconstructed data corresponds to the original data. Finally, to compensate for important feature information lost during self-expression learning, convolution is used to perform consensus representation learning on the stacked first matrices to obtain the second matrix.
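The fusion step described above can be sketched in NumPy. This is a hedged illustration, not the patent's learned network: stacking the $V$ first matrices along a channel axis and applying a 1×1 convolution across that axis reduces, for an $n \times n$ grid, to a weighted sum of the matrices; the `kernel` argument stands in for the learned convolution weights:

```python
import numpy as np

def consensus_fuse(coef_mats, kernel=None):
    """Fuse V first matrices into one second matrix via a 1x1-conv-style
    weighted sum over the channel (matrix) axis."""
    stacked = np.stack(coef_mats, axis=0)      # shape (V, n, n)
    V = stacked.shape[0]
    if kernel is None:
        kernel = np.full(V, 1.0 / V)           # uniform initialization
    # weighted sum over channels == 1x1 convolution with V input channels
    return np.tensordot(kernel, stacked, axes=(0, 0))
```

In the patent, the convolution weights are trained jointly with the rest of the network rather than fixed as here.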
Block-matrix self-representation weighting: the second matrix is first cut into blocks of different sizes. Global average pooling (GAP) and normalization are then performed on each block matrix; that is, each block matrix is assigned a weight according to the number of same-class samples it contains, and the more same-class samples a block matrix contains, the larger the weight it is assigned. Finally, the obtained weights are multiplied with the block matrices to strengthen the correlation of same-class samples. After the self-weighted block matrices are obtained, all block matrices are recombined into a second matrix to update and optimize the original second matrix, and the network parameters are optimized through the overall loss function to obtain the final second matrix.
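A minimal sketch of steps S41–S44, under the simplifying assumption of a uniform block size that divides the matrix dimension (the patent allows blocks of different sizes, and its weights come from a trained pooling-plus-normalization branch rather than this fixed rule):

```python
import numpy as np

def self_weight_blocks(C, block_size):
    """Cut C into blocks, weight each block by normalized global average
    pooling of its absolute entries, and reassemble (S41-S44 sketch)."""
    n = C.shape[0]
    k = n // block_size                        # blocks per side
    # S41: cut into a k x k grid of (block_size x block_size) blocks
    blocks = C.reshape(k, block_size, k, block_size).swapaxes(1, 2)
    # S42: global average pooling per block, then normalize into weights
    pooled = np.abs(blocks).mean(axis=(2, 3))
    weights = pooled / (pooled.sum() + 1e-12)
    # S43: rescale each block by its weight
    weighted = blocks * weights[:, :, None, None]
    # S44: recombine the blocks into the updated second matrix
    return weighted.swapaxes(1, 2).reshape(n, n)
```

On a matrix whose large entries sit in the diagonal blocks, this scheme boosts those blocks relative to off-diagonal ones, which is the intended effect of strengthening same-class correlations.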
The method performs self-expression learning with the objective of minimizing the network loss function. Specifically, self-expression is the most critical property in subspace clustering: its purpose is to allow each data point in a subspace to be represented linearly by the other data points of that subspace. Assume the image input data $X$ passes through the $i$-th encoder to obtain the first feature representation $Z_i$ of the data; $Z_i$ then passes through the decoder to obtain the reconstructed image input data $\hat{X}$. The reconstruction loss of the model is expressed as follows:

$$L_{rec} = \frac{1}{2}\|X - \hat{X}\|_F^2. \tag{1}$$
After the encoder performs dimension reduction and feature extraction on the image input data, the latent-space feature representations $Z_i$ of the data are obtained. After $Z_i$ passes through the self-expression layer (a fully connected layer, for example), a matrix representation $C_i$ of the features of each self-expression layer is obtained. The loss function of the model's self-expression layer is expressed as follows:

$$L_{se} = \sum_{i=1}^{V}\left(\|C_i\|_p + \frac{1}{2}\|Z_i - Z_iC_i\|_F^2\right), \quad \text{s.t.}\ \mathrm{diag}(C_i) = 0. \tag{2}$$

Here the constraint $\mathrm{diag}(C_i)=0$ avoids trivial solutions, $\|\cdot\|_p$ denotes an arbitrary regularization norm, and $\|\cdot\|_F$ denotes the Frobenius norm. The first term $\|C_i\|_p$ is a regularization term whose purpose is to obtain parameter values that minimize the model and to reduce model complexity. The second term $\|Z_i - Z_iC_i\|_F^2$ represents the network loss generated when the first feature representation matrix $Z_i$ passes through the self-expression layer.
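To make the self-expression property behind eq. (2) concrete, here is a shallow, closed-form analogue of the learned self-expression layer (an assumption for illustration, not the patent's layer, which learns $C_i$ by gradient descent): with the Frobenius norm as regularizer, $\min_C \|Z - ZC\|_F^2 + \lambda\|C\|_F^2$ has the ridge-regression solution $C = (Z^{\mathsf T}Z + \lambda I)^{-1} Z^{\mathsf T}Z$, and zeroing the diagonal afterwards approximates the $\mathrm{diag}(C)=0$ constraint:

```python
import numpy as np

def self_expression_coeffs(Z, lam=0.1):
    """Closed-form self-expression coefficients for Z with one data point
    per column (shape d x n); approximates diag(C)=0 post hoc."""
    n = Z.shape[1]
    G = Z.T @ Z                                   # Gram matrix, n x n
    C = np.linalg.solve(G + lam * np.eye(n), G)   # ridge solution
    np.fill_diagonal(C, 0.0)                      # approximate diag(C) = 0
    return C
```

Points lying in the same subspace receive large mutual coefficients, which is exactly the similarity structure the first matrices are meant to capture.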
In summary, the loss function of the proposed network consists mainly of the reconstruction loss, the regularization loss and the self-expression loss of the data; the overall loss function of the model is expressed as follows:

$$L = \frac{1}{2}\|X - \hat{X}\|_F^2 + \lambda_1\sum_{i=1}^{V}\|C_i\|_p + \frac{\lambda_2}{2}\sum_{i=1}^{V}\|Z_i - Z_iC_i\|_F^2, \quad \text{s.t.}\ \mathrm{diag}(C_i) = 0, \tag{3}$$

where $\lambda_1$ and $\lambda_2$ are trade-off parameters controlling the importance of the different terms; the first term corresponds to the reconstruction loss of the image input data, and the second and third terms correspond to the regularization loss and the self-expression-layer loss, respectively.
A third matrix $A$ is then constructed from the final second matrix $C$, i.e. $A = \frac{1}{2}\left(|C| + |C|^{T}\right)$, and finally a spectral clustering algorithm is performed on $A$ to obtain the clustering result.
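A minimal sketch of this final step, assuming the common symmetrization $A = (|C| + |C|^T)/2$ used in self-expression-based subspace clustering, and using a bare-bones normalized-Laplacian embedding in place of a full spectral clustering pipeline (a k-means on the embedding rows would give the final labels):

```python
import numpy as np

def affinity_from_C(C):
    """Build the third matrix A from the final second matrix C:
    A = (|C| + |C|^T) / 2, a symmetric non-negative affinity."""
    return (np.abs(C) + np.abs(C).T) / 2

def spectral_embed(A, k):
    """Minimal spectral step: eigenvectors of the normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2} for the k smallest eigenvalues."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L)       # eigh returns ascending eigenvalues
    return vecs[:, :k]
```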
It can be understood that the method learns the global and local structure information of the data through self-representation weighting of the block matrices and consensus representation learning, so that the similarity of same-class samples is strengthened in both the global and local ranges, the accuracy of dividing same-class samples into the same subspace is improved, and the clustering performance of the model is greatly improved.
Embodiment 2 provides a data classification system based on depth self-representation local block learning. Referring to fig. 5, fig. 5 is a schematic structural diagram of the data classification system based on depth self-representation local block learning, and the system corresponds to the method one by one. The data classification system comprises:
the feature extraction module, used for acquiring image input data and sequentially passing the image input data through a plurality of encoders and a corresponding plurality of decoders to obtain a plurality of first features from shallow to deep in the latent space;
the self-expression module, used for inputting the plurality of first features into the self-expression layer to learn the similarity relations among the plurality of first features, obtaining a first matrix for each first feature;
the consensus representation learning module, used for stacking all the first matrices and performing consensus representation learning on all the first matrices with a convolutional network to obtain the second matrix;
the self-expression block weighting module, used for dividing the second matrix into a plurality of block matrices, generating the weight of each block matrix according to the distribution of same-class samples within it, constraining each block matrix with its corresponding weight, recombining the constrained block matrices into a new second matrix to update the second matrix, then optimizing the network parameters with the goal of minimizing the overall loss function, and updating the second matrix again to obtain the final second matrix;
the data classification module, used for constructing a third matrix from the final second matrix and performing a spectral clustering algorithm on the third matrix to obtain the data classification result.
The self-expression block weighting module comprises:
the matrix segmentation module, used for cutting the second matrix into different sizes to obtain a plurality of block matrices;
the weight calculation module, used for performing global average pooling and normalization on each block matrix to obtain the weight corresponding to each block matrix;
the block matrix updating module, used for multiplying the obtained weights with the block matrices and updating each block matrix;
the matrix updating module, used for recombining each updated block matrix to update the second matrix, and optimizing the network parameters through the overall loss function to obtain the final second matrix.
It should be noted that the present system designs a self-expression block weighting module, which performs a self-weighting process on the learned matrices. It is worth noting that the system does not simply weight each first matrix; instead, it cuts the learned second matrix into independent block matrices and performs self-weighting according to the distribution of same-class samples within each block matrix, so as to reduce the influence of abnormal data on the block-diagonal structure of the second matrix and thereby improve the clustering performance of the model. In addition, the system designs a consensus representation learning module, which uses the encoders to extract features at different layers of the data and learns the second matrix by combining the feature information of the different self-expression layers. This matrix can represent the original view more accurately and more comprehensively.
In one possible implementation, in the self-expression module, the plurality of first features are learned with the goal of minimizing the loss function of the self-expression layer, and the similarity relations among the plurality of first features are learned, where the loss function of the self-expression layer is expressed as follows:

$L_{se} = \sum_{i=1}^{V}\left\| C_i \right\|_p + \frac{1}{2}\sum_{i=1}^{V}\left\| Z_i - Z_iC_i \right\|_F^2, \quad \mathrm{s.t.}\ \mathrm{diag}(C_i)=0,$

wherein $L_{se}$ represents the loss function of the self-expression layer, $\mathrm{diag}(C_i)=0$ represents the mandatory constraint term, $\|\cdot\|_p$ represents an arbitrary regularization norm, $\|\cdot\|_F$ represents the F-norm, the first term $\|C_i\|_p$ in the formula is the regularization term, $C_i$ denotes the first matrix of the $i$-th first feature, $V$ represents the total number of first features, and the second term $\|Z_i - Z_iC_i\|_F^2$ in the formula represents the network loss generated when the first feature $Z_i$ passes through the self-expression layer.
In one possible implementation, the overall loss function in the self-expression block weighting module is expressed as follows:

$L = \frac{1}{2}\left\| X - \hat{X} \right\|_F^2 + \lambda_1\sum_{i=1}^{V}\left\| C_i \right\|_p + \frac{\lambda_2}{2}\sum_{i=1}^{V}\left\| Z_i - Z_iC_i \right\|_F^2, \quad \mathrm{s.t.}\ \mathrm{diag}(C_i)=0,$

wherein $L$ represents the overall loss function, $\mathrm{diag}(C_i)=0$ represents the mandatory constraint term, $\|\cdot\|_p$ represents an arbitrary regularization norm, $\|\cdot\|_F$ represents the F-norm, $\lambda_1$ and $\lambda_2$ represent trade-off parameters for the degree of importance of the different terms, the first term in the formula represents the reconstruction loss of the image input data, the second and third terms in the formula represent the regularization loss and the self-expression layer loss respectively, $X$ represents the image input data, $\hat{X}$ represents the reconstructed image input data, $C_i$ denotes the first matrix of the $i$-th first feature, $V$ represents the total number of first features, and $Z_i$ represents a first feature.
In one possible embodiment, the function of the third matrix in the data classification module is expressed as follows:

$A = \frac{1}{2}\left(|C| + |C|^{T}\right),$

wherein $A$ is the third matrix and $C$ is the final second matrix.
In one possible implementation, the image input data is acquired and sequentially passed through a plurality of encoders and a corresponding plurality of decoders to obtain a plurality of first features of the image input data from shallow to deep in the latent space, wherein the number of encoders and decoders is three.
Embodiment 3 provides a performance verification test of the method. To verify the clustering performance of the method, the algorithm flow shown in Table 1 is executed on five benchmark datasets, namely Extended YaleB, ORL, Umist, COIL20 and MNIST; detailed information on the datasets is given in Table 2. The method is hereinafter referred to as SLBL.
As can be seen from Table 3, SLBL is compared with a number of traditional and deep subspace clustering methods. The traditional subspace clustering methods mainly include SSC, LRR, KSSC and EDSC. The deep-learning-based subspace clustering methods mainly include GSA, DSC, RGRL, DLRSC, LDLRSC and TAGCSC. It should be noted that, among the deep-learning-based methods, DSC and RGRL each have two variants: the depth subspace clustering method with the L1 norm (DSC-L1) and with the L2 norm (DSC-L2), and relation-guided representation learning with the L1 norm (RGRL-L1) and with the L2 norm (RGRL-L2). Two general evaluation indexes are used to evaluate clustering performance, namely accuracy (ACC) and normalized mutual information (NMI). The method achieves better results on both the ACC and NMI indexes (bolded in Table 3). The detailed experimental results and analyses are as follows:
(1) Comparative experiments on five data sets:
Extended YaleB is a face image dataset: 38 face subjects were each photographed under 64 illumination conditions, yielding 2432 images in total. The ORL dataset has 40 face subjects, each with 10 different facial images, for a total of 400 images. The COIL20 dataset contains 20 objects, and 72 images were taken of each object at 5° intervals, for a total of 1440 images. The MNIST dataset contains 10 handwritten digits with 100 photographs each, for a total of 1000 images. The Umist dataset contains 20 face subjects, each of which provides 24 images of different poses, for a total of 480 images.
Compared with the classical DSC-Net, the method improves ACC by 13.50%, 5.56%, 26.88%, 1.93% and 22.40% on the ORL, COIL20, Umist, Extended YaleB and MNIST datasets respectively, and improves NMI by 9.21%, 4.58%, 23.38%, 1.94% and 22.40% respectively. Compared with TAGCSC, ACC is improved by 13.25%, 0.21%, 9.17%, 6.95% and 24.60% on the ORL, COIL20, Umist, Extended YaleB and MNIST datasets respectively. NMI is improved by 6.71%, 5.01%, 4.93% and 27.10% on the ORL, Umist, Extended YaleB and MNIST datasets respectively, and is reduced by 0.58% on the COIL20 dataset. This is because the degree of concentration of the different clusters produced varies, that is, one cluster is very dense while the others are very sparse, so the mutual information between them may be reduced. This verifies that SLBL greatly improves the clustering performance of the model.
(2) Second matrix CCSM partial cut contrast experiment:
The second matrix partial-cut contrast experiment is validated on the five datasets ORL, COIL20, Umist, Extended YaleB and MNIST, and the second matrix is divided into blocks of different shapes to investigate the effect of the local structural information in the matrix on network performance. Since the datasets differ in size, when the second matrix is partially cut, each matrix is divided into three different block sizes according to the size of each dataset; the experimental results are shown in Table 4. In this experiment, the method was evaluated using the general evaluation indexes ACC and NMI, and the best results are bolded. This verifies that a proper cut of the second matrix can indeed further enhance the clustering performance of the model.
(3) Second matrix CCSM comparative experiment:
To demonstrate that the global structure information obtained when integrating the feature information learned by different self-expression layers differs, different combinations of the self-expression layers ($C_1$, $C_2$, $C_3$) were evaluated using the evaluation indexes ACC and NMI. The experimental results are shown in Table 5, where $C_1$, $C_2$ and $C_3$ respectively denote the matrices formed from the feature coefficients sequentially extracted from the image input data by the three self-encoders. Five comparative experiments were designed on the ORL and COIL20 datasets, and the best experimental results are bolded. From the experimental results on the ORL and COIL20 datasets, it can be seen that the second matrix learned by combining $C_1$, $C_2$ and $C_3$ is optimal, which effectively improves the clustering performance of the model.
Table 1 specific algorithm flow
Table 2 dataset
Table 3 clustering accuracy vs (%)
TABLE 4 comparison of accuracy of second matrix partial cut experiments (%)
Table 5 second matrix accuracy contrast (%)
Note that: (),/>Represents the number of channels>Representing the size of the convolution kernel.
Taking ORL as an example, the flow at the time of model classification is as follows:
1. the input samples (number × channel × height × width) are subjected to a convolution operation with padding 2, input channel number 1, output channel number 30 and stride 2, and then to a ReLU activation function, giving the first output image;
2. the result is subjected to a convolution operation with padding 2, input channel number 30, output channel number 30 and stride 2, and then to a ReLU activation function, giving the second output image;
3. the result is subjected to a convolution operation with padding 2, input channel number 30, output channel number 50 and stride 2, and then to a ReLU activation function, giving the third output image;
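Each of the three stride-2 convolutions above roughly halves the spatial resolution. As a sketch of the size arithmetic (the exact input and kernel sizes are not reproduced in this text, so a 32×32 input and a 5×5 kernel are illustrative assumptions), the standard convolution size formula gives 32 → 16 → 8 → 4:

```python
def conv_out(size, kernel, stride=2, padding=2):
    """Output spatial size of one convolution step:
    floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1
```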
4. the defined SelfExpression class is invoked on the first latent feature to perform self-expression learning, obtaining the reconstructed first feature and the updated first self-expression matrix;
5. the defined SelfExpression class is invoked on the second latent feature to perform self-expression learning, obtaining the reconstructed second feature and the updated second self-expression matrix;
6. the defined SelfExpression class is invoked on the third latent feature to perform self-expression learning, obtaining the reconstructed third feature and the updated third self-expression matrix;
7. the shape of the reconstructed feature is adjusted using view(), and a deconvolution operation with input channel number 50, output channel number 30, stride 2 and padding 2 is performed on it, followed by a ReLU activation function, giving the output image;
8. a deconvolution operation with input channel number 30, output channel number 30, stride 2 and padding 2 is performed on it, followed by a ReLU activation function;
9. a deconvolution operation with input channel number 30, output channel number 1, stride 2 and padding 2 is performed on it, followed by a ReLU activation function, giving the reconstructed input image;
10. the CRL operation is performed on the three self-expression matrices, and the CCSM is output;
11. the SBW operation is performed on the output CCSM;
12. spectral clustering is performed on the result, and the prediction result is output.
The flow of consensus representation learning (CRL) is as follows:
1. the three input self-expression matrices are stacked, and the stacked CCSM is output;
2. its dimension is adjusted using unsqueeze();
3. a convolution operation with stride 1 and padding 1 is performed on it, and the fused CCSM is output.
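The stack-and-convolve fusion above can be sketched as one multi-channel convolution over the stacked matrices. The 3×3 kernel size (consistent with padding 1 preserving the spatial size) and the kernel values are illustrative assumptions; in the network they would be learned:

```python
import numpy as np

def crl_fuse(Cs, kernel):
    """Sketch of consensus representation learning (CRL): stack three
    self-expression matrices into a (3, n, n) tensor and fuse them with a
    single 3x3 convolution (stride 1, padding 1, 3 -> 1 channels).
    `kernel` has shape (3, 3, 3): (in_channels, kh, kw)."""
    X = np.stack(Cs)                              # (3, n, n)
    n = X.shape[1]
    Xp = np.pad(X, ((0, 0), (1, 1), (1, 1)))      # spatial padding = 1
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = np.sum(Xp[:, i:i + 3, j:j + 3] * kernel)
    return out
```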
The flow of the self-representation block weighting (SBW) is as follows:
1. the shape of the input CCSM is adjusted;
2. using the SE attention mechanism, average pooling is performed on it, the shape is adjusted with the view() operation, and the reduction and weight-generation operations are finally realized through two fully connected layers using a ReLU activation function and a Sigmoid activation function respectively; the weighted CCSM is output.
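The SE-style weight generation above (pooling, reducing fully connected layer + ReLU, expanding fully connected layer + Sigmoid) can be sketched as follows; the weight-matrix shapes `W1`, `W2` are illustrative assumptions and would be learned in practice:

```python
import numpy as np

def se_block_weights(blocks, W1, W2):
    """Sketch of SE-style weight generation for the block matrices:
    global average pooling per block, FC + ReLU to reduce, FC + Sigmoid
    to produce one weight in (0, 1) per block."""
    gap = np.array([b.mean() for b in blocks])    # (num_blocks,) pooled values
    hidden = np.maximum(W1 @ gap, 0.0)            # reducing FC + ReLU
    logits = W2 @ hidden                          # expanding FC
    return 1.0 / (1.0 + np.exp(-logits))          # Sigmoid -> block weights
```

Each returned weight would then multiply its block before the blocks are recombined into the updated CCSM.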
The foregoing description of the embodiments is provided to illustrate the general principles of the invention and is not intended to limit the invention to the particular embodiments or to limit its scope; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. A data classification method based on depth self-representation local block learning, comprising:
s1, acquiring image input data, and sequentially passing the image input data through a plurality of encoders and a plurality of corresponding decoders to obtain a plurality of first characteristics from shallow to deep in a potential space;
s2, inputting a plurality of first features into a self-expression layer to learn the similarity relation among the plurality of first features, and obtaining a first matrix of each first feature;
s3, stacking the first matrixes, and performing consensus representation learning on the first matrixes by using a convolution network to obtain second matrixes;
s4, dividing the second matrix into a plurality of block matrixes, generating weights of the block matrixes according to the distribution condition of similar samples in the block matrixes, constraining each block matrix through the corresponding weights, recombining the constrained block matrixes into a new second matrix for updating the second matrix, and optimizing network parameters by taking the minimum total loss function as a target, and updating the second matrix again to obtain a final second matrix;
s5, constructing a third matrix through the final second matrix, and executing a spectral clustering algorithm on the third matrix to obtain a data classification result;
wherein, step S4 includes:
s41, cutting the second matrix in different sizes to obtain a plurality of block matrixes;
s42, carrying out full-average pooling and normalization processing on each block matrix to obtain a weight corresponding to each block matrix;
s43, multiplying the obtained weight with the blocking matrixes, and updating each blocking matrix;
s44, recombining each updated blocking matrix to update the second matrix, and optimizing network parameters through the overall loss function to obtain a final second matrix.
2. The method for classifying data based on depth self-expression local block learning according to claim 1, wherein in step S2: the plurality of first features are learned with the goal of minimizing the loss function of the self-expression layer, the similarity relations among the plurality of first features are learned, and the loss function of the self-expression layer is expressed as follows:

$L_{se} = \sum_{i=1}^{V}\left\| C_i \right\|_p + \frac{1}{2}\sum_{i=1}^{V}\left\| Z_i - Z_iC_i \right\|_F^2, \quad \mathrm{s.t.}\ \mathrm{diag}(C_i)=0,$

wherein $L_{se}$ represents the loss function of the self-expression layer, $\mathrm{diag}(C_i)=0$ represents the mandatory constraint term, $\|\cdot\|_p$ represents an arbitrary regularization norm, $\|\cdot\|_F$ represents the F-norm, the first term $\|C_i\|_p$ in the formula is the regularization term, $C_i$ denotes the first matrix of the $i$-th first feature, $V$ represents the total number of first features, and the second term $\|Z_i - Z_iC_i\|_F^2$ in the formula represents the network loss generated when the first feature $Z_i$ passes through the self-expression layer.
3. A method of classifying data based on depth self-representation local block learning as claimed in claim 1, wherein the overall loss function is expressed as follows:

$L = \frac{1}{2}\left\| X - \hat{X} \right\|_F^2 + \lambda_1\sum_{i=1}^{V}\left\| C_i \right\|_p + \frac{\lambda_2}{2}\sum_{i=1}^{V}\left\| Z_i - Z_iC_i \right\|_F^2, \quad \mathrm{s.t.}\ \mathrm{diag}(C_i)=0,$

wherein $L$ represents the overall loss function, $\mathrm{diag}(C_i)=0$ represents the mandatory constraint term, $\|\cdot\|_p$ represents an arbitrary regularization norm, $\|\cdot\|_F$ represents the F-norm, $\lambda_1$ and $\lambda_2$ represent trade-off parameters for the degree of importance of the different terms, the first term in the formula represents the reconstruction loss of the image input data, the second and third terms in the formula represent the regularization loss and the self-expression layer loss respectively, $X$ represents the image input data, $\hat{X}$ represents the reconstructed image input data, $C_i$ denotes the first matrix of the $i$-th first feature, $V$ represents the total number of first features, and $Z_i$ represents a first feature.
4. A method of classifying data based on depth self-representation local block learning as claimed in claim 1, wherein the function of the third matrix is expressed as follows:

$A = \frac{1}{2}\left(|C| + |C|^{T}\right),$

wherein $A$ is the third matrix and $C$ is the final second matrix.
5. The method for classifying data based on depth self-expression local block learning according to claim 1, wherein S1, obtaining image input data, and sequentially passing the image input data through a plurality of encoders and a plurality of corresponding decoders, so as to obtain a plurality of first features of the image input data from shallow to deep in a potential space, wherein the number of encoders and the number of decoders are three.
6. A depth self-representative local block learning-based data classification system, comprising:
the feature extraction module is used for acquiring image input data, and sequentially passing the image input data through a plurality of encoders and a plurality of corresponding decoders to obtain a plurality of first features from shallow to deep in a potential space;
the self-expression module is used for inputting a plurality of first features into the self-expression layer to learn the similarity relation among the plurality of first features, so as to obtain a first matrix of each first feature;
the consensus representation learning module is used for stacking all the first matrixes and carrying out consensus representation learning on all the first matrixes by utilizing a convolution network to obtain second matrixes;
the self-expression block weighting module is used for dividing the second matrix into a plurality of block matrixes, generating the weight of the block matrixes according to the distribution condition of the similar samples in the block matrixes, restraining each block matrix by the corresponding weight, recombining the restrained block matrixes into a new second matrix, updating the second matrix, optimizing network parameters by taking the minimum total loss function as a target, and updating the second matrix again to obtain a final second matrix;
the data classification module is used for constructing a third matrix through the final second matrix, and performing a spectral clustering algorithm on the third matrix to obtain a data classification result;
wherein the self-representative block weighting module comprises:
the matrix segmentation module is used for cutting the second matrix in different sizes to obtain a plurality of block matrixes;
the weight calculation module is used for carrying out full-average pooling and normalization processing on each block matrix to obtain a weight corresponding to each block matrix;
the block matrix updating module is used for multiplying the obtained weight with the block matrix and updating each block matrix;
and the matrix updating module is used for recombining each updated block matrix to update the second matrix, and optimizing network parameters through the overall loss function to obtain a final second matrix.
7. The depth self-representative local block learning-based data classification system according to claim 6, wherein in the self-expression module, the plurality of first features are learned with the goal of minimizing the loss function of the self-expression layer, the similarity relations among the plurality of first features are learned, and the loss function of the self-expression layer is expressed as follows:

$L_{se} = \sum_{i=1}^{V}\left\| C_i \right\|_p + \frac{1}{2}\sum_{i=1}^{V}\left\| Z_i - Z_iC_i \right\|_F^2, \quad \mathrm{s.t.}\ \mathrm{diag}(C_i)=0,$

wherein $L_{se}$ represents the loss function of the self-expression layer, $\mathrm{diag}(C_i)=0$ represents the mandatory constraint term, $\|\cdot\|_p$ represents an arbitrary regularization norm, $\|\cdot\|_F$ represents the F-norm, the first term $\|C_i\|_p$ in the formula is the regularization term, $C_i$ denotes the first matrix of the $i$-th first feature, $V$ represents the total number of first features, and the second term $\|Z_i - Z_iC_i\|_F^2$ in the formula represents the network loss generated when the first feature $Z_i$ passes through the self-expression layer.
8. The depth self-representative local block learning based data classification system of claim 6, wherein the overall loss function in the self-expression block weighting module is expressed as follows:

$L = \frac{1}{2}\left\| X - \hat{X} \right\|_F^2 + \lambda_1\sum_{i=1}^{V}\left\| C_i \right\|_p + \frac{\lambda_2}{2}\sum_{i=1}^{V}\left\| Z_i - Z_iC_i \right\|_F^2, \quad \mathrm{s.t.}\ \mathrm{diag}(C_i)=0,$

wherein $L$ represents the overall loss function, $\mathrm{diag}(C_i)=0$ represents the mandatory constraint term, $\|\cdot\|_p$ represents an arbitrary regularization norm, $\|\cdot\|_F$ represents the F-norm, $\lambda_1$ and $\lambda_2$ represent trade-off parameters for the degree of importance of the different terms, the first term in the formula represents the reconstruction loss of the image input data, the second and third terms in the formula represent the regularization loss and the self-expression layer loss respectively, $X$ represents the image input data, $\hat{X}$ represents the reconstructed image input data, $C_i$ denotes the first matrix of the $i$-th first feature, $V$ represents the total number of first features, and $Z_i$ represents a first feature.
9. The depth self-representative local block learning based data classification system of claim 6, wherein the function of the third matrix in the data classification module is expressed as follows:

$A = \frac{1}{2}\left(|C| + |C|^{T}\right),$

wherein $A$ is the third matrix and $C$ is the final second matrix.
10. The depth self-representative local block learning based data classification system of claim 6 wherein image input data is acquired and sequentially passed through a plurality of encoders and a corresponding plurality of decoders to obtain a plurality of first features of the image input data from shallow to deep in potential space, wherein the number of encoders and decoders is three.
CN202410091515.0A 2024-01-23 2024-01-23 Data classification method and system based on depth self-expression local block learning Active CN117611931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410091515.0A CN117611931B (en) 2024-01-23 2024-01-23 Data classification method and system based on depth self-expression local block learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410091515.0A CN117611931B (en) 2024-01-23 2024-01-23 Data classification method and system based on depth self-expression local block learning

Publications (2)

Publication Number Publication Date
CN117611931A true CN117611931A (en) 2024-02-27
CN117611931B CN117611931B (en) 2024-04-05

Family

ID=89960232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410091515.0A Active CN117611931B (en) 2024-01-23 2024-01-23 Data classification method and system based on depth self-expression local block learning

Country Status (1)

Country Link
CN (1) CN117611931B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150063687A1 (en) * 2013-08-30 2015-03-05 Siemens Aktiengesellschaft Robust subspace recovery via dual sparsity pursuit
CN107316050A (en) * 2017-05-19 2017-11-03 中国科学院西安光学精密机械研究所 Subspace self-expression model clustering method based on Cauchy loss function
CN109063757A (en) * 2018-07-20 2018-12-21 西安电子科技大学 It is diagonally indicated based on block and the multifarious multiple view Subspace clustering method of view
CN110414560A (en) * 2019-06-26 2019-11-05 武汉大学 A kind of autonomous Subspace clustering method for high dimensional image
CN111144463A (en) * 2019-12-17 2020-05-12 中国地质大学(武汉) Hyperspectral image clustering method based on residual subspace clustering network
CN112164067A (en) * 2020-10-12 2021-01-01 西南科技大学 Medical image segmentation method and device based on multi-mode subspace clustering
CN112270345A (en) * 2020-10-19 2021-01-26 西安工程大学 Clustering algorithm based on self-supervision dictionary learning
CN113111976A (en) * 2021-05-13 2021-07-13 华南理工大学 Depth subspace clustering method and system based on low-rank tensor self-expression
CN114529745A (en) * 2022-01-11 2022-05-24 山东师范大学 Missing multi-view subspace clustering method and system based on graph structure learning
CN114612671A (en) * 2022-02-21 2022-06-10 哈尔滨工业大学(深圳) Multi-view subspace clustering method, device, equipment and storage medium
CN114821142A (en) * 2022-04-26 2022-07-29 安徽工业大学芜湖技术创新研究院 Image clustering method and system based on depth subspace fuzzy clustering
CN114897053A (en) * 2022-04-13 2022-08-12 哈尔滨工业大学(深圳) Subspace clustering method, subspace clustering device, subspace clustering equipment and storage medium
CN115761256A (en) * 2022-09-14 2023-03-07 大连海事大学 Hyperspectral image waveband selection method based on depth multi-level representation learning
CN115908880A (en) * 2021-09-29 2023-04-04 西南科技大学 Multi-view subspace clustering method based on dual tensor
CN116597186A (en) * 2023-03-20 2023-08-15 广西大学 Multi-view subspace clustering method, system, electronic equipment and storage medium

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150063687A1 (en) * 2013-08-30 2015-03-05 Siemens Aktiengesellschaft Robust subspace recovery via dual sparsity pursuit
CN107316050A (en) * 2017-05-19 2017-11-03 中国科学院西安光学精密机械研究所 Subspace self-expression model clustering method based on Cauchy loss function
CN109063757A (en) * 2018-07-20 2018-12-21 西安电子科技大学 It is diagonally indicated based on block and the multifarious multiple view Subspace clustering method of view
CN110414560A (en) * 2019-06-26 2019-11-05 武汉大学 A kind of autonomous Subspace clustering method for high dimensional image
CN111144463A (en) * 2019-12-17 2020-05-12 中国地质大学(武汉) Hyperspectral image clustering method based on residual subspace clustering network
CN112164067A (en) * 2020-10-12 2021-01-01 西南科技大学 Medical image segmentation method and device based on multi-mode subspace clustering
CN112270345A (en) * 2020-10-19 2021-01-26 西安工程大学 Clustering algorithm based on self-supervision dictionary learning
CN113111976A (en) * 2021-05-13 2021-07-13 华南理工大学 Depth subspace clustering method and system based on low-rank tensor self-expression
CN115908880A (en) * 2021-09-29 2023-04-04 西南科技大学 Multi-view subspace clustering method based on dual tensor
CN114529745A (en) * 2022-01-11 2022-05-24 山东师范大学 Missing multi-view subspace clustering method and system based on graph structure learning
CN114612671A (en) * 2022-02-21 2022-06-10 哈尔滨工业大学(深圳) Multi-view subspace clustering method, device, equipment and storage medium
CN114897053A (en) * 2022-04-13 2022-08-12 哈尔滨工业大学(深圳) Subspace clustering method, subspace clustering device, subspace clustering equipment and storage medium
CN114821142A (en) * 2022-04-26 2022-07-29 安徽工业大学芜湖技术创新研究院 Image clustering method and system based on depth subspace fuzzy clustering
CN115761256A (en) * 2022-09-14 2023-03-07 大连海事大学 Hyperspectral image waveband selection method based on depth multi-level representation learning
CN116597186A (en) * 2023-03-20 2023-08-15 广西大学 Multi-view subspace clustering method, system, electronic equipment and storage medium

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
MAOSHAN LIU 等: "Self-Supervised Convolutional Subspace Clustering Network with the Block Diagonal Rgularizer", 《NEURAL PROCESSING LETTERS》, vol. 53, no. 2021, 2 August 2021 (2021-08-02), pages 3849 - 3975 *
NAN ZHAO 等: "Robust multi-view subspace clustering based on consensus representation and orthogonal diversity", 《NEURAL NETWORKS》, vol. 150, no. 2022, 30 June 2022 (2022-06-30), pages 102 - 111 *
YOUDONG HE 等: "Multi-view deep subspace clustering with multilevel self-representation matrix fusion", 《PROCEEDINGS OF THE 2023 2ND INTERNATIONAL CONFERENCE ON ALGORITHMS, DATA MINING, AND INFORMATION TECHNOLOGY》, 30 September 2023 (2023-09-30), pages 120 - 125 *
张红 等: "结构化稀疏低秩子空间聚类", 《计算机工程与应用》, vol. 53, no. 24, 15 December 2017 (2017-12-15), pages 23 - 29 *
李理 等: "基于张量学习的潜在多视图子空间聚类", 《西南科技大学学报》, vol. 37, no. 03, 30 September 2022 (2022-09-30), pages 52 - 59 *
杨冰: "基于图正则化的子空间聚类算法研究", 《中国优秀硕士学位论文全文数据库信息科技辑》, no. 2022, 15 January 2022 (2022-01-15), pages 138 - 747 *
程佳丰: "基于深度学习的多视觉聚类研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》, no. 2022, 15 April 2022 (2022-04-15), pages 138 - 484 *

Also Published As

Publication number Publication date
CN117611931B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
Drenkow et al. A systematic review of robustness in deep learning for computer vision: Mind the gap?
CN109376804B (en) Hyperspectral remote sensing image classification method based on attention mechanism and convolutional neural network
Song et al. Multi-layer discriminative dictionary learning with locality constraint for image classification
CN109858575B (en) Data classification method based on convolutional neural network
CN112765352A (en) Graph convolution neural network text classification method based on self-attention mechanism
CN110866439B (en) Hyperspectral image joint classification method based on multi-feature learning and super-pixel kernel sparse representation
CN113222998B (en) Semi-supervised image semantic segmentation method and device based on self-supervised low-rank network
CN102682306B (en) Wavelet pyramid polarization texture primitive feature extracting method for synthetic aperture radar (SAR) images
CN113487629A (en) Image attribute editing method based on structured scene and text description
CN109190511A (en) Hyperspectral classification method based on part Yu structural constraint low-rank representation
CN110837808A (en) Hyperspectral image classification method based on improved capsule network model
CN113850182B (en) DAMR _ DNet-based action recognition method
CN114898167A (en) Multi-view subspace clustering method and system based on inter-view difference detection
CN117611931B (en) Data classification method and system based on depth self-expression local block learning
CN110378356A (en) Fine granularity image-recognizing method based on multiple target Lagrange canonical
CN108052981A (en) Image classification method based on non-downsampling Contourlet conversion and convolutional neural networks
Fu et al. Structure-preserved and weakly redundant band selection for hyperspectral imagery
Chen et al. Spectral embedding fusion for incomplete multiview clustering
CN114780720A (en) Text entity relation classification method based on small sample learning
CN112417234B (en) Data clustering method and device and computer readable storage medium
CN109359694B (en) Image classification method and device based on mixed collaborative representation classifier
CN113011163A (en) Compound text multi-classification method and system based on deep learning model
CN110866560A (en) Symmetric low-rank representation subspace clustering method based on structural constraint
CN112069978A (en) Face recognition method based on mutual information and dictionary learning
CN114120397B (en) Face image reconstruction method, face image reconstruction system and data dimension reduction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant