CN112966735A - Supervised multi-set correlation feature fusion method based on spectral reconstruction - Google Patents

Supervised multi-set correlation feature fusion method based on spectral reconstruction

Info

Publication number
CN112966735A
Authority
CN
China
Prior art keywords
matrix
class
fractional order
auto-correlation
Prior art date
Legal status
Granted
Application number
CN202110235178.4A
Other languages
Chinese (zh)
Other versions
CN112966735B (en)
Inventor
袁运浩
朱莉
李云
强继朋
朱毅
李斌
Current Assignee
Yangzhou University
Original Assignee
Yangzhou University
Priority date
Filing date
Publication date
Application filed by Yangzhou University
Publication of CN112966735A
Application granted
Publication of CN112966735B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/18 Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a supervised multi-set correlation feature fusion method based on spectral reconstruction, comprising the following steps: 1) define the projection directions of the training sample sets; 2) compute the inter-group intra-class correlation matrices and auto-covariance matrices of the training samples; 3) apply singular value decomposition to the intra-class correlation matrices and eigenvalue decomposition to the auto-covariance matrices; 4) reconstruct the fractional-order intra-class correlation matrices and fractional-order auto-covariance matrices; 5) construct the FDMCCA optimization model; 6) solve for the eigenvector matrices that form the projection matrices; 7) fuse the dimension-reduced features; 8) select different numbers of images for the training and test sets and compute the recognition rate. The invention can effectively handle the information fusion of multiple views of data; at the same time, the introduction of fractional-order parameters weakens the influence of noise interference and limited training samples and improves recognition accuracy.

Description

Supervised multi-set correlation feature fusion method based on spectral reconstruction
Technical Field
The invention relates to the field of pattern recognition, and in particular to a supervised multi-set correlation feature fusion method based on spectral reconstruction.
Background
Canonical correlation analysis (CCA) studies the linear correlation between two sets of data. CCA linearly projects two sets of random variables into a low-dimensional subspace where their correlation is greatest. Researchers use CCA to simultaneously reduce the dimensions of two sets of feature vectors (i.e., two views) to obtain two low-dimensional feature representations, which are then fused into discriminative features, improving pattern classification accuracy. Because it is simple and effective, CCA is widely applied in blind source separation, computer vision, speech recognition, and other fields.
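As an illustration of the two-view CCA described above, the following is a minimal NumPy sketch (function and variable names are our own, not from the patent): both views are whitened and the canonical projections are read off an SVD of the whitened cross-covariance.

```python
import numpy as np

def cca(X, Y, d=1, eps=1e-8):
    """Minimal linear CCA. X (m1 x n) and Y (m2 x n) hold samples as columns.

    Returns projection matrices Wx (m1 x d), Wy (m2 x d) that maximize the
    correlation between Wx.T @ X and Wy.T @ Y.
    """
    X = X - X.mean(axis=1, keepdims=True)  # center each view
    Y = Y - Y.mean(axis=1, keepdims=True)
    n = X.shape[1]
    Cxx = X @ X.T / n + eps * np.eye(X.shape[0])  # small ridge for stability
    Cyy = Y @ Y.T / n + eps * np.eye(Y.shape[0])
    Cxy = X @ Y.T / n

    def inv_sqrt(C):
        # symmetric inverse square root via eigendecomposition
        w, Q = np.linalg.eigh(C)
        return Q @ np.diag(w ** -0.5) @ Q.T

    # SVD of the whitened cross-covariance gives the canonical directions.
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)
    Wx = inv_sqrt(Cxx) @ U[:, :d]
    Wy = inv_sqrt(Cyy) @ Vt.T[:, :d]
    return Wx, Wy

# Two noisy views of the same latent signal: the top pair of canonical
# variates should be highly correlated.
rng = np.random.default_rng(0)
z = rng.standard_normal(500)
X = np.vstack([z + 0.1 * rng.standard_normal(500) for _ in range(5)])
Y = np.vstack([z + 0.1 * rng.standard_normal(500) for _ in range(4)])
Wx, Wy = cca(X, Y)
r = np.corrcoef(Wx.T @ X, Wy.T @ Y)[0, 1]
```

On this synthetic data the learned projections recover the shared latent signal, so `r` is close to 1.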
Canonical correlation analysis is an unsupervised linear learning method. In practice, however, the dependency between two views often cannot be represented linearly. If the relationship between the two views is nonlinear, CCA is no longer appropriate. Kernel canonical correlation analysis (KCCA) was proposed to address this: KCCA is a nonlinear extension of CCA and works well on simple nonlinear problems. For more complex nonlinear problems, deep canonical correlation analysis (Deep CCA), which combines a deep neural network with CCA, can learn complex nonlinear relationships between the two views. From another angle on nonlinear extension, the idea of locality can be incorporated into CCA, giving rise to locality preserving canonical correlation analysis (LPCCA), which finds the local manifold structure of each view and can be used to visualize the data.
Although CCA performs well on some pattern recognition problems, it is an unsupervised learning method and does not use class label information, which both wastes an available resource and lowers recognition performance. To address this, researchers proposed discriminant canonical correlation analysis (DCCA), which takes the inter-class and intra-class structure of the samples into account. DCCA maximizes the correlation between sample features of the same class and minimizes the correlation between features of different classes, which improves classification accuracy.
The above methods are all designed for analyzing the relationship between two views and are of limited use when there are three or more views. Multiple-set canonical correlation analysis (MCCA) is the multi-view extension of CCA: it retains CCA's property of maximal correlation between views while removing the two-view restriction, improving recognition performance. Researchers have combined MCCA with DCCA to obtain discriminant multiple-set canonical correlation analysis (DMCCA), and experiments show that it performs well in face recognition, handwritten digit recognition, emotion recognition, and other tasks.
When there is noise interference or few training samples, the auto-covariance and cross-covariance matrices in CCA deviate from their true values, degrading the final recognition. To solve this, researchers combined the fractional-order idea with CCA: by introducing fractional-order parameters, the auto-covariance and cross-covariance matrices are reconstructed, yielding fractional-order embedding canonical correlation analysis, which weakens the influence of the deviation and improves recognition performance.
In summary, traditional canonical correlation analysis mainly studies the correlation between two views, is unsupervised, ignores class label information, and cannot directly handle high-dimensional data from more than two views.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a supervised multi-set correlation feature fusion method based on spectral reconstruction (FDMCCA) that can effectively handle multi-view feature fusion; at the same time, the introduction of fractional-order parameters weakens the influence of noise interference and limited training samples and improves recognition accuracy.
The purpose of the invention is realized as follows: a supervised multi-set correlation feature fusion method based on spectral reconstruction, comprising the following steps:
Step 1) Assume there are P groups of training samples, each group with mean 0 and number of classes c:

$X_i = [x_1^{i1}, \ldots, x_{n_1}^{i1}, \ldots, x_1^{ic}, \ldots, x_{n_c}^{ic}] \in \mathbb{R}^{m_i \times n}, \quad i = 1, 2, \ldots, P,$

where $x_k^{ij}$ denotes the k-th sample of the j-th class in the i-th group, $m_i$ is the feature dimension of the i-th data set, $n_j$ is the number of samples in the j-th class, and $n = \sum_{j=1}^{c} n_j$. Define the projection directions of the training sample sets as $\omega_i \in \mathbb{R}^{m_i}$, $i = 1, 2, \ldots, P$;
Step 2) Compute the inter-group intra-class correlation matrices of the training samples

$\tilde{C}_{ij} = X_i A X_j^T \quad (i \neq j)$

and the auto-covariance matrices

$C_{ii} = X_i X_i^T,$

where $A = \operatorname{diag}(\mathbf{1}_{n_1 \times n_1}, \ldots, \mathbf{1}_{n_c \times n_c})$ is block-diagonal and $\mathbf{1}_{n_j \times n_j}$ denotes the $n_j \times n_j$ matrix whose every element is 1;
Step 3) Apply singular value decomposition to the inter-group intra-class correlation matrices $\tilde{C}_{ij}$ obtained in step 2) to obtain the left and right singular vector matrices and the singular value matrices, and apply eigenvalue decomposition to the auto-covariance matrices $C_{ii}$ to obtain the eigenvector matrices and eigenvalue matrices;
Step 4) Select suitable fractional-order parameters α and β, reassign the singular value matrices and eigenvalue matrices obtained in step 3), and construct the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$;
Step 5) constructing an optimized model of FDMCCA as
Figure BDA00029596847600000310
Wherein
Figure BDA00029596847600000311
Introducing a Lagrange multiplier method to obtain a generalized characteristic value problem E omega which is mu F omega, calculating a projection direction omega, wherein mu is a characteristic value,
Figure BDA0002959684760000041
Step 6) Considering that the auto-covariance matrices may be singular, introduce a regularization parameter η on the basis of step 5) and establish the regularized optimization model

$\max_{\omega_1, \ldots, \omega_P} \sum_{i=1}^{P} \sum_{j=1, j \neq i}^{P} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^T (C_{ii}^{\beta} + \eta I_i) \omega_i = 1.$

Introducing the Lagrange multiplier method yields the generalized eigenvalue problem

$E\omega = \mu (F + \eta I) \omega,$

where $I = \operatorname{diag}(I_1, \ldots, I_P)$ and each $I_i$ is the identity matrix of size $m_i \times m_i$, $i = 1, 2, \ldots, P$;
Step 7) Solve the generalized eigenvalue problem of step 6) for the eigenvectors corresponding to the d largest eigenvalues, forming the projection matrix of each group of data $W_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{id}]$, $i = 1, 2, \ldots, P$, $d \leq \min\{m_1, \ldots, m_P\}$;
Step 8) Use the projection matrix $W_i$ of each group of data to compute the low-dimensional projections of each group's training and test samples, form the fused features finally used for classification with a serial feature fusion strategy, and compute the recognition rate.
Further, applying singular value decomposition to the inter-group intra-class correlation matrices $\tilde{C}_{ij}$ and eigenvalue decomposition to the auto-covariance matrices $C_{ii}$ in step 3) comprises the following steps:
Step 3-1) Apply singular value decomposition to the inter-group intra-class correlation matrix:

$\tilde{C}_{ij} = U_{ij} \Sigma_{ij} V_{ij}^T,$

where $U_{ij}$ and $V_{ij}$ are the left and right singular vector matrices of $\tilde{C}_{ij}$, $\Sigma_{ij} = \operatorname{diag}(\sigma_1, \ldots, \sigma_{r_{ij}})$ is the diagonal matrix of singular values of $\tilde{C}_{ij}$, and $r_{ij} = \operatorname{rank}(\tilde{C}_{ij})$;
Step 3-2) Apply eigenvalue decomposition to the auto-covariance matrix:

$C_{ii} = Q_i \Lambda_i Q_i^T,$

where $Q_i$ is the eigenvector matrix of $C_{ii}$, $\Lambda_i = \operatorname{diag}(\lambda_1, \ldots, \lambda_{r_i})$ is the eigenvalue matrix of $C_{ii}$, and $r_i = \operatorname{rank}(C_{ii})$.
Further, constructing the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$ in step 4) comprises the following steps:
Step 4-1) Assume α is a fraction satisfying 0 ≤ α ≤ 1, and define the fractional-order inter-group intra-class correlation matrix $\tilde{C}_{ij}^{\alpha}$ as:

$\tilde{C}_{ij}^{\alpha} = U_{ij} \Sigma_{ij}^{\alpha} V_{ij}^T,$

where $\Sigma_{ij}^{\alpha} = \operatorname{diag}(\sigma_1^{\alpha}, \ldots, \sigma_{r_{ij}}^{\alpha})$, and $U_{ij}$, $V_{ij}$, and $r_{ij}$ are as defined in step 3-1).
Step 4-2) Assume β is a fraction satisfying 0 ≤ β ≤ 1, and define the fractional-order auto-covariance matrix $C_{ii}^{\beta}$ as:

$C_{ii}^{\beta} = Q_i \Lambda_i^{\beta} Q_i^T,$

where $\Lambda_i^{\beta} = \operatorname{diag}(\lambda_1^{\beta}, \ldots, \lambda_{r_i}^{\beta})$, and $Q_i$ and $r_i$ are as defined in step 3-2).
Compared with the prior art, the invention has the following beneficial effects: on the basis of canonical correlation analysis, fractional-order embedding canonical correlation analysis (FECCA) is combined with discriminant multiple-set canonical correlation analysis (DMCCA); class label information is fully used; the information fusion of more than two views can be handled, so the method is applicable to multi-view feature fusion; the introduction of fractional-order parameters reduces the influence of noise interference and limited training samples and improves face recognition accuracy; the method has a good recognition effect even when training samples are few; it serves both dimensionality reduction and multi-view feature fusion; and because class label information is exploited, its recognition performance is superior to that of other methods of the same class.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a line graph of the recognition rate of the invention and other methods as a function of dimension.
FIG. 3 is a graph of the recognition rate of the present invention at different numbers of training samples.
Detailed Description
As shown in FIG. 1, a supervised multi-set correlation feature fusion method based on spectral reconstruction comprises the following steps:
Step 1) Assume there are P groups of training samples, each group with mean 0 and number of classes c:

$X_i = [x_1^{i1}, \ldots, x_{n_1}^{i1}, \ldots, x_1^{ic}, \ldots, x_{n_c}^{ic}] \in \mathbb{R}^{m_i \times n}, \quad i = 1, 2, \ldots, P,$

where $x_k^{ij}$ denotes the k-th sample of the j-th class in the i-th group, $m_i$ is the feature dimension of the i-th data set, $n_j$ is the number of samples in the j-th class, and $n = \sum_{j=1}^{c} n_j$. Define the projection directions of the training sample sets as $\omega_i \in \mathbb{R}^{m_i}$, $i = 1, 2, \ldots, P$;
Step 2) Compute the inter-group intra-class correlation matrices of the training samples

$\tilde{C}_{ij} = X_i A X_j^T \quad (i \neq j)$

and the auto-covariance matrices

$C_{ii} = X_i X_i^T,$

where $A = \operatorname{diag}(\mathbf{1}_{n_1 \times n_1}, \ldots, \mathbf{1}_{n_c \times n_c})$ is block-diagonal and $\mathbf{1}_{n_j \times n_j}$ denotes the $n_j \times n_j$ matrix whose every element is 1;
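Step 2 can be sketched as follows. This is a hypothetical helper (names and data layout are our own, not from the patent), assuming each view's columns are centered samples ordered class by class:

```python
import numpy as np

def within_class_matrices(X_list, class_sizes):
    """X_list[i] is an m_i x n matrix of centered samples (columns), ordered
    class by class; class_sizes[j] is n_j, the size of class j.

    Returns C[i][j]: for i != j the inter-group intra-class correlation
    matrix X_i A X_j^T, where A is block-diagonal with an all-ones block per
    class; for i == j the auto-covariance matrix X_i X_i^T.
    """
    n = sum(class_sizes)
    # A encodes "same class" sample pairs: one block of ones per class.
    A = np.zeros((n, n))
    ofs = 0
    for nj in class_sizes:
        A[ofs:ofs + nj, ofs:ofs + nj] = 1.0
        ofs += nj
    P = len(X_list)
    C = [[X_list[i] @ (A if i != j else np.eye(n)) @ X_list[j].T
          for j in range(P)] for i in range(P)]
    return C
```

Note that `C[j][i]` is the transpose of `C[i][j]`, which is what makes the block matrix E in step 5 symmetric.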
Step 3) Apply singular value decomposition to the inter-group intra-class correlation matrices $\tilde{C}_{ij}$ obtained in step 2) to obtain the left and right singular vector matrices and the singular value matrices, and apply eigenvalue decomposition to the auto-covariance matrices $C_{ii}$ to obtain the eigenvector matrices and eigenvalue matrices;
Step 3-1) Apply singular value decomposition to the inter-group intra-class correlation matrix:

$\tilde{C}_{ij} = U_{ij} \Sigma_{ij} V_{ij}^T,$

where $U_{ij}$ and $V_{ij}$ are the left and right singular vector matrices of $\tilde{C}_{ij}$, $\Sigma_{ij} = \operatorname{diag}(\sigma_1, \ldots, \sigma_{r_{ij}})$ is the diagonal matrix of singular values of $\tilde{C}_{ij}$, and $r_{ij} = \operatorname{rank}(\tilde{C}_{ij})$;
Step 3-2) Apply eigenvalue decomposition to the auto-covariance matrix:

$C_{ii} = Q_i \Lambda_i Q_i^T,$

where $Q_i$ is the eigenvector matrix of $C_{ii}$, $\Lambda_i = \operatorname{diag}(\lambda_1, \ldots, \lambda_{r_i})$ is the eigenvalue matrix of $C_{ii}$, and $r_i = \operatorname{rank}(C_{ii})$.
Step 4) Select suitable fractional-order parameters α and β, reassign the singular value matrices and eigenvalue matrices obtained in step 3), and construct the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$;
Step 4-1) Assume α is a fraction satisfying 0 ≤ α ≤ 1, and define the fractional-order inter-group intra-class correlation matrix $\tilde{C}_{ij}^{\alpha}$ as:

$\tilde{C}_{ij}^{\alpha} = U_{ij} \Sigma_{ij}^{\alpha} V_{ij}^T,$

where $\Sigma_{ij}^{\alpha} = \operatorname{diag}(\sigma_1^{\alpha}, \ldots, \sigma_{r_{ij}}^{\alpha})$, and $U_{ij}$, $V_{ij}$, and $r_{ij}$ are as defined in step 3-1);
Step 4-2) Assume β is a fraction satisfying 0 ≤ β ≤ 1, and define the fractional-order auto-covariance matrix $C_{ii}^{\beta}$ as:

$C_{ii}^{\beta} = Q_i \Lambda_i^{\beta} Q_i^T,$

where $\Lambda_i^{\beta} = \operatorname{diag}(\lambda_1^{\beta}, \ldots, \lambda_{r_i}^{\beta})$, and $Q_i$ and $r_i$ are as defined in step 3-2).
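The spectral reconstruction of step 4 only reassigns the spectrum: singular values are raised to the power α and eigenvalues to the power β before the factors are multiplied back. A sketch (our own helper, not the patent's code; the clipping of tiny negative eigenvalues is an assumption for numerical round-off):

```python
import numpy as np

def fractional_matrices(C_ij, C_ii, alpha, beta):
    """Rebuild C_ij with singular values raised to alpha and C_ii with
    eigenvalues raised to beta, alpha and beta in [0, 1]."""
    U, s, Vt = np.linalg.svd(C_ij, full_matrices=False)
    C_ij_alpha = U @ np.diag(s ** alpha) @ Vt

    lam, Q = np.linalg.eigh(C_ii)
    lam = np.clip(lam, 0.0, None)    # guard tiny negatives from round-off
    C_ii_beta = Q @ np.diag(lam ** beta) @ Q.T
    return C_ij_alpha, C_ii_beta
```

With alpha = beta = 1 the original matrices are recovered; smaller exponents compress the spread of the spectrum, which is what weakens the deviation caused by noise and small samples.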
Step 5) Construct the FDMCCA optimization model

$\max_{\omega_1, \ldots, \omega_P} \sum_{i=1}^{P} \sum_{j=1, j \neq i}^{P} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^T C_{ii}^{\beta} \omega_i = 1.$

Introducing the Lagrange multiplier method yields the generalized eigenvalue problem $E\omega = \mu F\omega$, from which the projection direction $\omega = [\omega_1^T, \ldots, \omega_P^T]^T$ is computed, where μ is the eigenvalue, E is the block matrix whose (i, j)-th block is $\tilde{C}_{ij}^{\alpha}$ for $i \neq j$ and zero on the diagonal, and $F = \operatorname{diag}(C_{11}^{\beta}, \ldots, C_{PP}^{\beta})$;
Step 6) Considering that the auto-covariance matrices may be singular, introduce a regularization parameter η on the basis of step 5) and establish the regularized optimization model

$\max_{\omega_1, \ldots, \omega_P} \sum_{i=1}^{P} \sum_{j=1, j \neq i}^{P} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^T (C_{ii}^{\beta} + \eta I_i) \omega_i = 1.$

Introducing the Lagrange multiplier method yields the generalized eigenvalue problem

$E\omega = \mu (F + \eta I) \omega,$

where $I = \operatorname{diag}(I_1, \ldots, I_P)$ and each $I_i$ is the identity matrix of size $m_i \times m_i$, $i = 1, 2, \ldots, P$;
Step 7) Solve the generalized eigenvalue problem of step 6) for the eigenvectors corresponding to the d largest eigenvalues, forming the projection matrix of each group of data $W_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{id}]$, $i = 1, 2, \ldots, P$, $d \leq \min\{m_1, \ldots, m_P\}$;
Step 8) Use the projection matrix $W_i$ of each group of data to compute the low-dimensional projections of each group's training and test samples, form the fused features finally used for classification with a serial feature fusion strategy, and compute the recognition rate.
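Steps 5 through 7 amount to assembling the block matrices E and F and solving a generalized symmetric eigenproblem. The sketch below is a hypothetical helper (names and structure are our own) under the assumption that `C_alpha[j][i]` equals the transpose of `C_alpha[i][j]`, so that E is symmetric, and that the `C_beta[i]` are positive semidefinite, so that F plus the η ridge is positive definite:

```python
import numpy as np
from scipy.linalg import eigh

def fdmcca(C_alpha, C_beta, dims, d, eta=1e-4):
    """C_alpha[i][j] (i != j): fractional inter-group intra-class matrices;
    C_beta[i]: fractional auto-covariance matrices; dims[i] = m_i.

    Solves E w = mu (F + eta I) w and returns the per-group projection
    matrices W_i (m_i x d) sliced from the top-d eigenvectors.
    """
    P, m = len(dims), sum(dims)
    ofs = np.cumsum([0] + list(dims))
    E, F = np.zeros((m, m)), np.zeros((m, m))
    for i in range(P):
        F[ofs[i]:ofs[i + 1], ofs[i]:ofs[i + 1]] = C_beta[i]
        for j in range(P):
            if i != j:
                E[ofs[i]:ofs[i + 1], ofs[j]:ofs[j + 1]] = C_alpha[i][j]
    mu, W = eigh(E, F + eta * np.eye(m))   # generalized eigenproblem
    order = np.argsort(mu)[::-1][:d]       # d largest eigenvalues
    return [W[ofs[i]:ofs[i + 1], order] for i in range(P)]
```

Serial feature fusion (step 8) then stacks the projections, e.g. `Z = np.vstack([W[i].T @ X[i] for i in range(P)])`, and the columns of Z are the fused features handed to the classifier.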
The invention can be further illustrated by the following example. Take the CMU-PIE face database, which contains face images of 68 people, each image of size 64 × 64. In this experiment, the first 10 images of each person are used as the training set and the following 14 images as the test set. The input face image data is read to form three different features: feature 1 is the original image data, feature 2 is the median-filtered image data, and feature 3 is the mean-filtered image data. Principal component analysis reduces the dimensionality of each feature to form the final three sets of feature data.
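The three-view construction described above can be sketched on synthetic images (the CMU-PIE data itself is not bundled here; the filter sizes and PCA dimension below are illustrative assumptions, not values from the patent):

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

# Synthetic stand-ins for 64x64 face images.
rng = np.random.default_rng(2)
imgs = rng.random((20, 64, 64))

f1 = imgs.reshape(20, -1)                                        # view 1: raw pixels
f2 = np.stack([median_filter(im, size=3) for im in imgs]).reshape(20, -1)  # view 2
f3 = np.stack([uniform_filter(im, size=3) for im in imgs]).reshape(20, -1) # view 3: mean filter

def pca_reduce(F, d):
    """Project the rows of F onto the top-d principal components."""
    Fc = F - F.mean(axis=0)
    _, _, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:d].T

# Each view becomes an m_i x n matrix (columns are samples), as in step 1.
views = [pca_reduce(f, 15).T for f in (f1, f2, f3)]
```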
Step 1) Construct three groups of data $X_i$, $i = 1, 2, 3$, each with mean 0, and define the projection directions of the training sample set as $\omega_i \in \mathbb{R}^{m_i}$, $i = 1, 2, 3$.
Step 2) FDMCCA aims to maximize the correlation of samples within a class and minimize the correlation of samples between classes. Compute the inter-group intra-class correlation matrices of the training samples

$\tilde{C}_{ij} = X_i A X_j^T \quad (i \neq j)$

and the auto-covariance matrices

$C_{ii} = X_i X_i^T,$

where $A = \operatorname{diag}(\mathbf{1}_{n_1 \times n_1}, \ldots, \mathbf{1}_{n_c \times n_c})$ is block-diagonal and $\mathbf{1}_{n_j \times n_j}$ denotes the $n_j \times n_j$ matrix whose every element is 1;
Step 3) Apply singular value decomposition to the inter-group intra-class correlation matrices $\tilde{C}_{ij}$ obtained in step 2) to obtain the left and right singular vector matrices and the singular value matrices, and apply eigenvalue decomposition to the auto-covariance matrices $C_{ii}$ to obtain the eigenvector matrices and eigenvalue matrices;
Step 3-1) Apply singular value decomposition to the inter-group intra-class correlation matrix:

$\tilde{C}_{ij} = U_{ij} \Sigma_{ij} V_{ij}^T,$

where $U_{ij}$ and $V_{ij}$ are the left and right singular vector matrices of $\tilde{C}_{ij}$, $\Sigma_{ij} = \operatorname{diag}(\sigma_1, \ldots, \sigma_{r_{ij}})$ is the diagonal matrix of singular values of $\tilde{C}_{ij}$, and $r_{ij} = \operatorname{rank}(\tilde{C}_{ij})$;
Step 3-2) Apply eigenvalue decomposition to the auto-covariance matrix:

$C_{ii} = Q_i \Lambda_i Q_i^T,$

where $Q_i$ is the eigenvector matrix of $C_{ii}$, $\Lambda_i = \operatorname{diag}(\lambda_1, \ldots, \lambda_{r_i})$ is the eigenvalue matrix of $C_{ii}$, and $r_i = \operatorname{rank}(C_{ii})$.
Step 4) Define the value range of the fractional-order parameters α and β as {0.1, 0.2, …, 1}, select suitable values, reassign the singular value matrices and eigenvalue matrices obtained in step 3), and construct the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$.
Step 4-1) Assume α is a fraction satisfying 0 ≤ α ≤ 1, and define the fractional-order inter-group intra-class correlation matrix $\tilde{C}_{ij}^{\alpha}$ as:

$\tilde{C}_{ij}^{\alpha} = U_{ij} \Sigma_{ij}^{\alpha} V_{ij}^T,$

where $\Sigma_{ij}^{\alpha} = \operatorname{diag}(\sigma_1^{\alpha}, \ldots, \sigma_{r_{ij}}^{\alpha})$, and $U_{ij}$, $V_{ij}$, and $r_{ij}$ are as defined in step 3-1).
Step 4-2) Assume β is a fraction satisfying 0 ≤ β ≤ 1, and define the fractional-order auto-covariance matrix $C_{ii}^{\beta}$ as:

$C_{ii}^{\beta} = Q_i \Lambda_i^{\beta} Q_i^T,$

where $\Lambda_i^{\beta} = \operatorname{diag}(\lambda_1^{\beta}, \ldots, \lambda_{r_i}^{\beta})$, and $Q_i$ and $r_i$ are as defined in step 3-2).
Step 5) Construct the FDMCCA optimization model

$\max_{\omega_1, \ldots, \omega_P} \sum_{i=1}^{P} \sum_{j=1, j \neq i}^{P} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^T C_{ii}^{\beta} \omega_i = 1.$

Introducing the Lagrange multiplier method yields the generalized eigenvalue problem $E\omega = \mu F\omega$, from which the projection direction $\omega = [\omega_1^T, \ldots, \omega_P^T]^T$ is determined, where μ is the eigenvalue, E is the block matrix whose (i, j)-th block is $\tilde{C}_{ij}^{\alpha}$ for $i \neq j$ and zero on the diagonal, and $F = \operatorname{diag}(C_{11}^{\beta}, \ldots, C_{PP}^{\beta})$.
Step 6) Considering that the auto-covariance matrices may be singular, introduce a regularization parameter η on the basis of step 5), with η ranging over $\{10^{-5}, 10^{-4}, \ldots, 10\}$, and establish the regularized optimization model

$\max_{\omega_1, \ldots, \omega_P} \sum_{i=1}^{P} \sum_{j=1, j \neq i}^{P} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^T (C_{ii}^{\beta} + \eta I_i) \omega_i = 1.$

Introducing the Lagrange multiplier method yields the generalized eigenvalue problem

$E\omega = \mu (F + \eta I) \omega.$
Step 7) Solve the generalized eigenvalue problem of step 6) for the eigenvectors corresponding to the d largest eigenvalues, forming the projection matrix of each group of data $W_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{id}]$, $i = 1, 2, 3$, $d \leq \min\{m_1, m_2, m_3\}$.
Step 8) Use the projection matrix $W_i$ of each group of data to compute the low-dimensional projections of each group's training and test samples, form the fused features with a serial feature fusion strategy, classify with a nearest neighbor classifier, and compute the recognition rate. The recognition results are shown in Table 1 and FIG. 2 (BASELINE refers to the classification results after the three features are concatenated). As can be seen from Table 1 and FIG. 2, the proposed FDMCCA method outperforms the other methods. Compared with MCCA, CCA, and BASELINE, FDMCCA is a supervised learning method with prior information and obtains a better recognition effect. Compared with DMCCA, FDMCCA introduces the fractional-order idea to correct the covariance deviation caused by noise interference and other factors, improving recognition accuracy.
TABLE 1 Recognition rates on the CMU-PIE dataset

Method                         Recognition rate (%)
MCCA                           84.09
CCA (feature 1 + feature 2)    71.43
CCA (feature 1 + feature 3)    74.03
CCA (feature 2 + feature 3)    76.30
BASELINE                       48.05
DMCCA                          79.22
FDMCCA                         86.04
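The nearest neighbor classification used to produce recognition rates like those above can be sketched as follows (an illustrative helper, not the patent's code; columns of the Z matrices are fused feature vectors):

```python
import numpy as np

def recognition_rate(train_Z, train_y, test_Z, test_y):
    """1-NN classification: each test column is assigned the label of the
    nearest training column (Euclidean distance); returns the fraction of
    correctly classified test samples."""
    correct = 0
    for k in range(test_Z.shape[1]):
        dists = np.linalg.norm(train_Z - test_Z[:, [k]], axis=0)
        correct += train_y[np.argmin(dists)] == test_y[k]
    return correct / test_Z.shape[1]
```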
To examine the influence of the number of training samples on the recognition rate, the fractional-order parameters α and β and the regularization parameter η are fixed, and different numbers of images are selected for the training and test sets; the recognition rates are shown in FIG. 3. As can be seen from FIG. 3, FDMCCA performs better when training samples are few.
In summary, the invention provides a supervised multi-set correlation feature fusion method based on spectral reconstruction (FDMCCA) by introducing the fractional-order embedding idea into the CCA framework. By introducing fractional-order parameters, the method corrects the deviation of the intra-class correlation and auto-covariance matrices caused by noise interference and limited training samples. At the same time, it makes full use of class label information, can solve the information fusion problem for more than two views, and has a wider application range and better recognition performance.
The present invention is not limited to the above embodiment. Based on the technical solutions disclosed herein, those skilled in the art can substitute or modify some technical features without creative effort, and such substitutions and modifications all fall within the protection scope of the invention.

Claims (3)

1. A supervised multi-set correlation feature fusion method based on spectral reconstruction, characterized by comprising the following steps:
Step 1) Assume there are P groups of training samples, each group with mean 0 and number of classes c:

$X_i = [x_1^{i1}, \ldots, x_{n_1}^{i1}, \ldots, x_1^{ic}, \ldots, x_{n_c}^{ic}] \in \mathbb{R}^{m_i \times n}, \quad i = 1, 2, \ldots, P,$

where $x_k^{ij}$ denotes the k-th sample of the j-th class in the i-th group, $m_i$ is the feature dimension of the i-th data set, $n_j$ is the number of samples in the j-th class, and $n = \sum_{j=1}^{c} n_j$; define the projection directions of the training sample sets as $\omega_i \in \mathbb{R}^{m_i}$, $i = 1, 2, \ldots, P$;
Step 2) Compute the inter-group intra-class correlation matrices of the training samples

$\tilde{C}_{ij} = X_i A X_j^T \quad (i \neq j)$

and the auto-covariance matrices

$C_{ii} = X_i X_i^T,$

where $A = \operatorname{diag}(\mathbf{1}_{n_1 \times n_1}, \ldots, \mathbf{1}_{n_c \times n_c})$ is block-diagonal and $\mathbf{1}_{n_j \times n_j}$ denotes the $n_j \times n_j$ matrix whose every element is 1;
Step 3) Apply singular value decomposition to the inter-group intra-class correlation matrices $\tilde{C}_{ij}$ obtained in step 2) to obtain the left and right singular vector matrices and the singular value matrices, and apply eigenvalue decomposition to the auto-covariance matrices $C_{ii}$ to obtain the eigenvector matrices and eigenvalue matrices;
Step 4) Select suitable fractional-order parameters α and β, reassign the singular value matrices and eigenvalue matrices obtained in step 3), and construct the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$;
Step 5) Construct the FDMCCA optimization model

$\max_{\omega_1, \ldots, \omega_P} \sum_{i=1}^{P} \sum_{j=1, j \neq i}^{P} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^T C_{ii}^{\beta} \omega_i = 1.$

Introducing the Lagrange multiplier method yields the generalized eigenvalue problem $E\omega = \mu F\omega$, from which the projection direction $\omega = [\omega_1^T, \ldots, \omega_P^T]^T$ is computed, where μ is the eigenvalue, E is the block matrix whose (i, j)-th block is $\tilde{C}_{ij}^{\alpha}$ for $i \neq j$ and zero on the diagonal, and $F = \operatorname{diag}(C_{11}^{\beta}, \ldots, C_{PP}^{\beta})$;
Step 6) Considering that the auto-covariance matrices may be singular, introduce a regularization parameter η on the basis of step 5) and establish the regularized optimization model

$\max_{\omega_1, \ldots, \omega_P} \sum_{i=1}^{P} \sum_{j=1, j \neq i}^{P} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^T (C_{ii}^{\beta} + \eta I_i) \omega_i = 1.$

Introducing the Lagrange multiplier method yields the generalized eigenvalue problem

$E\omega = \mu (F + \eta I) \omega,$

where $I = \operatorname{diag}(I_1, \ldots, I_P)$ and each $I_i$ is the identity matrix of size $m_i \times m_i$, $i = 1, 2, \ldots, P$;
Step 7) Solve the generalized eigenvalue problem of step 6) for the eigenvectors corresponding to the d largest eigenvalues, forming the projection matrix of each group of data $W_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{id}]$, $i = 1, 2, \ldots, P$, $d \leq \min\{m_1, \ldots, m_P\}$;
Step 8) Use the projection matrix $W_i$ of each group of data to compute the low-dimensional projections of each group's training and test samples, form the fused features finally used for classification with a serial feature fusion strategy, and compute the recognition rate.
2. The supervised multi-set correlation feature fusion method based on spectral reconstruction according to claim 1, wherein the singular value decomposition of the intra-class correlation matrices $\tilde{C}_{ij}$ and the eigenvalue decomposition of the auto-covariance matrices $C_{ii}$ in step 3) comprise the following steps:
step 3-1) perform singular value decomposition on the intra-class correlation matrix $\tilde{C}_{ij}$:

$$\tilde{C}_{ij} = U_{ij}\,\Lambda_{ij}\,V_{ij}^{T},$$

where $U_{ij}$ and $V_{ij}$ are the left and right singular vector matrices of $\tilde{C}_{ij}$, respectively, $\Lambda_{ij}$ is the diagonal matrix of singular values of $\tilde{C}_{ij}$, and $r_{ij}=\operatorname{rank}(\tilde{C}_{ij})$;
step 3-2) perform eigenvalue decomposition on the auto-covariance matrix $C_{ii}$:

$$C_{ii} = Q_i\,\Lambda_i\,Q_i^{T},$$

where $Q_i$ is the eigenvector matrix of $C_{ii}$, $\Lambda_i$ is the eigenvalue matrix of $C_{ii}$, and $r_i=\operatorname{rank}(C_{ii})$.
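The two decompositions of steps 3-1) and 3-2) map directly onto standard library routines. A minimal NumPy sketch with synthetic stand-in matrices (the real $\tilde{C}_{ij}$ and $C_{ii}$ would come from the training data of step 2)):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))
C_ii = X @ X.T / X.shape[1]          # stand-in auto-covariance (symmetric PSD)
C_ij = rng.standard_normal((5, 5))   # stand-in inter-set intra-class matrix

# step 3-1): SVD of C_ij gives left/right singular vectors and singular values
U_ij, s_ij, Vt_ij = np.linalg.svd(C_ij)
r_ij = np.linalg.matrix_rank(C_ij)

# step 3-2): eigendecomposition of the symmetric matrix C_ii
lam_i, Q_i = np.linalg.eigh(C_ii)
r_i = np.linalg.matrix_rank(C_ii)

# both factorizations reconstruct their input
assert np.allclose(U_ij @ np.diag(s_ij) @ Vt_ij, C_ij)
assert np.allclose(Q_i @ np.diag(lam_i) @ Q_i.T, C_ii)
```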
3. The supervised multi-set correlation feature fusion method based on spectral reconstruction according to claim 1 or 2, wherein constructing the fractional order inter-set intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional order auto-covariance matrices $C_{ii}^{\beta}$ in step 4) comprises the following steps:
step 4-1) assume $\alpha$ is a fraction satisfying $0\le\alpha\le 1$, and define the fractional order intra-class correlation matrix $\tilde{C}_{ij}^{\alpha}$ as:

$$\tilde{C}_{ij}^{\alpha} = U_{ij}\,\Lambda_{ij}^{\alpha}\,V_{ij}^{T},$$

where $\Lambda_{ij}^{\alpha}$ is obtained from $\Lambda_{ij}$ by raising each of its $r_{ij}$ nonzero singular values to the power $\alpha$, and $U_{ij}$, $V_{ij}$ and $r_{ij}$ are defined in step 3-1);
step 4-2) assume $\beta$ is a fraction satisfying $0\le\beta\le 1$, and define the fractional order auto-covariance matrix $C_{ii}^{\beta}$ as:

$$C_{ii}^{\beta} = Q_i\,\Lambda_i^{\beta}\,Q_i^{T},$$

where $\Lambda_i^{\beta}$ is obtained from $\Lambda_i$ by raising each of its $r_i$ nonzero eigenvalues to the power $\beta$, and $Q_i$ and $r_i$ are defined in step 3-2).
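The spectral reassignment of steps 4-1) and 4-2) keeps the singular/eigen vectors and raises only the nonzero spectrum to a fractional power. A rough NumPy sketch, assuming a symmetric positive semidefinite input for the eigenvalue case; the function names are illustrative, not from the patent:

```python
import numpy as np

def fractional_power_svd(C, alpha):
    """Step 4-1) sketch: keep U and V, raise each nonzero singular value
    of C to the power alpha."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U @ np.diag(s ** alpha) @ Vt

def fractional_power_eig(C, beta, eps=1e-10):
    """Step 4-2) sketch: keep the eigenvectors of the symmetric PSD matrix C,
    raise each nonzero eigenvalue to the power beta, zero out the rest."""
    w, Q = np.linalg.eigh(C)
    w = np.clip(w, 0.0, None)                 # guard against tiny negative noise
    w = np.where(w > eps, w ** beta, 0.0)     # fractional reassignment of the spectrum
    return Q @ np.diag(w) @ Q.T
```

With $\alpha=\beta=1$ the original matrices are recovered, so the fractional model contains the non-fractional one as a special case.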
CN202110235178.4A 2020-11-20 2021-03-03 Method for fusing supervision multi-set related features based on spectrum reconstruction Active CN112966735B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011307376 2020-11-20
CN2020113073769 2020-11-20

Publications (2)

Publication Number Publication Date
CN112966735A true CN112966735A (en) 2021-06-15
CN112966735B CN112966735B (en) 2023-09-12

Family

ID=76276287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110235178.4A Active CN112966735B (en) 2020-11-20 2021-03-03 Method for fusing supervision multi-set related features based on spectrum reconstruction

Country Status (1)

Country Link
CN (1) CN112966735B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887509A (en) * 2021-10-25 2022-01-04 济南大学 Rapid multi-modal video face recognition method based on image set
CN114510966A (en) * 2022-01-14 2022-05-17 电子科技大学 End-to-end brain causal network construction method based on graph neural network

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109450499A (en) * 2018-12-13 2019-03-08 电子科技大学 A kind of robust Beamforming Method estimated based on steering vector and spatial power
WO2020010602A1 (en) * 2018-07-13 2020-01-16 深圳大学 Face recognition and construction method and system based on non-linear non-negative matrix decomposition, and storage medium
US20200272422A1 (en) * 2017-10-13 2020-08-27 Nippon Telegraph And Telephone Corporation Synthetic data generation apparatus, method for the same, and program
CN111611963A (en) * 2020-05-29 2020-09-01 扬州大学 Face recognition method based on neighbor preserving canonical correlation analysis


Non-Patent Citations (1)

Title
HUI Xiaofeng; LI Bingna: "Research on determining the optimal dimension of multivariate GARCH models based on random matrix theory", Operations Research and Management Science, no. 04 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN113887509A (en) * 2021-10-25 2022-01-04 济南大学 Rapid multi-modal video face recognition method based on image set
CN113887509B (en) * 2021-10-25 2022-06-03 济南大学 Rapid multi-modal video face recognition method based on image set
CN114510966A (en) * 2022-01-14 2022-05-17 电子科技大学 End-to-end brain causal network construction method based on graph neural network

Also Published As

Publication number Publication date
CN112966735B (en) 2023-09-12

Similar Documents

Publication Publication Date Title
CN107633513B (en) 3D image quality measuring method based on deep learning
Dubes et al. Random field models in image analysis
CN112116017B (en) Image data dimension reduction method based on kernel preservation
CN107578007A (en) A kind of deep learning face identification method based on multi-feature fusion
CN111695456B (en) Low-resolution face recognition method based on active discriminant cross-domain alignment
Shrivastava et al. Learning discriminative dictionaries with partially labeled data
CN107292225B (en) Face recognition method
CN110942091A (en) Semi-supervised few-sample image classification method for searching reliable abnormal data center
CN112966735A (en) Supervision multi-set correlation feature fusion method based on spectral reconstruction
CN112613536A (en) Near infrared spectrum diesel grade identification method based on SMOTE and deep learning
CN110956113B (en) Robust face recognition method based on secondary cooperation representation identification projection
CN110399814B (en) Face recognition method based on local linear representation field adaptive measurement
CN114898167A (en) Multi-view subspace clustering method and system based on inter-view difference detection
CN113191206B (en) Navigator signal classification method, device and medium based on Riemann feature migration
CN111079657B (en) Face recognition method based on semi-supervised linear regression
CN111611963B (en) Face recognition method based on neighbor preservation canonical correlation analysis
CN112966734B (en) Discrimination multiple set typical correlation analysis method based on fractional order spectrum
KR102225586B1 (en) System and Merhod for Log Euclidean Metric Learning using Riemannian Submanifold Framework on Symmetric Positive Definite Manifolds
Kobayashi Generalized mutual subspace based methods for image set classification
Zheng et al. Limit results for distributed estimation of invariant subspaces in multiple networks inference and PCA
CN114419382A (en) Method and system for embedding picture of unsupervised multi-view image
CN114943862A (en) Two-stage image classification method based on structural analysis dictionary learning
CN110399885B (en) Image target classification method based on local geometric perception
CN113095270A (en) Unsupervised cross-library micro-expression identification method
CN112241680A (en) Multi-mode identity authentication method based on vein similar image knowledge migration network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant