CN112966735B - Method for supervised multiset correlation feature fusion based on spectral reconstruction - Google Patents

Method for supervised multiset correlation feature fusion based on spectral reconstruction

Info

Publication number
CN112966735B
CN112966735B (application CN202110235178.4A)
Authority
CN
China
Prior art keywords: matrix, steps, inter, method comprises, group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110235178.4A
Other languages
Chinese (zh)
Other versions
CN112966735A (en)
Inventor
袁运浩
朱莉
李云
强继朋
朱毅
李斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangzhou University
Original Assignee
Yangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangzhou University filed Critical Yangzhou University
Publication of CN112966735A
Application granted
Publication of CN112966735B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • Algebra (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Operations Research (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for supervised multiset correlation feature fusion based on spectral reconstruction, which comprises the following steps: 1) defining the projection directions of the training sample sets; 2) computing the inter-group intra-class correlation matrices and the auto-covariance matrices of the training samples; 3) performing singular value decomposition on the inter-group intra-class correlation matrices and eigenvalue decomposition on the auto-covariance matrices; 4) reconstructing the fractional-order inter-group intra-class correlation matrices and the fractional-order auto-covariance matrices; 5) constructing the optimization model of FDMCCA; 6) solving for the eigenvector matrix to form the projection matrices; 7) fusing the dimension-reduced features; 8) selecting different numbers of images as the training set and the test set, and computing the recognition rate. The invention can effectively solve the information fusion problem of multi-view data; meanwhile, the introduction of fractional-order parameters weakens the influence of noise interference and limited training samples and improves the accuracy of system recognition.

Description

Method for supervised multiset correlation feature fusion based on spectral reconstruction
Technical Field
The invention relates to the field of pattern recognition, and in particular to a method for supervised multiset correlation feature fusion based on spectral reconstruction.
Background
Canonical correlation analysis (CCA) studies the linear correlation between two sets of data. CCA linearly projects two sets of random variables into a low-dimensional subspace in which their correlation is maximal. Researchers use CCA to reduce the dimensionality of two sets of feature vectors (i.e., two views) simultaneously to obtain two low-dimensional feature representations, which are then fused to form discriminative features, thereby improving pattern classification accuracy. The CCA method is simple and effective, and is widely applied to blind source separation, computer vision, speech recognition and other fields.
Canonical correlation analysis is an unsupervised linear learning method. In practice, however, the dependency between two views often cannot be represented in a simple linear manner. If a nonlinear relationship exists between the two views, it is not appropriate to handle it with CCA. Kernel canonical correlation analysis (KCCA), a nonlinear extension of CCA, effectively addresses this problem and works well on simple nonlinear cases. When more complex nonlinear problems are encountered, deep canonical correlation analysis (Deep CCA), which combines a deep neural network with CCA, can learn complex nonlinear relationships between the two views. From another perspective of nonlinear extension, the notion of locality can be incorporated into CCA, yielding the locality preserving canonical correlation analysis (LPCCA) method. LPCCA can discover the local manifold structure of each view, which is useful for data visualization.
Although CCA achieves good recognition results on some pattern recognition problems, it is an unsupervised learning method that does not make use of class label information; this not only wastes available information but also limits recognition performance. To solve this problem, researchers proposed discriminant canonical correlation analysis (DCCA), which takes the inter-class and intra-class information of the samples into account. DCCA maximizes the correlation between sample features of the same class and minimizes the correlation between sample features of different classes, thereby improving pattern classification accuracy.
The above methods are all designed for analyzing the relationship between two views and are limited in applications involving three or more views. Multiset canonical correlation analysis (MCCA) is a multi-view extension of CCA. MCCA retains CCA's property of maximizing the correlation between views, makes up for CCA's inability to handle more than two views, and improves recognition performance. Researchers have combined MCCA with DCCA and proposed discriminant multiset canonical correlation analysis (DMCCA); experiments show that this method has good recognition performance in face recognition, handwritten digit recognition, emotion recognition and other tasks.
When noise interference is present or the number of training samples is small, the auto-covariance and cross-covariance matrices in CCA deviate from their true values, degrading the final recognition result. To address this problem, researchers combined the fractional-order idea with CCA, reconstructed the auto-covariance and cross-covariance matrices by introducing fractional-order parameters, and proposed fractional-order embedded canonical correlation analysis (FECCA), which weakens the influence of such deviations and improves recognition performance.
Traditional canonical correlation analysis mainly studies the correlation between two views; it is an unsupervised learning method that does not consider class label information and cannot directly handle high-dimensional data from more than two views.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a supervised multiset correlation feature fusion method based on spectral reconstruction (FDMCCA), which can effectively solve the multi-view feature fusion problem; meanwhile, the introduction of fractional-order parameters weakens the influence of noise interference and limited training samples and improves the accuracy of system recognition.
The purpose of the invention is achieved in the following way: a method for supervised multiset correlation feature fusion based on spectral reconstruction comprises the following steps:
Step 1) Assume that there are P groups of training samples, each group having zero mean and c classes, denoted
$X_i = \left[x^{(i)}_{11}, \ldots, x^{(i)}_{1n_1}, \ldots, x^{(i)}_{c1}, \ldots, x^{(i)}_{cn_c}\right] \in \mathbb{R}^{m_i \times n}, \quad i = 1, 2, \ldots, P,$
where $x^{(i)}_{jk}$ denotes the kth sample of the jth class in the ith group, $m_i$ denotes the feature dimension of the ith group of data, $n_j$ denotes the number of samples in the jth class, and $n = n_1 + \cdots + n_c$; define the projection direction of the ith training sample set as $\omega_i \in \mathbb{R}^{m_i}$;
Step 2) calculating an intra-class correlation matrix of the inter-group training samplesSum-of-covariance matrixWherein->A matrix representing 1 for each element;
Step 3) Perform singular value decomposition on the inter-group intra-class correlation matrices $\tilde{C}_{ij}$ obtained in step 2) to obtain the left singular vector matrices, right singular vector matrices and singular value matrices, and perform eigenvalue decomposition on the auto-covariance matrices $C_{ii}$ to obtain the eigenvector matrices and eigenvalue matrices;
Step 4) Select appropriate fractional-order parameters α and β, reassign the singular value matrices and eigenvalue matrices obtained in step 3), and construct the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$;
Step 5) Construct the optimization model of FDMCCA as
$\max_{\omega_1, \ldots, \omega_P} \sum_{i \neq j} \omega_i^{T} \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^{T} C_{ii}^{\beta} \omega_i = 1.$
Introducing the Lagrange multiplier method yields the generalized eigenvalue problem $E\omega = \mu F\omega$, from which the projection direction $\omega = [\omega_1^{T}, \ldots, \omega_P^{T}]^{T}$ is computed, where μ is the eigenvalue, $E$ is the block matrix whose off-diagonal $(i, j)$ blocks are $\tilde{C}_{ij}^{\alpha}$ and whose diagonal blocks are zero, and $F = \mathrm{diag}\left(C_{11}^{\beta}, \ldots, C_{PP}^{\beta}\right)$;
Step 6) Considering that the auto-covariance matrices may be singular, introduce a regularization parameter η on the basis of step 5) and establish the regularized optimization model as
$\max_{\omega_1, \ldots, \omega_P} \sum_{i \neq j} \omega_i^{T} \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^{T} \left(C_{ii}^{\beta} + \eta I_{m_i}\right) \omega_i = 1.$
Introducing the Lagrange multiplier method yields the generalized eigenvalue problem
$E\omega = \mu \left(F + \eta I\right)\omega,$
where $I_{m_i}$ is the identity matrix of size $m_i \times m_i$, $i = 1, 2, \ldots, P$;
Step 7) According to the generalized eigenvalue problem in step 6), solve for the eigenvectors corresponding to the first d largest eigenvalues, thereby forming the projection matrix of each group of data, $W_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{id}]$, $i = 1, 2, \ldots, P$, $d \leq \min\{m_1, \ldots, m_P\}$;
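One way to realize steps 5) to 7) is to assemble the block matrices E and F and solve the regularized generalized eigenvalue problem with SciPy. The sketch below assumes the fractional-order matrices from step 4) are available and that E is kept symmetric by construction; the function and variable names are illustrative, not from the patent:

    import numpy as np
    from scipy.linalg import eigh

    def solve_fdmcca(C_frac, C_auto_frac, dims, eta, d):
        # C_frac[(i, j)]: fractional-order inter-group intra-class correlation matrices (i != j);
        # C_auto_frac[i]: fractional-order auto-covariance matrices; dims[i] = m_i.
        P = len(dims)
        offs = np.concatenate(([0], np.cumsum(dims)))
        E = np.zeros((offs[-1], offs[-1]))
        F = np.zeros((offs[-1], offs[-1]))
        for i in range(P):
            F[offs[i]:offs[i+1], offs[i]:offs[i+1]] = C_auto_frac[i] + eta * np.eye(dims[i])
            for j in range(P):
                if i != j:
                    E[offs[i]:offs[i+1], offs[j]:offs[j+1]] = C_frac[(i, j)]
        # Generalized eigenvalue problem E w = mu F w; keep the eigenvectors of the d largest eigenvalues.
        mu, W = eigh(E, F)
        W = W[:, np.argsort(mu)[::-1][:d]]
        # Split the stacked eigenvectors into one projection matrix W_i per group of data.
        return [W[offs[i]:offs[i+1], :] for i in range(P)]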
Step 8) Use the projection matrix $W_i$ of each group of data to compute the low-dimensional projections of each group of training samples and test samples, adopt a serial feature fusion strategy to form the fused features finally used for classification, and compute the recognition rate.
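Step 8) can be sketched as projecting each view, concatenating the low-dimensional features (serial fusion), and scoring the result; the 1-nearest-neighbour classifier used here follows the embodiment described later, and the names are illustrative:

    import numpy as np

    def fuse(views, W_list):
        # Project each view and concatenate the low-dimensional features (serial fusion).
        return np.vstack([W.T @ X for W, X in zip(W_list, views)])

    def recognition_rate(train_fused, train_labels, test_fused, test_labels):
        # 1-nearest-neighbour classification on the fused features (columns are samples).
        dists = np.linalg.norm(test_fused.T[:, None, :] - train_fused.T[None, :, :], axis=2)
        pred = np.asarray(train_labels)[np.argmin(dists, axis=1)]
        return float(np.mean(pred == np.asarray(test_labels)))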
Further, the singular value decomposition of the inter-group intra-class correlation matrices $\tilde{C}_{ij}$ and the eigenvalue decomposition of the auto-covariance matrices $C_{ii}$ in step 3) comprise the following steps:
Step 3-1) Perform singular value decomposition on the inter-group intra-class correlation matrix:
$\tilde{C}_{ij} = U_{ij} \Lambda_{ij} V_{ij}^{T},$
where $U_{ij}$ and $V_{ij}$ are respectively the left and right singular vector matrices of $\tilde{C}_{ij}$, $\Lambda_{ij}$ is the diagonal matrix of the singular values of $\tilde{C}_{ij}$, and $r_{ij} = \mathrm{rank}(\tilde{C}_{ij})$.
Step 3-2) Perform eigenvalue decomposition on the auto-covariance matrix:
$C_{ii} = Q_i \Lambda_i Q_i^{T},$
where $Q_i$ is the eigenvector matrix of $C_{ii}$, $\Lambda_i$ is the diagonal matrix of the eigenvalues of $C_{ii}$, and $r_i = \mathrm{rank}(C_{ii})$.
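A minimal NumPy sketch of the decompositions in steps 3-1) and 3-2); the helper name `decompose` and the explicit rank truncation are illustrative assumptions, not taken from the patent:

    import numpy as np

    def decompose(C_tilde_ij, C_ii):
        # Step 3-1): singular value decomposition of the inter-group intra-class correlation matrix.
        U, s, Vt = np.linalg.svd(C_tilde_ij, full_matrices=False)
        r_ij = np.linalg.matrix_rank(C_tilde_ij)
        U, s, Vt = U[:, :r_ij], s[:r_ij], Vt[:r_ij, :]      # keep the r_ij leading singular triplets
        # Step 3-2): eigenvalue decomposition of the auto-covariance matrix.
        lam, Q = np.linalg.eigh(C_ii)
        order = np.argsort(lam)[::-1]                        # sort eigenvalues in descending order
        lam, Q = lam[order], Q[:, order]
        r_i = np.linalg.matrix_rank(C_ii)
        return (U, s, Vt, r_ij), (Q[:, :r_i], lam[:r_i], r_i)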
Further, constructing the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$ described in step 4) comprises the following steps:
Step 4-1) Assuming that α is a fraction satisfying 0 ≤ α ≤ 1, define the fractional-order inter-group intra-class correlation matrix as
$\tilde{C}_{ij}^{\alpha} = U_{ij} \Lambda_{ij}^{\alpha} V_{ij}^{T},$
where $\Lambda_{ij}^{\alpha}$ is obtained by raising each singular value on the diagonal of $\Lambda_{ij}$ to the power α; $U_{ij}$, $V_{ij}$ and $r_{ij}$ are defined in step 3-1).
Step 4-2) Assuming that β is a fraction satisfying 0 ≤ β ≤ 1, define the fractional-order auto-covariance matrix as
$C_{ii}^{\beta} = Q_i \Lambda_i^{\beta} Q_i^{T},$
where $\Lambda_i^{\beta}$ is obtained by raising each eigenvalue on the diagonal of $\Lambda_i$ to the power β; $Q_i$ and $r_i$ are defined in step 3-2).
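The fractional-order reconstruction in steps 4-1) and 4-2) then amounts to raising the retained singular values and eigenvalues to the powers α and β; a sketch under the same illustrative assumptions as above:

    import numpy as np

    def fractional_correlation(U, s, Vt, alpha):
        # Step 4-1): C~_ij^alpha = U_ij diag(s^alpha) V_ij^T with 0 <= alpha <= 1.
        return U @ np.diag(s ** alpha) @ Vt

    def fractional_autocovariance(Q, lam, beta):
        # Step 4-2): C_ii^beta = Q_i diag(lambda^beta) Q_i^T with 0 <= beta <= 1.
        return Q @ np.diag(lam ** beta) @ Q.T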
Compared with the prior art, the invention has the following beneficial effects: on the basis of canonical correlation analysis, the method combines fractional-order embedded canonical correlation analysis (FECCA) with discriminant multiset canonical correlation analysis (DMCCA), makes full use of class label information, and can handle the information fusion problem of more than two views, so it can be applied to multi-view feature fusion; the introduction of fractional-order parameters weakens the influence of noise interference and limited training samples and improves the accuracy of face recognition; when the number of training samples is small, the invention still achieves a good recognition effect; the method is suitable for dimension reduction and feature fusion of multiple views; and because class label information is exploited, the recognition performance of the invention is better than that of comparable methods.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a line graph of recognition rate versus feature dimension for the invention and the comparison methods.
FIG. 3 is a graph of recognition rates for different numbers of training samples according to the present invention.
Detailed Description
As shown in FIG. 1, a method for supervised multiset correlation feature fusion based on spectral reconstruction comprises the following steps:
Step 1) Assume that there are P groups of training samples, each group having zero mean and c classes, denoted
$X_i = \left[x^{(i)}_{11}, \ldots, x^{(i)}_{1n_1}, \ldots, x^{(i)}_{c1}, \ldots, x^{(i)}_{cn_c}\right] \in \mathbb{R}^{m_i \times n}, \quad i = 1, 2, \ldots, P,$
where $x^{(i)}_{jk}$ denotes the kth sample of the jth class in the ith group, $m_i$ denotes the feature dimension of the ith group of data, $n_j$ denotes the number of samples in the jth class, and $n = n_1 + \cdots + n_c$; define the projection direction of the ith training sample set as $\omega_i \in \mathbb{R}^{m_i}$;
Step 2) calculating an intra-class correlation matrix of the inter-group training samplesSum-of-covariance matrixWherein->A matrix representing 1 for each element;
Step 3) Perform singular value decomposition on the inter-group intra-class correlation matrices $\tilde{C}_{ij}$ obtained in step 2) to obtain the left singular vector matrices, right singular vector matrices and singular value matrices, and perform eigenvalue decomposition on the auto-covariance matrices $C_{ii}$ to obtain the eigenvector matrices and eigenvalue matrices;
Step 3-1) Perform singular value decomposition on the inter-group intra-class correlation matrix:
$\tilde{C}_{ij} = U_{ij} \Lambda_{ij} V_{ij}^{T},$
where $U_{ij}$ and $V_{ij}$ are respectively the left and right singular vector matrices of $\tilde{C}_{ij}$, $\Lambda_{ij}$ is the diagonal matrix of the singular values of $\tilde{C}_{ij}$, and $r_{ij} = \mathrm{rank}(\tilde{C}_{ij})$.
Step 3-2) Perform eigenvalue decomposition on the auto-covariance matrix:
$C_{ii} = Q_i \Lambda_i Q_i^{T},$
where $Q_i$ is the eigenvector matrix of $C_{ii}$, $\Lambda_i$ is the diagonal matrix of the eigenvalues of $C_{ii}$, and $r_i = \mathrm{rank}(C_{ii})$.
Step 4) selecting proper fractional order parameters alpha and beta, reassigning the singular value matrix and the eigenvalue matrix obtained in the step 3), and constructing a fractional order inter-class correlation matrixAnd fractional order autocovariance matrix +.>
Step 4-1) defines inter-class correlation matrices between fractional order groups assuming that α is a fraction and satisfies 0.ltoreq.α.ltoreq.1The method comprises the following steps:
wherein the method comprises the steps ofU ij And V ij R ij The definition is given in step 3-1);
step 4-2) defining a fractional order autocovariance matrix assuming that β is a fraction and satisfying 0.ltoreq.β.ltoreq.1The method comprises the following steps:
wherein the method comprises the steps ofQ i And r i The definition of (2) is given in step 3-2).
Step 5) Construct the optimization model of FDMCCA as
$\max_{\omega_1, \ldots, \omega_P} \sum_{i \neq j} \omega_i^{T} \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^{T} C_{ii}^{\beta} \omega_i = 1.$
Introducing the Lagrange multiplier method yields the generalized eigenvalue problem $E\omega = \mu F\omega$, from which the projection direction $\omega = [\omega_1^{T}, \ldots, \omega_P^{T}]^{T}$ is computed, where μ is the eigenvalue, $E$ is the block matrix whose off-diagonal $(i, j)$ blocks are $\tilde{C}_{ij}^{\alpha}$ and whose diagonal blocks are zero, and $F = \mathrm{diag}\left(C_{11}^{\beta}, \ldots, C_{PP}^{\beta}\right)$;
Step 6) Considering that the auto-covariance matrices may be singular, introduce a regularization parameter η on the basis of step 5) and establish the regularized optimization model as
$\max_{\omega_1, \ldots, \omega_P} \sum_{i \neq j} \omega_i^{T} \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^{T} \left(C_{ii}^{\beta} + \eta I_{m_i}\right) \omega_i = 1.$
Introducing the Lagrange multiplier method yields the generalized eigenvalue problem
$E\omega = \mu \left(F + \eta I\right)\omega,$
where $I_{m_i}$ is the identity matrix of size $m_i \times m_i$, $i = 1, 2, \ldots, P$;
Step 7) According to the generalized eigenvalue problem in step 6), solve for the eigenvectors corresponding to the first d largest eigenvalues, thereby forming the projection matrix of each group of data, $W_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{id}]$, $i = 1, 2, \ldots, P$, $d \leq \min\{m_1, \ldots, m_P\}$;
Step 8) Use the projection matrix $W_i$ of each group of data to compute the low-dimensional projections of each group of training samples and test samples, adopt a serial feature fusion strategy to form the fused features finally used for classification, and compute the recognition rate.
The invention is further illustrated by the following example. Take the CMU-PIE face database as an example: it contains face images of 68 individuals, each image of size 64 × 64. In this experiment, the first 10 images of each person are used as the training set and the last 14 images as the test set. The input face image data are read to form three different features: feature 1 is the raw image data, feature 2 is the median-filtered image data, and feature 3 is the mean-filtered image data. Principal component analysis is used to reduce the dimensionality of each feature, forming the final three groups of feature data.
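For this embodiment, the three feature views could be built roughly as follows; the 3 × 3 filter window and the use of PCA via SVD are illustrative assumptions, since the patent does not specify the filter size or the PCA implementation:

    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    def build_views(images, n_components):
        # images: array of shape (num_samples, 64, 64); returns three PCA-reduced views,
        # each of shape (n_components, num_samples) with zero-mean columns.
        n = len(images)
        raw = images.reshape(n, -1).astype(float)                                     # feature 1: raw pixels
        med = np.stack([median_filter(im, size=3) for im in images]).reshape(n, -1)   # feature 2: median filtered
        avg = np.stack([uniform_filter(im.astype(float), size=3) for im in images]).reshape(n, -1)  # feature 3: mean filtered
        views = []
        for feat in (raw, med, avg):
            feat = feat - feat.mean(axis=0)                       # centre each pixel dimension over the samples
            _, _, Vt = np.linalg.svd(feat, full_matrices=False)   # PCA via SVD of the centred data
            views.append((feat @ Vt[:n_components].T).T)          # keep the leading components, samples as columns
        return views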
Step 1) Construct three groups of zero-mean data $X_i$, $i = 1, 2, 3$, and define the projection direction of each training sample set as $\omega_i \in \mathbb{R}^{m_i}$;
Step 2) The goal of FDMCCA is to maximize the correlation of samples within a class while minimizing the correlation of samples between classes. Compute the inter-group intra-class correlation matrices of the training samples, $\tilde{C}_{ij} = X_i A X_j^{T}$ ($i \neq j$), and the auto-covariance matrices $C_{ii} = X_i X_i^{T}$, where $A = \mathrm{diag}\left(\mathbf{1}_{n_1}, \ldots, \mathbf{1}_{n_c}\right)$ is block-diagonal and $\mathbf{1}_{n_j}$ denotes an $n_j \times n_j$ matrix in which every element is 1;
Step 3) Perform singular value decomposition on the inter-group intra-class correlation matrices $\tilde{C}_{ij}$ obtained in step 2) to obtain the left singular vector matrices, right singular vector matrices and singular value matrices, and perform eigenvalue decomposition on the auto-covariance matrices $C_{ii}$ to obtain the eigenvector matrices and eigenvalue matrices;
Step 3-1) Perform singular value decomposition on the inter-group intra-class correlation matrix:
$\tilde{C}_{ij} = U_{ij} \Lambda_{ij} V_{ij}^{T},$
where $U_{ij}$ and $V_{ij}$ are respectively the left and right singular vector matrices of $\tilde{C}_{ij}$, $\Lambda_{ij}$ is the diagonal matrix of the singular values of $\tilde{C}_{ij}$, and $r_{ij} = \mathrm{rank}(\tilde{C}_{ij})$.
Step 3-2) Perform eigenvalue decomposition on the auto-covariance matrix:
$C_{ii} = Q_i \Lambda_i Q_i^{T},$
where $Q_i$ is the eigenvector matrix of $C_{ii}$, $\Lambda_i$ is the diagonal matrix of the eigenvalues of $C_{ii}$, and $r_i = \mathrm{rank}(C_{ii})$.
Step 4) Set the value range of the fractional-order parameters α and β to {0.1, 0.2, …, 1}, select appropriate α and β, reassign the singular value matrices and eigenvalue matrices obtained in step 3), and construct the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$;
Step 4-1) Assuming that α is a fraction satisfying 0 ≤ α ≤ 1, define the fractional-order inter-group intra-class correlation matrix as
$\tilde{C}_{ij}^{\alpha} = U_{ij} \Lambda_{ij}^{\alpha} V_{ij}^{T},$
where $\Lambda_{ij}^{\alpha}$ is obtained by raising each singular value on the diagonal of $\Lambda_{ij}$ to the power α; $U_{ij}$, $V_{ij}$ and $r_{ij}$ are defined in step 3-1).
Step 4-2) Assuming that β is a fraction satisfying 0 ≤ β ≤ 1, define the fractional-order auto-covariance matrix as
$C_{ii}^{\beta} = Q_i \Lambda_i^{\beta} Q_i^{T},$
where $\Lambda_i^{\beta}$ is obtained by raising each eigenvalue on the diagonal of $\Lambda_i$ to the power β; $Q_i$ and $r_i$ are defined in step 3-2).
Step 5) Construct the optimization model of FDMCCA as
$\max_{\omega_1, \omega_2, \omega_3} \sum_{i \neq j} \omega_i^{T} \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{3} \omega_i^{T} C_{ii}^{\beta} \omega_i = 1.$
By introducing the Lagrange multiplier method, the generalized eigenvalue problem $E\omega = \mu F\omega$ can be obtained, and the projection direction $\omega = [\omega_1^{T}, \omega_2^{T}, \omega_3^{T}]^{T}$ can be computed, where μ is the eigenvalue, $E$ is the block matrix whose off-diagonal $(i, j)$ blocks are $\tilde{C}_{ij}^{\alpha}$ and whose diagonal blocks are zero, and $F = \mathrm{diag}\left(C_{11}^{\beta}, C_{22}^{\beta}, C_{33}^{\beta}\right)$;
Step 6) Considering that the auto-covariance matrices may be singular, introduce a regularization parameter η on the basis of step 5), with η taking values in $\{10^{-5}, 10^{-4}, \ldots, 10\}$, and establish the regularized optimization model as
$\max_{\omega_1, \omega_2, \omega_3} \sum_{i \neq j} \omega_i^{T} \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{3} \omega_i^{T} \left(C_{ii}^{\beta} + \eta I_{m_i}\right) \omega_i = 1.$
Introducing the Lagrange multiplier method, the generalized eigenvalue problem $E\omega = \mu \left(F + \eta I\right)\omega$ can be obtained, where $I_{m_i}$ is the identity matrix of size $m_i \times m_i$, $i = 1, 2, 3$;
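The selection over α, β ∈ {0.1, …, 1} and η ∈ {10⁻⁵, 10⁻⁴, …, 10} (read here as successive powers of ten) can be organized as a simple grid search; `evaluate_fdmcca` below is a hypothetical wrapper around steps 2) to 8) that returns the recognition rate for one parameter setting:

    import itertools
    import numpy as np

    alphas = betas = np.round(np.arange(0.1, 1.01, 0.1), 1)        # fractional-order parameter grid
    etas = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0]               # regularization parameter grid (assumed powers of ten)
    best_acc, best_params = -1.0, None
    for alpha, beta, eta in itertools.product(alphas, betas, etas):
        acc = evaluate_fdmcca(alpha, beta, eta)                     # hypothetical wrapper around steps 2)-8)
        if acc > best_acc:
            best_acc, best_params = acc, (alpha, beta, eta)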
Step 7) According to the generalized eigenvalue problem in step 6), obtain the projection direction ω; the projections of the test samples onto the projection directions are computed, a serial feature fusion strategy is adopted, classification is performed with a nearest neighbor classifier, and the recognition rate is computed. Specifically, solve for the eigenvectors corresponding to the first d largest eigenvalues, thereby forming the projection matrix of each group of data, $W_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{id}]$, $i = 1, 2, 3$, $d \leq \min\{m_1, m_2, m_3\}$;
Step 8) Use the projection matrix $W_i$ of each group of data to compute the low-dimensional projections of each group of training samples and test samples, adopt a serial feature fusion strategy to form the fused features finally used for classification, classify with a nearest neighbor classifier, and compute the recognition rate. The recognition rate results are shown in Table 1 and FIG. 2 (BASELINE refers to the classification result obtained after the three features are concatenated in series). As can be seen from Table 1 and FIG. 2, the FDMCCA method proposed by the invention performs better than the other methods. The reason is that, compared with MCCA, CCA and BASELINE, FDMCCA is a supervised learning method with prior information and can therefore obtain a better recognition effect; compared with DMCCA, FDMCCA introduces the fractional-order idea to correct the covariance deviations caused by noise interference and other factors, which improves the recognition accuracy.
TABLE 1 identification rates on CMU-PIE datasets
Method Recognition rate (%)
MCCA 84.09
CCA (feature 1 + feature 2) 71.43
CCA (feature 1 + feature 3) 74.03
CCA (feature 2 + feature 3) 76.30
BASELINE 48.05
DMCCA 79.22
FDMCCA 86.04
To examine the influence of the number of training samples on the recognition rate, the fractional-order parameters α, β and the regularization parameter η are fixed, and different numbers of images are selected as the training set and the test set respectively; the recognition rates are shown in FIG. 3. As can be seen from FIG. 3, FDMCCA works better when there are fewer training samples.
In summary, the invention, based on the CCA method, introduces the fractional-order embedding idea and provides a supervised multiset correlation feature fusion method based on spectral reconstruction (FDMCCA). By introducing fractional-order parameters, the method can correct the deviations of the intra-class correlation matrices and auto-covariance matrices caused by noise interference and limited training samples. Meanwhile, the method makes full use of class label information and can handle the information fusion problem of more than two views, giving it a wider application range and better recognition performance.
The invention is not limited to the above embodiment; based on the technical solution disclosed herein, a person skilled in the art may, without creative effort, make substitutions and modifications to some of the technical features according to the disclosed technical content, and all such substitutions and modifications fall within the protection scope of the invention.

Claims (3)

1. A method for supervised multiset correlation feature fusion based on spectral reconstruction, characterized by comprising the following steps:
Step 1) Assume that there are P groups of training samples, the samples being face images, each group having zero mean and c classes, denoted
$X_i = \left[x^{(i)}_{11}, \ldots, x^{(i)}_{1n_1}, \ldots, x^{(i)}_{c1}, \ldots, x^{(i)}_{cn_c}\right] \in \mathbb{R}^{m_i \times n}, \quad i = 1, 2, \ldots, P,$
where $x^{(i)}_{jk}$ denotes the kth sample of the jth class in the ith group, $m_i$ denotes the feature dimension of the ith group of data, $n_j$ denotes the number of samples in the jth class, and $n = n_1 + \cdots + n_c$; define the projection direction of the ith training sample set as $\omega_i \in \mathbb{R}^{m_i}$;
Step 2) calculating inter-group inter-class correlation matrix of inter-group training samplesSum-of-covariance matrixWherein-> A matrix representing 1 for each element;
Step 3) Perform singular value decomposition on the inter-group intra-class correlation matrices $\tilde{C}_{ij}$ obtained in step 2) to obtain the left singular vector matrices, right singular vector matrices and singular value matrices, and perform eigenvalue decomposition on the auto-covariance matrices $C_{ii}$ to obtain the eigenvector matrices and eigenvalue matrices;
Step 4) Select appropriate fractional-order parameters α and β, reassign the singular value matrices and eigenvalue matrices obtained in step 3), and construct the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$;
Step 5) Construct the optimization model of FDMCCA as
$\max_{\omega_1, \ldots, \omega_P} \sum_{i \neq j} \omega_i^{T} \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^{T} C_{ii}^{\beta} \omega_i = 1.$
Introducing the Lagrange multiplier method yields the generalized eigenvalue problem $E\omega = \mu F\omega$, from which the projection direction $\omega = [\omega_1^{T}, \ldots, \omega_P^{T}]^{T}$ is computed, where μ is the eigenvalue, $E$ is the block matrix whose off-diagonal $(i, j)$ blocks are $\tilde{C}_{ij}^{\alpha}$ and whose diagonal blocks are zero, and $F = \mathrm{diag}\left(C_{11}^{\beta}, \ldots, C_{PP}^{\beta}\right)$;
Step 6) Considering that the auto-covariance matrices may be singular, introduce a regularization parameter η on the basis of step 5) and establish the regularized optimization model as
$\max_{\omega_1, \ldots, \omega_P} \sum_{i \neq j} \omega_i^{T} \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^{T} \left(C_{ii}^{\beta} + \eta I_{m_i}\right) \omega_i = 1.$
Introducing the Lagrange multiplier method yields the generalized eigenvalue problem
$E\omega = \mu \left(F + \eta I\right)\omega,$
where $I_{m_i}$ is the identity matrix of size $m_i \times m_i$, $i = 1, 2, \ldots, P$;
Step 7) According to the generalized eigenvalue problem in step 6), solve for the eigenvectors corresponding to the first d largest eigenvalues, thereby forming the projection matrix of each group of data, $W_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{id}]$, $i = 1, 2, \ldots, P$, $d \leq \min\{m_1, \ldots, m_P\}$;
Step 8) Use the projection matrix $W_i$ of each group of data to compute the low-dimensional projections of each group of training samples and test samples, adopt a serial feature fusion strategy to form the fused features finally used for classification, and compute the recognition rate.
2. The method of claim 1, wherein the singular value decomposition of the inter-group intra-class correlation matrices $\tilde{C}_{ij}$ and the eigenvalue decomposition of the auto-covariance matrices $C_{ii}$ in step 3) comprise the following steps:
Step 3-1) Perform singular value decomposition on the inter-group intra-class correlation matrix:
$\tilde{C}_{ij} = U_{ij} \Lambda_{ij} V_{ij}^{T},$
where $U_{ij}$ and $V_{ij}$ are respectively the left and right singular vector matrices of $\tilde{C}_{ij}$, $\Lambda_{ij}$ is the diagonal matrix of the singular values of $\tilde{C}_{ij}$, and $r_{ij} = \mathrm{rank}(\tilde{C}_{ij})$.
Step 3-2) Perform eigenvalue decomposition on the auto-covariance matrix:
$C_{ii} = Q_i \Lambda_i Q_i^{T},$
where $Q_i$ is the eigenvector matrix of $C_{ii}$, $\Lambda_i$ is the diagonal matrix of the eigenvalues of $C_{ii}$, and $r_i = \mathrm{rank}(C_{ii})$.
3. The method for supervised multiset correlation feature fusion based on spectral reconstruction according to claim 1 or claim 2, wherein constructing the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$ in step 4) comprises the following steps:
Step 4-1) Assuming that α is a fraction satisfying 0 ≤ α ≤ 1, define the fractional-order inter-group intra-class correlation matrix as
$\tilde{C}_{ij}^{\alpha} = U_{ij} \Lambda_{ij}^{\alpha} V_{ij}^{T},$
where $\Lambda_{ij}^{\alpha}$ is obtained by raising each singular value on the diagonal of $\Lambda_{ij}$ to the power α; $U_{ij}$, $V_{ij}$ and $r_{ij}$ are defined in step 3-1).
Step 4-2) Assuming that β is a fraction satisfying 0 ≤ β ≤ 1, define the fractional-order auto-covariance matrix as
$C_{ii}^{\beta} = Q_i \Lambda_i^{\beta} Q_i^{T},$
where $\Lambda_i^{\beta}$ is obtained by raising each eigenvalue on the diagonal of $\Lambda_i$ to the power β; $Q_i$ and $r_i$ are defined in step 3-2).
CN202110235178.4A 2020-11-20 2021-03-03 Method for supervised multiset correlation feature fusion based on spectral reconstruction Active CN112966735B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011307376 2020-11-20
CN2020113073769 2020-11-20

Publications (2)

Publication Number Publication Date
CN112966735A CN112966735A (en) 2021-06-15
CN112966735B true CN112966735B (en) 2023-09-12

Family

ID=76276287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110235178.4A Active CN112966735B (en) 2020-11-20 2021-03-03 Method for supervised multiset correlation feature fusion based on spectral reconstruction

Country Status (1)

Country Link
CN (1) CN112966735B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887509B (en) * 2021-10-25 2022-06-03 济南大学 Rapid multi-modal video face recognition method based on image set
CN114510966B (en) * 2022-01-14 2023-04-28 电子科技大学 End-to-end brain causal network construction method based on graph neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109450499A (en) * 2018-12-13 2019-03-08 电子科技大学 A kind of robust Beamforming Method estimated based on steering vector and spatial power
WO2020010602A1 (en) * 2018-07-13 2020-01-16 深圳大学 Face recognition and construction method and system based on non-linear non-negative matrix decomposition, and storage medium
CN111611963A (en) * 2020-05-29 2020-09-01 扬州大学 Face recognition method based on neighbor preserving canonical correlation analysis

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111194448A (en) * 2017-10-13 2020-05-22 日本电信电话株式会社 Pseudo data generating device, method thereof, and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020010602A1 (en) * 2018-07-13 2020-01-16 深圳大学 Face recognition and construction method and system based on non-linear non-negative matrix decomposition, and storage medium
CN109450499A (en) * 2018-12-13 2019-03-08 电子科技大学 A kind of robust Beamforming Method estimated based on steering vector and spatial power
CN111611963A (en) * 2020-05-29 2020-09-01 扬州大学 Face recognition method based on neighbor preserving canonical correlation analysis

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on determining the optimal dimension of multivariate GARCH models based on random matrix theory; Hui Xiaofeng; Li Bingna; Operations Research and Management Science (No. 4); full text *

Also Published As

Publication number Publication date
CN112966735A (en) 2021-06-15

Similar Documents

Publication Publication Date Title
Zhang et al. Classification modeling method for near‐infrared spectroscopy of tobacco based on multimodal convolution neural networks
CN104573729B (en) A kind of image classification method based on core principle component analysis network
CN111695456B (en) Low-resolution face recognition method based on active discriminant cross-domain alignment
CN112966735B (en) Method for fusing supervision multi-set related features based on spectrum reconstruction
CN107169117B (en) Hand-drawn human motion retrieval method based on automatic encoder and DTW
CN110197205A (en) A kind of image-recognizing method of multiple features source residual error network
CN109993236A (en) Few sample language of the Manchus matching process based on one-shot Siamese convolutional neural networks
CN105760821A (en) Classification and aggregation sparse representation face identification method based on nuclear space
CN112613536A (en) Near infrared spectrum diesel grade identification method based on SMOTE and deep learning
CN106991355A (en) The face identification method of the analytical type dictionary learning model kept based on topology
CN104966075B (en) A kind of face identification method and system differentiating feature based on two dimension
CN102142082A (en) Virtual sample based kernel discrimination method for face recognition
CN103177265A (en) High-definition image classification method based on kernel function and sparse coding
CN112465062A (en) Clustering method based on manifold learning and rank constraint
CN110874576A (en) Pedestrian re-identification method based on canonical correlation analysis fusion features
CN110399814B (en) Face recognition method based on local linear representation field adaptive measurement
CN113177587A (en) Generalized zero sample target classification method based on active learning and variational self-encoder
Zheng et al. Limit results for distributed estimation of invariant subspaces in multiple networks inference and PCA
CN113408616B (en) Spectral classification method based on PCA-UVE-ELM
CN110991554A (en) Improved PCA (principal component analysis) -based deep network image classification method
CN117392450A (en) Steel material quality analysis method based on evolutionary multi-scale feature learning
CN111079657B (en) Face recognition method based on semi-supervised linear regression
CN109063766B (en) Image classification method based on discriminant prediction sparse decomposition model
CN112966734B (en) Discrimination multiple set typical correlation analysis method based on fractional order spectrum
CN106815844A (en) Matting method based on manifold learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant