CN112966734B - Discrimination multiple set typical correlation analysis method based on fractional order spectrum - Google Patents

Publication number: CN112966734B (application CN202110235175.0A; earlier publication CN112966734A)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Inventors: 袁运浩, 朱莉, 李云, 强继朋, 朱毅, 朱俊武
Assignee: Yangzhou University (the listed assignee may be inaccurate)
Original language: Chinese (zh)

Classifications

    • G06F18/2132 — Feature extraction by transforming the feature space based on discrimination criteria, e.g. discriminant analysis
    • G06F18/21326 — Rendering the within-class scatter matrix non-singular involving optimisations, e.g. using regularisation techniques
    • G06F18/253 — Fusion techniques of extracted features

Abstract

The invention discloses a discriminative multiset canonical correlation analysis method based on the fractional-order spectrum, which comprises the following steps: 1) define the projection direction of each group of training samples; 2) calculate the cross-covariance matrices and intra-class scatter matrices of the training samples; 3) apply singular value decomposition to the cross-covariance matrices and eigenvalue decomposition to the intra-class scatter matrices; 4) construct the fractional-order cross-covariance matrices and fractional-order intra-class scatter matrices; 5) construct the optimization model of FLMCCA; 6) solve the resulting generalized eigenvalue problem; 7) form the projection matrix of each set of data from the eigenvectors; 8) fuse the projected features with a serial feature fusion strategy, select different numbers of images for training and testing, and calculate the recognition rate. By introducing fractional-order parameters to construct the fractional-order intra-class scatter matrices and fractional-order cross-covariance matrices, the invention reduces the deviation from the true values caused by noise interference and limited training samples, enhances the discriminability of the extracted low-dimensional features, and improves the accuracy of system identification.

Description

Discrimination multiple set typical correlation analysis method based on fractional order spectrum
Technical Field
The invention relates to the field of pattern recognition, and in particular to a discriminative multiset canonical correlation analysis method based on the fractional-order spectrum.
Background
In the fields of pattern recognition, machine learning, and computer vision, the same set of objects can generally be described by a variety of different feature representations. Such data are commonly referred to as multi-view data. Obviously, multi-view data contain more useful information than single-view data. On the other hand, multi-view data tend to have high dimensionality, which causes problems. For example, high-dimensional data require a large amount of memory, increasing spatial complexity; they increase computational overhead; furthermore, a learning task that is easy in a low-dimensional space can become exceptionally complex and difficult in a high-dimensional space. It is therefore necessary to perform joint dimension reduction on high-dimensional multi-view data to obtain a low-dimensional representation with strong discriminability, so as to improve the performance of learning tasks such as pattern classification.
The main goal of multi-view dimension reduction is to jointly learn a low-dimensional representation with strong discriminability for high-dimensional multi-view data through linear or nonlinear transformations. Canonical correlation analysis (CCA) is a representative multivariate statistical analysis method that first applies a linear transformation to each of two sets of random variables to obtain low-dimensional projections, and then maximizes the correlation of the two sets of data in this low-dimensional space. Because of these good properties, researchers have used CCA to reduce the dimensions of two sets of high-dimensional feature vectors (i.e., two views) simultaneously to obtain two sets of low-dimensional correlated features; the two sets of low-dimensional features are then fused to form a feature vector used for recognition. The CCA method is simple and effective, and has been widely applied to blind source separation, computer vision, neural networks, speech recognition, and so on.
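The two-view CCA procedure described above can be sketched numerically. The following minimal Python illustration (our own sketch, not the patent's method) centers both views, whitens each auto-covariance with a Cholesky factor, and reads the canonical directions and correlations off an SVD of the whitened cross-covariance; the function name, the variable names, and the small ridge term `eps` are assumptions of this sketch.

```python
import numpy as np

def cca(X, Y, d=1, eps=1e-8):
    """Minimal two-view CCA for sample matrices X (m1 x n) and Y (m2 x n).
    Returns d projection directions per view and the canonical correlations."""
    # Center each view (columns are samples)
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    n = X.shape[1]
    Sxx = X @ X.T / n + eps * np.eye(X.shape[0])  # auto-covariance, slightly ridged
    Syy = Y @ Y.T / n + eps * np.eye(Y.shape[0])
    Sxy = X @ Y.T / n                             # cross-covariance
    # Whiten via Cholesky factors: M = Sxx^{-1/2} Sxy Syy^{-1/2}
    Kx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Ky = np.linalg.inv(np.linalg.cholesky(Syy))
    U, s, Vt = np.linalg.svd(Kx @ Sxy @ Ky.T)
    Wx = Kx.T @ U[:, :d]                          # canonical directions for X
    Wy = Ky.T @ Vt[:d].T                          # canonical directions for Y
    return Wx, Wy, s[:d]                          # s holds canonical correlations
```

When the two views share a common latent signal, the leading entry of `s` approaches 1.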
In practical applications, the use of CCA is limited when the small-sample-size problem arises, i.e., when the feature dimension is greater than the number of training samples. To address this problem, regularized canonical correlation analysis (RCCA) has been proposed, which effectively achieves two-view dimension reduction by introducing regularization parameters into CCA. Since CCA is a linear subspace learning method, it cannot effectively reveal nonlinear relationships between views. Kernel canonical correlation analysis (Kernel CCA, KCCA) uses kernel techniques to extend CCA nonlinearly and thus handles nonlinear problems well. In addition, deep CCA (Deep CCA) combines a deep neural network with CCA, which enables more flexible learning of the nonlinear relationship between two-view data than KCCA. From a manifold learning perspective, locality-preserving canonical correlation analysis (Locality Preserving CCA, LPCCA) has been developed, which takes the local manifold structure of each view's data into account during dimension reduction.
Although the CCA method achieves good recognition results on some pattern recognition problems, it is an unsupervised learning method and does not consider the class label information of the training samples during dimension reduction. To address this problem, researchers have proposed the generalized canonical correlation analysis (GCCA) method, which minimizes the within-class sample scatter while maximizing the correlation between views; its validity has been verified on handwritten digit recognition.
It should be noted that CCA and its improved variants are applicable to two-view dimension reduction. When three or more views are given, the above methods cannot analyze and reduce them directly. For this purpose, a multiset extension of CCA has been proposed, namely multiset canonical correlation analysis (Multiset CCA, MCCA). The MCCA method directly maximizes the correlation among multiple views, making up for CCA's inability to handle more than two views. Recently, researchers have introduced label information into MCCA following the idea of GCCA, proposing labeled multiset canonical correlation analysis (LMCCA). Experimental results show that the LMCCA method achieves good recognition performance in applications such as face image and handwritten digit recognition.
On the other hand, when the training samples are noisy or few, the auto- and cross-covariance matrices in the CCA method deviate from their true values, resulting in weak discriminability of CCA features. For this reason, researchers have combined the fractional-order idea with CCA and proposed fractional-order embedding canonical correlation analysis. This method introduces fractional-order parameters to re-estimate the spectra of the auto-covariance and cross-covariance matrices, and constructs fractional-order auto-covariance and cross-covariance matrices on this basis, thereby weakening the deviation of the covariance matrices and improving the discriminability of the extracted features.
Although the LMCCA method can obtain good recognition results, when the training samples are corrupted by noise or few in number, it suffers from a covariance-deviation problem similar to that of CCA. In this case, the discriminability of LMCCA features is weak, and real pattern classification tasks cannot be carried out effectively and robustly.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a fractional-order-spectrum-based discriminative multiset canonical correlation analysis method (Fractional-order Labeled Multiset CCA, FLMCCA), which reconstructs the intra-class scatter matrices and cross-covariance matrices of LMCCA by introducing fractional-order parameters, thereby reducing the deviation from the true values caused by noise interference and limited training samples, enhancing the discriminability of the extracted low-dimensional features, and improving the accuracy of system identification.
The purpose of the invention is realized in the following way: a discriminative multiset canonical correlation analysis method based on the fractional-order spectrum, comprising the following steps:
step 1) given P groups of zero-mean training samples, each group containing c classes, i.e.

X_i = [x_i1, x_i2, …, x_in] ∈ R^(m_i×n), i = 1, 2, …, P,

where m_i represents the feature dimension of the i-th group of training samples, n represents the total number of training samples, n_k represents the number of training samples of the k-th class, and n = n_1 + n_2 + … + n_c; define the projection direction of the i-th group of training samples as ω_i ∈ R^(m_i);

step 2) calculating the cross-covariance matrix of the i-th and j-th groups of training samples, S_ij ∈ R^(m_i×m_j), and the intra-class scatter matrix S_w^(i) ∈ R^(m_i×m_i), from the following formulas:

S_ij = (1/n) X_i X_j^T,
S_w^(i) = Σ_{k=1}^{c} p(k) · (1/n_k) Σ_{l=1}^{n_k} (x_il^(k) − μ_i^(k))(x_il^(k) − μ_i^(k))^T,

where p(k) represents the prior probability of the k-th class of samples and μ_i^(k) represents the mean vector of the k-th class of training samples in the i-th group;
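Step 2) can be sketched in Python as follows. Because the formula images are not reproduced in the text, the sketch uses standard empirical estimates consistent with the definitions above: the cross-covariance of zero-mean samples, and an intra-class scatter weighted by the class priors; the normalization p(k) = n_k/n is our assumption.

```python
import numpy as np

def cross_covariance(Xi, Xj):
    """Cross-covariance S_ij of zero-mean sample matrices Xi (m_i x n), Xj (m_j x n)."""
    n = Xi.shape[1]
    return Xi @ Xj.T / n

def intra_class_scatter(Xi, labels):
    """Intra-class scatter S_w^(i): prior-weighted scatter of each class around
    its class mean. The prior p(k) is estimated as n_k / n (our assumption)."""
    m, n = Xi.shape
    Sw = np.zeros((m, m))
    for k in np.unique(labels):
        Xk = Xi[:, labels == k]                    # samples of class k
        nk = Xk.shape[1]
        mu = Xk.mean(axis=1, keepdims=True)        # class mean vector
        Sw += (nk / n) * ((Xk - mu) @ (Xk - mu).T / nk)
    return Sw
```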
step 3) for the cross-covariance matrix S_ij obtained in step 2), singular value decomposition is carried out to obtain a left singular vector matrix, a right singular vector matrix and a singular value matrix, and eigenvalue decomposition is carried out on the intra-class scatter matrix S_w^(i) to obtain an eigenvector matrix and an eigenvalue matrix;

step 4) given fractional-order parameters α and β, re-estimating the singular value matrix and the eigenvalue matrix obtained in step 3), and constructing a fractional-order cross-covariance matrix S_ij^(α) and a fractional-order intra-class scatter matrix S_w^(i,β);

Step 5) constructing the optimization model of FLMCCA as

max_{ω_1,…,ω_P} Σ_{i=1}^{P} Σ_{j=1, j≠i}^{P} ω_i^T S_ij^(α) ω_j  s.t.  Σ_{i=1}^{P} ω_i^T S_w^(i,β) ω_i = 1.

The generalized eigenvalue problem Eω = λFω of the above optimization model can be obtained by using the Lagrange multiplier method, whereby the projection direction ω = [ω_1^T, ω_2^T, …, ω_P^T]^T is found, where λ is the generalized eigenvalue, E is the block matrix whose (i,j)-th block (i ≠ j) is S_ij^(α) with zero diagonal blocks, and F is the block-diagonal matrix diag(S_w^(1,β), …, S_w^(P,β));

step 6) taking the small-sample problem into consideration, introducing a regularization parameter η on the basis of step 5), whereby the optimization model can be defined as

max_{ω_1,…,ω_P} Σ_{i=1}^{P} Σ_{j=1, j≠i}^{P} ω_i^T S_ij^(α) ω_j  s.t.  Σ_{i=1}^{P} ω_i^T (S_w^(i,β) + ηI_{m_i}) ω_i = 1.

Using the Lagrange multiplier method, we get the following generalized eigenvalue problem:

Eω = λ(F + ηI)ω,

where I_{m_i} is the identity matrix of size m_i × m_i, i = 1, 2, …, P;
step 7) solving, from the generalized eigenvalue problem in step 6), the eigenvectors corresponding to the first d largest eigenvalues, thereby generating the projection matrix of each set of data, W_i = [ω_i1, ω_i2, …, ω_id], i = 1, 2, …, P, d ≤ min{m_1, …, m_P};
Step 8) using the projection matrix W_i of each set of data, respectively calculating the low-dimensional projections of each group of training samples and test samples, then adopting a serial feature fusion strategy to form the fused features finally used for classification, and calculating the recognition rate.
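Steps 5) to 7) amount to assembling two block matrices and solving a generalized eigenvalue problem. The sketch below is a minimal Python illustration under our assumptions about the block layout (off-diagonal blocks of E hold the fractional-order cross-covariances; F is block-diagonal with the regularized fractional-order scatters); it solves the problem via F⁻¹E for simplicity, which is adequate for small dimensions.

```python
import numpy as np

def flmcca_directions(S_frac, Sw_frac, dims, eta=1e-3, d=2):
    """Solve E w = lambda F w for the top-d projection directions.
    S_frac[(i, j)] holds the fractional-order cross-covariances, Sw_frac[i] the
    fractional-order intra-class scatters; dims lists the feature dimensions m_i."""
    P, M = len(dims), sum(dims)
    offs = np.concatenate(([0], np.cumsum(dims)))
    E, F = np.zeros((M, M)), np.zeros((M, M))
    for i in range(P):
        si = slice(offs[i], offs[i + 1])
        F[si, si] = Sw_frac[i] + eta * np.eye(dims[i])   # step 6: regularization
        for j in range(P):
            if i != j:
                E[si, slice(offs[j], offs[j + 1])] = S_frac[(i, j)]
    vals, vecs = np.linalg.eig(np.linalg.inv(F) @ E)     # generalized eigenproblem
    order = np.argsort(-vals.real)[:d]                   # step 7: d largest eigenvalues
    W = vecs[:, order].real
    return [W[offs[i]:offs[i + 1], :] for i in range(P)] # per-view W_i (m_i x d)
```

A production implementation would use a dedicated symmetric-definite generalized eigensolver instead of forming F⁻¹E explicitly.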
Further, the singular value decomposition of the cross-covariance matrix S_ij and the eigenvalue decomposition of the intra-class scatter matrix S_w^(i) in step 3) comprise the following steps:

step 3-1) singular value decomposition of the cross-covariance matrix S_ij:

S_ij = U_ij Σ_ij V_ij^T,

where U_ij ∈ R^(m_i×r_ij) and V_ij ∈ R^(m_j×r_ij) are respectively the left and right singular vector matrices of S_ij, Σ_ij = diag(σ_1, σ_2, …, σ_{r_ij}) is the diagonal matrix of singular values of S_ij, and r_ij = rank(S_ij);

Step 3-2) eigenvalue decomposition of the intra-class scatter matrix S_w^(i):

S_w^(i) = Q_i Λ_i Q_i^T,

where Q_i ∈ R^(m_i×r_i) is the eigenvector matrix of S_w^(i), Λ_i = diag(λ_1, λ_2, …, λ_{r_i}) is the diagonal matrix of eigenvalues of S_w^(i), and r_i = rank(S_w^(i)).

Further, the construction of the fractional-order cross-covariance matrix S_ij^(α) and the fractional-order intra-class scatter matrix S_w^(i,β) in step 4) comprises the following steps:

step 4-1) assuming that α is a fraction satisfying 0 ≤ α ≤ 1, defining the fractional-order cross-covariance matrix

S_ij^(α) = U_ij Σ_ij^α V_ij^T,  Σ_ij^α = diag(σ_1^α, σ_2^α, …, σ_{r_ij}^α),

where U_ij, V_ij, Σ_ij and r_ij are defined in step 3-1);

step 4-2) assuming that β is a fraction satisfying 0 ≤ β ≤ 1, defining the fractional-order intra-class scatter matrix

S_w^(i,β) = Q_i Λ_i^β Q_i^T,  Λ_i^β = diag(λ_1^β, λ_2^β, …, λ_{r_i}^β),

where Q_i, Λ_i and r_i are defined in step 3-2).
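Steps 4-1) and 4-2) can be sketched with NumPy as follows. The truncation to the rank-r part mirrors the definitions above; the numerical tolerance used to decide which eigenvalues count as non-zero is our choice.

```python
import numpy as np

def fractional_cross_cov(S_ij, alpha):
    """Step 4-1): S_ij^(alpha) = U_ij diag(sigma^alpha) V_ij^T over the rank-r_ij part."""
    U, s, Vt = np.linalg.svd(S_ij, full_matrices=False)
    r = int(np.linalg.matrix_rank(S_ij))
    return U[:, :r] @ np.diag(s[:r] ** alpha) @ Vt[:r]

def fractional_scatter(Sw, beta, tol=1e-10):
    """Step 4-2): S_w^(i,beta) = Q_i diag(lambda^beta) Q_i^T over the positive eigenvalues.
    Sw is assumed symmetric positive semidefinite."""
    lam, Q = np.linalg.eigh(Sw)
    keep = lam > tol                      # keep the r_i non-zero eigenvalues
    return (Q[:, keep] * lam[keep] ** beta) @ Q[:, keep].T
```

With 0 ≤ α < 1, raising the singular values to the power α flattens the spectrum, which is the mechanism by which the deviation of the estimated matrices is weakened.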
Compared with the prior art, the invention has the following beneficial effects: on the basis of the CCA method, the invention combines fractional-order embedding CCA with labeled multiset CCA, and reconstructs the intra-class scatter matrices and cross-covariance matrices of LMCCA by introducing fractional-order parameters, thereby reducing the deviation from the true values caused by noise interference and limited training samples, enhancing the discriminability of the extracted low-dimensional features, and improving the accuracy of system identification. Meanwhile, the proposed method makes full use of the class label information of the training samples, is a supervised learning method, and can solve the information fusion problem for more than two views; it therefore has a wider application range and better recognition accuracy.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a line graph of the recognition rates of the invention and of other methods as a function of the feature dimension.
FIG. 3 is a graph of recognition rates for different numbers of training samples according to the present invention.
Detailed Description
As shown in FIG. 1, a discriminative multiset canonical correlation analysis method based on the fractional-order spectrum comprises the following steps:
step 1) given P groups of zero-mean training samples, each group containing c classes, i.e.

X_i = [x_i1, x_i2, …, x_in] ∈ R^(m_i×n), i = 1, 2, …, P,

where m_i represents the feature dimension of the i-th group of training samples, n represents the total number of training samples, n_k represents the number of training samples of the k-th class, and n = n_1 + n_2 + … + n_c; define the projection direction of the i-th group of training samples as ω_i ∈ R^(m_i);

step 2) calculating the cross-covariance matrix of the i-th and j-th groups of training samples, S_ij ∈ R^(m_i×m_j), and the intra-class scatter matrix S_w^(i) ∈ R^(m_i×m_i), from the following formulas:

S_ij = (1/n) X_i X_j^T,
S_w^(i) = Σ_{k=1}^{c} p(k) · (1/n_k) Σ_{l=1}^{n_k} (x_il^(k) − μ_i^(k))(x_il^(k) − μ_i^(k))^T,

where p(k) represents the prior probability of the k-th class of samples and μ_i^(k) represents the mean vector of the k-th class of training samples in the i-th group;
step 3) for the cross-covariance matrix S_ij obtained in step 2), singular value decomposition is carried out to obtain a left singular vector matrix, a right singular vector matrix and a singular value matrix, and eigenvalue decomposition is carried out on the intra-class scatter matrix S_w^(i) to obtain an eigenvector matrix and an eigenvalue matrix;

step 3-1) singular value decomposition of the cross-covariance matrix S_ij:

S_ij = U_ij Σ_ij V_ij^T,

where U_ij ∈ R^(m_i×r_ij) and V_ij ∈ R^(m_j×r_ij) are respectively the left and right singular vector matrices of S_ij, Σ_ij = diag(σ_1, σ_2, …, σ_{r_ij}) is the diagonal matrix of singular values of S_ij, and r_ij = rank(S_ij);

Step 3-2) eigenvalue decomposition of the intra-class scatter matrix S_w^(i):

S_w^(i) = Q_i Λ_i Q_i^T,

where Q_i ∈ R^(m_i×r_i) is the eigenvector matrix of S_w^(i), Λ_i = diag(λ_1, λ_2, …, λ_{r_i}) is the diagonal matrix of eigenvalues of S_w^(i), and r_i = rank(S_w^(i));

Step 4) given fractional-order parameters α and β, the singular value matrix and eigenvalue matrix obtained in step 3) are re-estimated to construct the fractional-order cross-covariance matrix S_ij^(α) and the fractional-order intra-class scatter matrix S_w^(i,β);

Step 4-1) assuming that α is a fraction satisfying 0 ≤ α ≤ 1, the fractional-order cross-covariance matrix is defined as

S_ij^(α) = U_ij Σ_ij^α V_ij^T,  Σ_ij^α = diag(σ_1^α, σ_2^α, …, σ_{r_ij}^α),

where U_ij, V_ij, Σ_ij and r_ij are defined in step 3-1);

step 4-2) assuming that β is a fraction satisfying 0 ≤ β ≤ 1, the fractional-order intra-class scatter matrix is defined as

S_w^(i,β) = Q_i Λ_i^β Q_i^T,  Λ_i^β = diag(λ_1^β, λ_2^β, …, λ_{r_i}^β),

where Q_i, Λ_i and r_i are defined in step 3-2);

Step 5) the optimization model of FLMCCA is constructed as

max_{ω_1,…,ω_P} Σ_{i=1}^{P} Σ_{j=1, j≠i}^{P} ω_i^T S_ij^(α) ω_j  s.t.  Σ_{i=1}^{P} ω_i^T S_w^(i,β) ω_i = 1.

The generalized eigenvalue problem Eω = λFω of the above optimization model can be obtained by using the Lagrange multiplier method, whereby the projection direction ω = [ω_1^T, ω_2^T, …, ω_P^T]^T is found, where λ is the generalized eigenvalue, E is the block matrix whose (i,j)-th block (i ≠ j) is S_ij^(α) with zero diagonal blocks, and F is the block-diagonal matrix diag(S_w^(1,β), …, S_w^(P,β));

step 6) taking the small-sample problem into consideration, a regularization parameter η is introduced on the basis of step 5), whereby the optimization model can be defined as

max_{ω_1,…,ω_P} Σ_{i=1}^{P} Σ_{j=1, j≠i}^{P} ω_i^T S_ij^(α) ω_j  s.t.  Σ_{i=1}^{P} ω_i^T (S_w^(i,β) + ηI_{m_i}) ω_i = 1.

Using the Lagrange multiplier method, we get the following generalized eigenvalue problem:

Eω = λ(F + ηI)ω,

where I_{m_i} is the identity matrix of size m_i × m_i, i = 1, 2, …, P;
step 7) solving, from the generalized eigenvalue problem in step 6), the eigenvectors corresponding to the first d largest eigenvalues, thereby generating the projection matrix of each set of data, W_i = [ω_i1, ω_i2, …, ω_id], i = 1, 2, …, P, d ≤ min{m_1, …, m_P};
Step 8) using the projection matrix W_i of each set of data, respectively calculating the low-dimensional projections of each group of training samples and test samples, then adopting a serial feature fusion strategy to form the fused features finally used for classification, and calculating the recognition rate.
The invention is further illustrated by the following example. Taking the AR face database as an example, the database contains face images of 120 persons, with 14 images per person. In the experiment, 8 images of each person are randomly selected as the training set, and the remaining 6 images as the test set. Three different features are chosen as the three views of FLMCCA: feature 1 is the original face image, feature 2 is the face data after median filtering, and feature 3 is the face data after mean filtering. The dimensions of the three features are reduced by principal component analysis to form the final three-view data.
Step 1) 3 groups of zero-mean training samples are given and each group has 120 classes, i.e.

X_i = [x_i1, x_i2, …, x_in] ∈ R^(m_i×n), i = 1, 2, 3,

where m_i represents the feature dimension of the i-th group of training samples; the projection direction of the i-th group of training samples is defined as ω_i ∈ R^(m_i);

Step 2) the cross-covariance matrix of the i-th and j-th groups of training samples, S_ij = (1/n) X_i X_j^T, and the intra-class scatter matrix

S_w^(i) = Σ_{k=1}^{120} p(k) · (1/n_k) Σ_{l=1}^{n_k} (x_il^(k) − μ_i^(k))(x_il^(k) − μ_i^(k))^T

are calculated, where μ_i^(k) represents the mean vector of the k-th class of training samples in the i-th group;

step 3) for the cross-covariance matrix S_ij obtained in step 2), singular value decomposition S_ij = U_ij Σ_ij V_ij^T is carried out, where U_ij and V_ij are the left and right singular vector matrices, Σ_ij = diag(σ_1, …, σ_{r_ij}) is the diagonal matrix of singular values, and r_ij = rank(S_ij); eigenvalue decomposition S_w^(i) = Q_i Λ_i Q_i^T is carried out on the intra-class scatter matrix, where Q_i is the eigenvector matrix, Λ_i = diag(λ_1, …, λ_{r_i}) is the diagonal matrix of eigenvalues, and r_i = rank(S_w^(i));

Step 4) the value range of α and β is defined as {0, 0.1, 0.2, …, 1}; suitable fractional-order parameters α and β are selected, the singular value matrix and eigenvalue matrix obtained in step 3) are re-estimated, and the fractional-order cross-covariance matrix S_ij^(α) = U_ij Σ_ij^α V_ij^T and the fractional-order intra-class scatter matrix S_w^(i,β) = Q_i Λ_i^β Q_i^T are constructed, where Σ_ij^α = diag(σ_1^α, …, σ_{r_ij}^α) and Λ_i^β = diag(λ_1^β, …, λ_{r_i}^β), with 0 ≤ α ≤ 1 and 0 ≤ β ≤ 1;

Step 5) the optimization model of FLMCCA is constructed as

max_{ω_1,ω_2,ω_3} Σ_{i=1}^{3} Σ_{j=1, j≠i}^{3} ω_i^T S_ij^(α) ω_j  s.t.  Σ_{i=1}^{3} ω_i^T S_w^(i,β) ω_i = 1,

and the generalized eigenvalue problem Eω = λFω of the optimization model is obtained by the Lagrange multiplier method, whereby the projection direction ω is found, where λ is the generalized eigenvalue;

step 6) taking the small-sample problem into consideration, the regularization parameter η is introduced on the basis of step 5), with value range {10^−5, 10^−4, …, 10}; using the Lagrange multiplier method, we get the generalized eigenvalue problem Eω = λ(F + ηI)ω, where I_{m_i} is the identity matrix of size m_i × m_i, i = 1, 2, 3;
step 7) solving, from the generalized eigenvalue problem in step 6), the eigenvectors corresponding to the first d largest eigenvalues, thereby generating the projection matrix of each set of data, W_i = [ω_i1, ω_i2, …, ω_id], i = 1, 2, 3, d ≤ min{m_1, m_2, m_3};
Step 8) using the projection matrix W_i of each set of data, the low-dimensional projections of each group of training samples and test samples are respectively calculated; a serial feature fusion strategy is then adopted to form the fused features finally used for classification, a nearest-neighbor classifier is used for classification, 10 independent random experiments are performed, and the average recognition rate is calculated. The results are shown in Table 1 and FIG. 2, where the BASELINE method refers to serially concatenating the three features. Compared with MCCA, CCA and BASELINE, FLMCCA uses class label information and obtains better recognition results. Compared with the LMCCA method, the FLMCCA method introduces the fractional-order idea to correct the deviation from the true values caused by factors such as noise interference, obtains low-dimensional features with stronger discriminability, and achieves better recognition results.
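The serial feature fusion and nearest-neighbor classification of step 8) can be sketched as follows; this is a toy illustration with our own variable names, not the AR-database experiment itself.

```python
import numpy as np

def fuse_and_classify(train_views, test_views, Ws, train_labels):
    """Project each view with its W_i, serially concatenate the low-dimensional
    features, and classify each test sample with a 1-nearest-neighbour rule."""
    Ztr = np.vstack([W.T @ X for W, X in zip(Ws, train_views)])  # fused train features
    Zte = np.vstack([W.T @ X for W, X in zip(Ws, test_views)])   # fused test features
    preds = []
    for t in Zte.T:                                              # one test sample per column
        dists = np.linalg.norm(Ztr - t[:, None], axis=0)
        preds.append(train_labels[int(np.argmin(dists))])
    return np.array(preds)
```

The recognition rate is then simply the fraction of predicted labels that match the true test labels.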
Table 1 Average recognition rates of different methods on the AR dataset
In order to test the influence of the number of training samples on the recognition rate, the invention fixes the parameters α, β and η, and selects the first 2, 3, …, 12 face images of each person as the training set, with the remaining face images as the test set; the trend of the recognition results as the number of training samples changes is shown in FIG. 3. As can be seen from FIG. 3, the FLMCCA method is generally superior to the LMCCA method and obtains better recognition results.
In summary, the invention provides a discriminative multiset canonical correlation analysis method based on the fractional-order spectrum. The FLMCCA method has a better recognition effect when the number of training samples is small, and is suitable for dimension reduction and multi-view feature fusion. Because the fractional-order idea is introduced and class label information is considered, the recognition results of the FLMCCA method are better than those of other methods of the same class.
The invention is not limited to the above embodiments. On the basis of the technical solution disclosed herein, those skilled in the art may, according to the disclosed technical content, make some substitutions and modifications to some of the technical features without creative effort, and all such substitutions and modifications fall within the protection scope of the invention.

Claims (3)

1. A discriminative multiset canonical correlation analysis method based on the fractional-order spectrum, characterized by comprising the following steps:
step 1) P groups of zero-mean training samples consisting of face data are given, each group containing c classes, namely

X_i = [x_i1, x_i2, …, x_in] ∈ R^(m_i×n), i = 1, 2, …, P,

where m_i represents the feature dimension of the i-th group of training samples, n represents the total number of training samples, n_k represents the number of training samples of the k-th class, and n = n_1 + n_2 + … + n_c; the projection direction of the i-th group of training samples is defined as ω_i ∈ R^(m_i);

step 2) the cross-covariance matrix of the i-th and j-th groups of training samples, S_ij ∈ R^(m_i×m_j), and the intra-class scatter matrix S_w^(i) ∈ R^(m_i×m_i) are calculated from the following formulas:

S_ij = (1/n) X_i X_j^T,
S_w^(i) = Σ_{k=1}^{c} p(k) · (1/n_k) Σ_{l=1}^{n_k} (x_il^(k) − μ_i^(k))(x_il^(k) − μ_i^(k))^T,

where p(k) represents the prior probability of the k-th class of samples and μ_i^(k) represents the mean vector of the k-th class of training samples in the i-th group;
step 3) for the cross-covariance matrix S_ij obtained in step 2), singular value decomposition is carried out to obtain a left singular vector matrix, a right singular vector matrix and a singular value matrix, and eigenvalue decomposition is carried out on the intra-class scatter matrix S_w^(i) to obtain an eigenvector matrix and an eigenvalue matrix;

step 4) given fractional-order parameters α and β, the singular value matrix and eigenvalue matrix obtained in step 3) are re-estimated to construct a fractional-order cross-covariance matrix S_ij^(α) and a fractional-order intra-class scatter matrix S_w^(i,β);

Step 5) the optimization model of FLMCCA is constructed as

max_{ω_1,…,ω_P} Σ_{i=1}^{P} Σ_{j=1, j≠i}^{P} ω_i^T S_ij^(α) ω_j  s.t.  Σ_{i=1}^{P} ω_i^T S_w^(i,β) ω_i = 1;

the generalized eigenvalue problem Eω = λFω of the optimization model is obtained by using the Lagrange multiplier method, whereby the projection direction ω = [ω_1^T, …, ω_P^T]^T is found, where λ is the generalized eigenvalue, E is the block matrix whose (i,j)-th block (i ≠ j) is S_ij^(α) with zero diagonal blocks, and F is the block-diagonal matrix diag(S_w^(1,β), …, S_w^(P,β));

step 6) taking the small-sample problem into consideration, a regularization parameter η is introduced on the basis of step 5), whereby the optimization model can be defined as

max_{ω_1,…,ω_P} Σ_{i=1}^{P} Σ_{j=1, j≠i}^{P} ω_i^T S_ij^(α) ω_j  s.t.  Σ_{i=1}^{P} ω_i^T (S_w^(i,β) + ηI_{m_i}) ω_i = 1;

using the Lagrange multiplier method, we get the following generalized eigenvalue problem: Eω = λ(F + ηI)ω, where I_{m_i} is the identity matrix of size m_i × m_i, i = 1, 2, …, P;

step 7) according to the generalized eigenvalue problem in step 6), the eigenvectors corresponding to the first d largest eigenvalues are solved, thereby generating the projection matrix of each set of data, W_i = [ω_i1, ω_i2, …, ω_id], i = 1, 2, …, P, d ≤ min{m_1, …, m_P};

step 8) using the projection matrix W_i of each set of data, the low-dimensional projections of each group of training samples and test samples are respectively calculated, a serial feature fusion strategy is then adopted to form the fused features finally used for classification, and the recognition rate is calculated.
2. The discriminative multiset canonical correlation analysis method based on the fractional-order spectrum according to claim 1, characterized in that the singular value decomposition of the cross-covariance matrix S_ij and the eigenvalue decomposition of the intra-class scatter matrix S_w^(i) in step 3) comprise the following steps:

step 3-1) singular value decomposition of the cross-covariance matrix S_ij:

S_ij = U_ij Σ_ij V_ij^T,

where U_ij ∈ R^(m_i×r_ij) and V_ij ∈ R^(m_j×r_ij) are respectively the left and right singular vector matrices of S_ij, Σ_ij = diag(σ_1, σ_2, …, σ_{r_ij}) is the diagonal matrix of singular values of S_ij, and r_ij = rank(S_ij);

step 3-2) eigenvalue decomposition of the intra-class scatter matrix S_w^(i):

S_w^(i) = Q_i Λ_i Q_i^T,

where Q_i ∈ R^(m_i×r_i) is the eigenvector matrix of S_w^(i), Λ_i = diag(λ_1, λ_2, …, λ_{r_i}) is the diagonal matrix of eigenvalues of S_w^(i), and r_i = rank(S_w^(i)).
3. The discriminative multiset canonical correlation analysis method based on the fractional-order spectrum according to claim 1 or 2, characterized in that the construction of the fractional-order cross-covariance matrix S_ij^(α) and the fractional-order intra-class scatter matrix S_w^(i,β) in step 4) comprises the following steps:

step 4-1) assuming that α is a fraction satisfying 0 ≤ α ≤ 1, the fractional-order cross-covariance matrix is defined as

S_ij^(α) = U_ij Σ_ij^α V_ij^T,  Σ_ij^α = diag(σ_1^α, σ_2^α, …, σ_{r_ij}^α),

where U_ij, V_ij, Σ_ij and r_ij are defined in step 3-1);

step 4-2) assuming that β is a fraction satisfying 0 ≤ β ≤ 1, the fractional-order intra-class scatter matrix is defined as

S_w^(i,β) = Q_i Λ_i^β Q_i^T,  Λ_i^β = diag(λ_1^β, λ_2^β, …, λ_{r_i}^β),

where Q_i, Λ_i and r_i are defined in step 3-2).
CN202110235175.0A 2020-11-20 2021-03-03 Discrimination multiple set typical correlation analysis method based on fractional order spectrum Active CN112966734B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011307393 2020-11-20
CN2020113073932 2020-11-20

Publications (2)

Publication Number | Publication Date
CN112966734A (en) | 2021-06-15
CN112966734B (en) | 2023-09-15

Family

ID=76276281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110235175.0A Active CN112966734B (en) 2020-11-20 2021-03-03 Discrimination multiple set typical correlation analysis method based on fractional order spectrum

Country Status (1)

Country Link
CN (1) CN112966734B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494779B (en) * 2022-01-26 2024-01-23 金陵科技学院 Tea near infrared spectrum classification method with improved discrimination conversion

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6137909A (en) * 1995-06-30 2000-10-24 The United States Of America As Represented By The Secretary Of The Navy System and method for feature set reduction
WO2013159356A1 (en) * 2012-04-28 2013-10-31 中国科学院自动化研究所 Cross-media searching method based on discrimination correlation analysis
CN104408440A (en) * 2014-12-10 2015-03-11 重庆邮电大学 Identification method for human facial expression based on two-step dimensionality reduction and parallel feature fusion
CN107480623A (en) * 2017-08-07 2017-12-15 西安电子科技大学 The neighbour represented based on cooperation keeps face identification method
CN109241813A (en) * 2017-10-17 2019-01-18 南京工程学院 Discriminative sparsity preserving embedding method for unconstrained face recognition


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Recommendation algorithm based on modularity and label propagation; Sheng Jun et al.; Journal of Computer Applications (Issue 09); full text *

Also Published As

Publication number Publication date
CN112966734A (en) 2021-06-15

Similar Documents

Publication Publication Date Title
CN107085716B (en) Cross-view gait recognition method based on multi-task generation countermeasure network
CN105913025B (en) A kind of deep learning face identification method based on multi-feature fusion
CN105469034B (en) Face identification method based on Weighting type distinctive sparse constraint Non-negative Matrix Factorization
He et al. Detecting the number of clusters in n-way probabilistic clustering
CN100426314C (en) Feature classification based multiple classifiers combined people face recognition method
CN107633513A (en) The measure of 3D rendering quality based on deep learning
CN100410963C Two-dimensional linear discriminant face analysis and identification method based on inter-block correlation
CN107578007A (en) A kind of deep learning face identification method based on multi-feature fusion
CN106169073A (en) A kind of expression recognition method and system
Shrivastava et al. Learning discriminative dictionaries with partially labeled data
CN107220627B (en) Multi-pose face recognition method based on collaborative fuzzy mean discrimination analysis
CN103065158B (en) The behavior recognition methods of the ISA model based on relative gradient
US20050180639A1 (en) Iterative fisher linear discriminant analysis
CN108875655A (en) A kind of real-time target video tracing method and system based on multiple features
CN106096517A (en) A kind of face identification method based on low-rank matrix Yu eigenface
CN105844291A (en) Characteristic fusion method based on kernel typical correlation analysis
Sun et al. [Retracted] Research on Face Recognition Algorithm Based on Image Processing
CN107368803A (en) A kind of face identification method and system based on classification rarefaction representation
CN104966075A (en) Face recognition method and system based on two-dimensional discriminant features
CN112966735B (en) Method for fusing supervision multi-set related features based on spectrum reconstruction
CN112966734B (en) Discrimination multiple set typical correlation analysis method based on fractional order spectrum
Garg et al. Facial expression recognition & classification using hybridization of ICA, GA, and neural network for human-computer interaction
Wang et al. Open set classification of gan-based image manipulations via a vit-based hybrid architecture
Kobayashi Generalized mutual subspace based methods for image set classification
CN110287973B (en) Image feature extraction method based on low-rank robust linear discriminant analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant