CN107423767B - Multi-view recognition method based on a regularized graph - Google Patents

Multi-view recognition method based on a regularized graph

Info

Publication number
CN107423767B
CN107423767B (application CN201710644457.XA / CN201710644457A)
Authority
CN
China
Prior art keywords
matrix
training sample
indicate
regularization
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710644457.XA
Other languages
Chinese (zh)
Other versions
CN107423767A (en)
Inventor
王磊
陈爽月
姬红兵
李丹萍
李苗
赵杰
刘璐
臧伟浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Electronic Science and Technology
Original Assignee
Xian University of Electronic Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Electronic Science and Technology
Priority to CN201710644457.XA priority Critical patent/CN107423767B/en
Publication of CN107423767A publication Critical patent/CN107423767A/en
Application granted granted Critical
Publication of CN107423767B publication Critical patent/CN107423767B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A multi-view recognition method based on a regularized graph, implemented in the following steps: 1. arbitrarily extract one view feature from a multi-view database; 2. preprocess it; 3. construct the regularized graph; 4. compute the Laplacian matrices of the regularized graph; 5. compute the scatter matrices of the training samples; 6. judge whether all of the extracted view features have been selected; if so, execute step 7, otherwise execute step 1; 7. compute the correlation values; 8. compute the projected, pairwise-combined view features of all training samples; 9. recognize and classify the projected, pairwise-combined view features with a nearest-neighbor classifier. When a sample carries multiple views, the invention exploits both the multi-view feature information and the local discriminant information between classes of each single-view feature, improving classification precision and making the recognition of multi-view features more accurate.

Description

Multi-view recognition method based on a regularized graph
Technical field
The invention belongs to the technical field of image processing, and further relates to a multi-view recognition method based on a regularized graph (Regularized Graph, RG) in the fields of pattern recognition and machine learning. The invention can be used for pedestrian recognition, text recognition and face recognition.
Background technique
With the development of information technology and sensor technology, different features of a single sample can be obtained through different sensors, and multi-view recognition has become a hot issue in pattern recognition and machine learning. A large number of studies show that, in some applications, the information extracted from a single view feature is less complete than that of a combination of features; by deeply mining the latent information among multiple view features and letting the views reinforce each other, more complete information about a sample can be obtained. The main task of multi-view recognition is to compare the different view features extracted from a target sample with the targets in a database, thereby determining the target to be recognized. At present, multi-view recognition of targets is performed with methods based on feature extraction.
Multi-view recognition methods based on feature extraction require feature extraction and selection for the target, and dimensionality reduction is one form of feature selection. In the single-feature case, common dimensionality-reduction techniques include principal component analysis (PCA), locality preserving projections (LPP), linear discriminant analysis (LDA) and marginal Fisher analysis (MFA); features reduced by these methods can then be recognized with a classifier such as nearest neighbor. In the multi-view case, the features must be combined in pairs; classical pairwise-combination methods include canonical correlation analysis (CCA) and discriminative canonical correlation analysis (DCCA). By considering the within-class and between-class correlations between pairs of features, DCCA extracts the class information between two different features, but its shortcoming is that it does not consider the discriminant information within each individual feature.
In its patent application "A low-resolution face recognition method based on sparsity-preserving canonical correlation analysis" (application number CN201610473709.2, publication number CN106203256A), Shandong University discloses a low-resolution face recognition method based on sparsity-preserving canonical correlation analysis. The method consists of a training part and a testing part. The concrete steps of the training part are: first, extract the effective features of high- and low-resolution face images by principal component analysis and obtain the corresponding PCA projection matrices for the high- and low-resolution images respectively; second, construct a sparse weight matrix that minimizes the sparse reconstruction error and, combined with canonical correlation analysis (CCA), maximizes the correlation between the high- and low-resolution face image data, projecting the low- and high-resolution training sample sets into the same subspace. The concrete steps of the testing part are: first, apply the PCA projection matrix of the low-resolution images obtained in training to the low-resolution face image sample under test for preliminary feature extraction; second, map the features extracted by PCA with the projection matrix of the low-resolution training set obtained by sparsity-preserving CCA. Finally, the mapped samples are classified with a nearest-neighbor classifier. The shortcoming of this method is that, when fusing two features, the label information of the samples is not considered, so the class information between pairs of features cannot be extracted and target classification is not well achieved.
The paper "Multi-view uncorrelated discriminant analysis" by Sun, S., Xie, X. and Yang, M. (IEEE Transactions on Cybernetics, 46(12), 3272-3284, 2016) proposes a multi-view recognition method that fuses linear discriminant analysis (LDA) and canonical correlation analysis (CCA). The method first builds a single-view feature model with LDA, projecting the data from the high-dimensional space onto a lower-dimensional space by maximizing the ratio of the between-class scatter matrix to the within-class scatter matrix; then the correlation information of two different view features is extracted by CCA; finally, by simultaneously maximizing the correlation information of the different views and the discriminant information of each individual view, the optimal projection matrices are obtained, and the sample features projected into the subspace are classified to obtain the recognition result. The shortcoming of this method is that it ignores the local discriminant information between sample classes within each view feature, which causes a loss in multi-view recognition precision.
Summary of the invention
The object of the invention is to overcome the above shortcomings of the prior art and propose a multi-view recognition method based on a regularized graph. When a sample carries multiple views, the invention can exploit both the multi-view feature information and the local discriminant information between classes of each single-view feature to achieve recognition of multi-view features.
The concrete idea of the invention is as follows: multiple view features are extracted from a multi-view database; on this basis one view feature is selected and preprocessed with conventional normalization and principal component analysis to obtain the preprocessed feature data; the preprocessed feature data are taken as training samples and a regularized graph is constructed; the Laplacian matrices of the regularized graph are computed; the extracted view features are traversed in this way. After the traversal is complete, the correlation values between all pairs of training samples are computed, and the projected, pairwise-combined view features of the training samples are obtained with the regularized-graph-based multi-view recognition method. The projected, pairwise-combined view features are classified with a nearest-neighbor classifier, thereby achieving the goal of multi-view recognition.
The concrete steps of the invention are as follows:
(1) Extract multi-view feature data:
Arbitrarily extract from a multi-view database one view feature containing sample data of m classes;
(2) Preprocess:
(2a) Generate a data matrix from the selected view feature and normalize it to obtain the normalized data matrix;
(2b) Using principal component analysis (PCA), compute the covariance of the normalized data matrix to obtain the covariance matrix;
(2c) Using singular value decomposition, perform an eigenvalue decomposition of the covariance matrix, and extract all eigenvectors corresponding to eigenvalues retaining 99% of the energy as the training samples;
(3) Construct the regularized graph:
(3a) Using the nearest-neighbor method, find the nearest neighbors of each training sample within its class, and set the weight of the edge connecting each training sample with its nearest-neighbor samples to 1/k, where k denotes the number of nearest neighbors; the connections between each training sample and its nearest neighbors constitute the intrinsic graph of the regularized graph;
(3b) Using the nearest-neighbor method, find the between-class nearest neighbors of each training sample, and set the weight of the edge connecting each training sample with its between-class nearest-neighbor samples to 1/h, where h denotes the number of between-class nearest neighbors; the connections between each training sample and its between-class nearest neighbors constitute the local penalty graph of the regularized graph;
(3c) Set the weight of the edge connecting each training sample with every training sample of a different class to 1/(n − nc), where n denotes the number of all training samples, nc denotes the number of training samples of the class to which each training sample belongs in the selected view feature containing m classes of sample data, and c denotes the sample class, c = 1, 2, …, m; these between-class connections constitute the global penalty graph of the regularized graph;
(4) Compute the Laplacian matrices of the regularized graph:
(4a) Using the weight matrix of the connection edges of the intrinsic graph, compute the Laplacian matrix of the intrinsic graph in the regularized graph;
(4b) Using the weight matrix of the connection edges of the local penalty graph, compute the Laplacian matrix of the local penalty graph in the regularized graph;
(4c) Using the weight matrix of the connection edges of the global penalty graph, compute the Laplacian matrix of the global penalty graph in the regularized graph;
(5) Compute the scatter matrices of the training samples:
(5a) Using the between-class scatter formula, compute the between-class scatter matrix of the training samples;
(5b) Using the global scatter formula, compute the global scatter matrix of the training samples;
(6) Judge whether all of the extracted view features have been selected; if so, execute step (7); otherwise, execute step (1);
(7) Using the discriminative canonical correlation analysis formula, compute the correlation values between each pair of training samples;
(8) Compute the projected, pairwise-combined view features of the training samples:
(8a) Using the regularized-graph-based multi-view recognition formula, compute the projection vectors of the training samples;
(8b) Using the combination formula of the projection vectors, compute the projected, pairwise-combined view features;
(9) Recognition and classification:
Using a nearest-neighbor classifier, recognize and classify the projected, pairwise-combined view features.
Compared with the prior art, the invention has the following advantages:
First, because the invention uses the nearest-neighbor method of the local penalty graph when constructing the regularized graph, setting the weights of the connection edges of the local penalty graph in the regularized graph, it overcomes the shortcoming of the prior art that multi-view recognition precision is lost for lack of the local discriminant information between sample classes within each view feature, so that the invention obtains a higher recognition rate for single-view recognition within multi-view features.
Second, because the invention employs discriminative canonical correlation analysis to compute the correlation values between all pairs of training samples, it overcomes the shortcoming of the prior art that, because the label information of the samples is not considered, the class information between pairs of training samples cannot be extracted, so that the invention can better classify multi-view features.
Detailed description of the invention
Fig. 1 is the flow chart of the invention;
Fig. 2 is a plot of the classification accuracy of the invention between the PIX and MOR features as a function of dimension.
Specific embodiments
The invention is described further with reference to the accompanying drawings.
The concrete steps of the invention are described as follows in conjunction with Fig. 1:
Step 1: extract multi-view feature data.
Arbitrarily extract from a multi-view database one view feature containing sample data of m classes.
Step 2: preprocess.
Generate a data matrix from the selected view feature and normalize it to obtain the normalized data matrix.
Using principal component analysis (PCA), compute the covariance of the normalized data matrix to obtain the covariance matrix.
Using singular value decomposition, perform an eigenvalue decomposition of the covariance matrix, and extract all eigenvectors corresponding to eigenvalues retaining 99% of the energy as the training samples.
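Step 2 (normalize, compute the covariance, keep the eigenvectors carrying 99% of the eigenvalue energy) can be sketched in a few lines of NumPy. This is an illustrative reading of the step, not code from the patent; the function name `pca_99` and the use of centering as the normalization are our assumptions:

```python
import numpy as np

def pca_99(X):
    """Project X (n_samples x n_features) onto the eigenvectors of the
    covariance matrix that retain 99% of the eigenvalue energy (step 2).
    Centering stands in for the patent's normalization; illustrative only."""
    Xc = X - X.mean(axis=0)                     # normalize (center) the data matrix
    cov = np.cov(Xc, rowvar=False)              # covariance of the normalized data
    U, s, _ = np.linalg.svd(cov)                # eigendecomposition via SVD (cov is PSD)
    energy = np.cumsum(s) / np.sum(s)
    d = int(np.searchsorted(energy, 0.99)) + 1  # smallest d reaching 99% energy
    return Xc @ U[:, :d]                        # training samples in the reduced space

rng = np.random.default_rng(0)
Z = pca_99(rng.normal(size=(40, 10)))
```

Because the covariance matrix is symmetric positive semidefinite, its SVD coincides with its eigenvalue decomposition, which is why the two are interchangeable in step (2c).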
Step 3: construct the regularized graph.
Using the nearest-neighbor method, find the nearest neighbors of each training sample within its class, and set the weight of the edge connecting each training sample with its nearest-neighbor samples to 1/k, where k denotes the number of nearest neighbors; the connections between each training sample and its nearest neighbors constitute the intrinsic graph of the regularized graph.
Using the nearest-neighbor method, find the between-class nearest neighbors of each training sample, and set the weight of the edge connecting each training sample with its between-class nearest-neighbor samples to 1/h, where h denotes the number of between-class nearest neighbors; the connections between each training sample and its between-class nearest neighbors constitute the local penalty graph of the regularized graph.
Set the weight of the edge connecting each training sample with every training sample of a different class to 1/(n − nc), where n denotes the number of all training samples, nc denotes the number of training samples of the class to which each training sample belongs, and c denotes the sample class, c = 1, 2, …, m; these between-class connections constitute the global penalty graph of the regularized graph.
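The three graphs of step 3 can be sketched as weight matrices. The function below is our illustration (Euclidean distances, symmetric weight assignment, equal-size classes assumed for the global graph); `k` and `h` are the neighbor counts from the text, and the 1/k, 1/h and 1/(n − nc) edge weights follow steps (3a)-(3c):

```python
import numpy as np

def regularized_graph_weights(X, labels, k=1, h=1):
    """Weight matrices of the intrinsic graph W (within-class k-NN, weight 1/k),
    local penalty graph H (between-class h-NN, weight 1/h) and global penalty
    graph F (all between-class pairs, weight 1/(n - n_c)). Illustrative sketch."""
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)            # a sample is not its own neighbor
    W = np.zeros((n, n)); H = np.zeros((n, n)); F = np.zeros((n, n))
    for i in range(n):
        same = labels == labels[i]
        same[i] = False
        within = np.where(same)[0]
        between = np.where(labels != labels[i])[0]
        for j in within[np.argsort(dist[i, within])][:k]:
            W[i, j] = W[j, i] = 1.0 / k       # intrinsic-graph edges
        for j in between[np.argsort(dist[i, between])][:h]:
            H[i, j] = H[j, i] = 1.0 / h       # local-penalty edges
        n_c = np.sum(labels == labels[i])
        F[i, between] = 1.0 / (n - n_c)       # global-penalty edges
    return W, H, F

X = np.array([[0.0], [0.1], [5.0], [5.1]])
labels = np.array([0, 0, 1, 1])
W, H, F = regularized_graph_weights(X, labels)
```

With two well-separated classes, the intrinsic graph links each sample only to its class mate, while the penalty graphs link it to the other class, which is exactly the separation the three graphs are meant to encode.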
Step 4: compute the Laplacian matrices of the regularized graph.
According to the following formula, compute the Laplacian matrix of the intrinsic graph from the weight matrix of its connection edges:
Lw = Dw − W
where Lw denotes the Laplacian matrix of the intrinsic graph in the regularized graph, W denotes the weight matrix of the connection edges of the intrinsic graph, and Dw denotes the diagonal matrix whose diagonal entries are the row (or column) sums of W.
According to the following formula, compute the Laplacian matrix of the local penalty graph from the weight matrix of its connection edges:
Lh = Dh − H
where Lh denotes the Laplacian matrix of the local penalty graph in the regularized graph, H denotes the weight matrix of the connection edges of the local penalty graph, and Dh denotes the diagonal matrix whose diagonal entries are the row (or column) sums of H.
According to the following formula, compute the Laplacian matrix of the global penalty graph from the weight matrix of its connection edges:
Lf = Df − F
where Lf denotes the Laplacian matrix of the global penalty graph in the regularized graph, F denotes the weight matrix of the connection edges of the global penalty graph, and Df denotes the diagonal matrix whose diagonal entries are the row (or column) sums of F.
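All three Laplacians of step 4 use the same formula L = D − W, with D the diagonal matrix of the row (or column) sums of the weight matrix. A minimal sketch:

```python
import numpy as np

def graph_laplacian(W):
    """L = D - W, where D is the diagonal matrix of the row sums of the
    weight matrix W; applied alike to W, H and F in step 4."""
    return np.diag(W.sum(axis=1)) - W

W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
L = graph_laplacian(W)      # rows of a graph Laplacian sum to zero
```

The zero row sums are what make the later products X L Xᵀ measure differences between connected samples rather than absolute positions.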
Step 5: compute the scatter matrices of the training samples.
According to the following between-class scatter formula, compute the between-class scatter matrix of the training samples:
Sb = t·X·Lh·X^T + (1 − t)·X·Lf·X^T
where Sb denotes the between-class scatter matrix of the training samples, t denotes a parameter chosen at random in [0, 1] that balances the shares of the local penalty graph and the global penalty graph, · denotes multiplication, X denotes the training samples, ^T denotes transposition, Lf denotes the Laplacian matrix of the global penalty graph in the regularized graph, and Lh denotes the Laplacian matrix of the local penalty graph in the regularized graph.
According to the following global scatter formula, compute the global scatter matrix of the training samples:
St = X·Lw·X^T + Sb
where St denotes the global scatter matrix of the training samples, Lw denotes the Laplacian matrix of the intrinsic graph in the regularized graph, Sb denotes the between-class scatter matrix of the training samples, and X denotes the training samples.
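Given the three Laplacians, the two scatter formulas of step 5 are direct matrix products. In this sketch X holds one training sample per column (so each X L Xᵀ product is feature-by-feature); the function name is our own:

```python
import numpy as np

def scatter_matrices(X, Lw, Lh, Lf, t=0.9):
    """Step 5: Sb = t*X*Lh*X^T + (1-t)*X*Lf*X^T and St = X*Lw*X^T + Sb.
    X is d x n with one sample per column; t in [0, 1] balances the local
    and global penalty graphs. Illustrative sketch of the two formulas."""
    Sb = t * X @ Lh @ X.T + (1 - t) * X @ Lf @ X.T   # between-class scatter
    St = X @ Lw @ X.T + Sb                           # global scatter
    return Sb, St

X = np.arange(6.0).reshape(2, 3)                     # 2 features, 3 samples
L1 = np.diag([1.0, 2.0, 1.0]) - np.array([[0.0, 1.0, 0.0],
                                          [1.0, 0.0, 1.0],
                                          [0.0, 1.0, 0.0]])
Sb, St = scatter_matrices(X, L1, L1, L1, t=0.5)
```

Because the Laplacians are symmetric, both scatter matrices come out symmetric, as scatter matrices should.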
Step 6: judge whether all of the extracted view features have been selected; if so, execute step 7; otherwise, execute step 1.
Step 7: according to the discriminative canonical correlation analysis formula, compute the correlation values between each pair of training samples,
where the left-hand side of the formula denotes the correlation value between the q-th training sample Xq and the r-th training sample Xr, q = 1, 2, …, p, r = 1, 2, …, p, and p denotes the number of view features; A denotes the block-diagonal matrix whose diagonal blocks are all-ones matrices, composed of m blocks of size nc × nc; m denotes the total number of training-sample classes and nc denotes the number of training samples of class c, c = 1, 2, …, m.
Step 8: compute the projected, pairwise-combined view features of the training samples.
According to the regularized-graph-based multi-view recognition formula below, compute the projection vectors of the training samples:
s.t.
where max denotes the maximization operation; the unknowns are the projection vectors of the q-th training sample Xq and of the r-th training sample Xr; the objective contains the between-class scatter matrices of Xq and of Xr; γ denotes a parameter chosen at random in [0, 1] that balances the between-class information of the training samples against the similarity between training samples; s.t. denotes the constraint condition, which involves the global scatter matrices of Xq and of Xr.
According to the combination formula of the projection vectors, compute the projected, pairwise-combined view features,
where Z denotes the view feature after projection and pairwise combination, Xq denotes the q-th training sample, Xr denotes the r-th training sample, and the combination uses the projection vectors of Xq and of Xr.
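The maximization formula of step 8 is not reproduced in this text, but objectives of its shape (scatter matrices in the objective, global scatter matrices in the s.t. constraint) are conventionally solved as generalized eigenvalue problems. The reduction below is our illustration of that standard technique, not the patent's exact derivation:

```python
import numpy as np

def top_projection(M, S):
    """Solve max wᵀMw subject to wᵀSw = 1 as a generalized eigenproblem:
    the maximizer is the eigenvector of S⁻¹M with the largest eigenvalue.
    Objectives of the shape in step 8 reduce to this form; the function
    name and this reduction are our illustration, not the patent's text."""
    vals, vecs = np.linalg.eig(np.linalg.solve(S, M))
    w = vecs[:, np.argmax(vals.real)].real
    return w / np.linalg.norm(w)

M = np.diag([1.0, 3.0, 2.0])   # stand-in objective matrix
S = np.eye(3)                  # stand-in constraint (global scatter) matrix
w = top_projection(M, S)       # aligns with the axis of the largest eigenvalue
```

In practice the constraint matrix is a global scatter matrix, which may be near-singular; a small ridge term on S is a common safeguard before the solve.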
Step 9: recognition and classification.
Using a nearest-neighbor classifier, recognize and classify the projected, pairwise-combined view features.
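Step 9's nearest-neighbor classifier can be sketched as a one-nearest-neighbor rule over Euclidean distances; the arrays and function name here are our illustration:

```python
import numpy as np

def nn_classify(Z_train, y_train, Z_test):
    """1-nearest-neighbor classification (step 9): each projected,
    pairwise-combined test feature takes the label of its closest
    training feature under Euclidean distance."""
    d = np.linalg.norm(Z_test[:, None, :] - Z_train[None, :, :], axis=2)
    return y_train[np.argmin(d, axis=1)]

Z_train = np.array([[0.0, 0.0], [10.0, 10.0]])
y_train = np.array([0, 1])
pred = nn_classify(Z_train, y_train, np.array([[1.0, 1.0], [9.0, 9.0]]))
# pred -> [0, 1]
```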
The effect of the invention is further illustrated by the following simulation experiments.
1. Simulation conditions:
The simulation experiments of the invention were carried out on an HP Compaq 6280 Pro MT PC with 4 GB of memory and implemented with MATLAB 2010a.
The test objects are the Handwritten database, the Caltech101-7 database and the Reuters database.
The Handwritten database is provided by the UCI machine learning repository. It consists of the digits 0 to 9 and contains 2000 digitized binary images in total, 200 images per class. The images come with six groups of numerical features describing them from different views: 76-dimensional Fourier coefficients of the character shapes, 216-dimensional profile correlation coefficients, 64-dimensional Karhunen-Loève coefficients, 240-dimensional pixel averages taken over 2 × 3 windows, 47-dimensional Zernike moments, and 6-dimensional morphological features.
The Caltech101-7 database is drawn from the 101-class image database provided by the California Institute of Technology. The invention selects 7 common classes as the simulation data set, including faces, motorcycles, dollar bills, a cartoon character, Snoopy, stop signs and armchairs; the selected part is called Caltech101-7. From this image set the following features are extracted: 48-dimensional Gabor wavelet features, 40-dimensional wavelet moments, 254-dimensional central features, 1984-dimensional histogram features, 512-dimensional GIST features, and 928-dimensional LBP features.
The Reuters database is a multi-feature text collection provided by Reuters. It contains documents written in 5 different languages (English, French, German, Spanish and Italian) together with their translations; the invention selects the subset written in English and its translations into the other four languages.
2. Simulation contents:
Simulation experiment 1: the invention and the prior-art methods canonical correlation analysis (CCA), discriminative canonical correlation analysis (DCCA), multi-view linear discriminant analysis (MLDA) and multi-view uncorrelated discriminant analysis (MULDA) are applied respectively to the Handwritten, Caltech101-7 and Reuters databases, and the average recognition rate over all view features is computed.
In the Handwritten database, 10 samples of each class are randomly selected to form the labeled training sample set. In the Caltech101-7 database, 10 samples of each class are randomly selected to form the labeled training sample set. In the Reuters database, 20 samples of each class are randomly selected to form the labeled training sample set.
In the simulation experiments, the invention and the CCA, DCCA, MLDA and MULDA methods are applied respectively to classify the pairwise feature samples in each database. The reduced sample matrix Yr is obtained, of size n × r with r = m − 1; each row vector of Yr is classified with a 1-nearest-neighbor classifier to obtain the estimated class labels, and the classification accuracy of Yr is computed; the recognition rates of all feature pairs are then averaged to obtain the average recognition rate over all view features.
In the simulation experiments on the Handwritten database, the parameters are selected as follows: the parameter balancing the local penalty graph and the global penalty graph is t = 0.9, and the parameter balancing the between-class information of each view feature against the similarity between the two views is γ = 0.1.
In the simulation experiments on the Caltech101-7 database, the parameters are selected as follows: t = 0.8 and γ = 0.01.
In the simulation experiments on the Reuters database, the parameters are selected as follows: t = 0.9 and γ = 0.5.
The simulation results are averaged over 20 runs; the average recognition rates on the three databases are shown in Table 1.
Table 1. Average recognition rates on the Handwritten, Caltech101-7 and Reuters databases
Method          Handwritten   Caltech101-7   Reuters
CCA             0.7529        0.4296         0.3650
DCCA            0.8334        0.5929         0.5400
MLDA            0.9212        0.7967         0.7010
MULDA           0.9182        0.7910         0.7000
The invention   0.9411        0.8090         0.7160
As seen from Table 1, the average recognition rate of the invention is 0.9411 on the Handwritten database, 0.8090 on the Caltech101-7 database and 0.7160 on the Reuters database. The conclusion is that the average recognition rate of the invention on these three multi-view databases is higher than that of the existing methods.
Emulation experiment 2: using the Canonical Correlation Analysis CCA of the present invention and the prior art, distinctive canonical correlation point Analysis method DCCA, multi-angle of view linear discriminant analysis method MLDA, and MULDA pairs of multi-angle of view uncorrelated discriminant analysis method Reuters database is emulated, and the discrimination between visual angle characteristic two-by-two is sought.
20 samples that emulation experiment of the invention randomly selects every one kind to Reuters database form markd instruction Practice sample set, the present invention is respectively adopted with CCA, DCCA, MLDA, MULDA method to feature samples two-by-two in Reuters database Carry out Classification and Identification, the sample matrix Y after obtaining dimensionality reductionr, matrix YrSize be n × r, r=m-1, pass through 1 Nearest Neighbor Classifier To matrix YrIn each row vector classify, obtain estimation category, calculate matrix YrClassification accuracy.
In the simulation experiment identifying the Reuters database samples, the parameters are selected as follows:
The parameter balancing the local penalty graph and the global penalty graph is t=0.9, and the parameter balancing the between-class information of each view feature and the similarity between the two views is γ=0.5.
The simulation results of the invention are averaged over 20 runs; the recognition rates between each pair of view features on the Reuters database are shown in Table 2.
Table 2. Recognition rates between pairs of view features on the Reuters database
X   Y   CCA     DCCA    MLDA    MULDA   The invention
EN  FR  0.3680  0.5440  0.6920  0.7040  0.7240
EN  GR  0.3360  0.5400  0.6800  0.6800  0.6840
EN  IT  0.3800  0.5360  0.7120  0.7040  0.7200
EN  SP  0.3760  0.5400  0.7200  0.7120  0.7360
As seen from Table 2, the recognition rate of the invention on the Reuters database is 0.7240 between the EN and FR features, 0.6840 between the EN and GR features, 0.7200 between the EN and IT features, and 0.7360 between the EN and SP features. The conclusion is therefore that the recognition rate of the invention between each pair of view features is also higher than the recognition rates of the existing methods.
Simulation experiment 3: the Handwritten database is simulated using the invention and the prior-art methods of canonical correlation analysis (CCA), discriminative canonical correlation analysis (DCCA), and multi-view linear discriminant analysis (MLDA), and the relationship between the pairwise view-feature recognition rate and the dimension is obtained.
In this simulation experiment, 10 samples of each class are randomly selected from the Handwritten database to form a labeled training sample set. The invention and the CCA, DCCA, and MLDA methods are each used to classify every pair of feature samples in the Handwritten database, yielding the dimension-reduced sample matrix Yr of size n×r, with r=1,2,…,10. Each row vector of the matrix Yr is classified with a 1-nearest-neighbor classifier to obtain the estimated class labels, and the classification accuracy of the matrix Yr is computed.
In the simulation experiment identifying the Handwritten database samples, the parameters are selected as follows:
The parameter balancing the local penalty graph and the global penalty graph is t=0.9, and the parameter balancing the between-class information of each view feature and the similarity between the two views is γ=0.1.
The simulation results of the invention are averaged over 20 runs. Fig. 2(a) shows the classification accuracy between the PIX and MOR features on the Handwritten database as a function of dimension, and Fig. 2(b) shows the histogram of the classification accuracy between the PIX and MOR features on the Handwritten database as a function of dimension.
In Fig. 2(a), the abscissa is the dimension and the ordinate is the recognition rate. The curve marked with "*" shows the CCA recognition rate versus dimension, the curve marked with "o" shows the DCCA recognition rate versus dimension, the curve marked with "+" shows the MLDA recognition rate versus dimension, and the curve marked with a five-pointed star shows the recognition rate of the present method versus dimension.
In Fig. 2(b), the left bar of each pair shows the MLDA recognition rate at the corresponding dimension and the right bar shows the recognition rate of the present method at the corresponding dimension; the abscissa is the dimension and the ordinate is the recognition rate.
From Fig. 2(a) and Fig. 2(b) the following conclusion can be drawn: the classification accuracy of the invention between each pair of view features is better than that of the other existing methods at every dimension, and the effect is especially pronounced at low dimensions.
The above simulation results show that the invention can effectively improve the recognition performance on multi-view data.

Claims (9)

1. A multi-view recognition method based on regularized graphs, comprising the following steps:
(1) Extracting multi-view feature data:
arbitrarily extracting, from a multi-view database, one view feature containing sample data of m classes;
(2) Preprocessing:
(2a) forming a data matrix from the selected view feature, and normalizing the generated data matrix to obtain a normalized data matrix;
(2b) computing the covariance of the normalized data matrix using principal component analysis (PCA) to obtain a covariance matrix;
(2c) performing eigenvalue decomposition of the covariance matrix using the singular value decomposition method, and extracting all the eigenvectors corresponding to the eigenvalues retaining 99% of the energy as training samples;
(3) Constructing the regularized graphs:
(3a) using the nearest-neighbor method, finding the nearest neighbors of each training sample within each class, and setting the weight of the edge connecting each training sample to each of its nearest-neighbor samples to 1/k, where k denotes the number of nearest-neighbor samples; connecting each training sample to its nearest-neighbor samples forms the intrinsic graph among the regularized graphs;
(3b) using the nearest-neighbor method, finding the between-class nearest-neighbor samples of each training sample among the training samples, and setting the weight of the edge connecting each training sample to each of its between-class nearest-neighbor samples to 1/h, where h denotes the number of between-class nearest-neighbor samples; connecting each training sample to its between-class nearest-neighbor samples forms the local penalty graph among the regularized graphs;
(3c) among the training samples, setting the weight of the edge connecting each training sample to each training sample of a different class to 1/(n-nc), where n denotes the total number of training samples, nc denotes the number of training samples in the class containing each training sample within the selected view feature containing sample data of m classes, and c denotes the sample class, c=1,2,…,m; connecting each training sample to the training samples of the other classes forms the global penalty graph among the regularized graphs;
(4) Computing the Laplacian matrices of the regularized graphs:
(4a) computing the Laplacian matrix of the intrinsic graph among the regularized graphs from the weight matrix of the edges of the intrinsic graph;
(4b) computing the Laplacian matrix of the local penalty graph among the regularized graphs from the weight matrix of the edges of the local penalty graph;
(4c) computing the Laplacian matrix of the global penalty graph among the regularized graphs from the weight matrix of the edges of the global penalty graph;
(5) Computing the scatter matrices of the training samples:
(5a) computing the between-class scatter matrix of the training samples using the between-class scatter formula;
(5b) computing the global scatter matrix of the training samples using the global scatter formula;
(6) judging whether all the view features among the extracted view features have been selected; if so, executing step (7); otherwise, executing step (1);
(7) computing the correlation value between each pair of training samples using the discriminative canonical correlation analysis formula;
(8) computing the view features after pairwise projection combination of the training samples:
(8a) computing the projection vectors of the training samples using the regularized-graph-based multi-view recognition formula;
(8b) computing the view features after pairwise projection combination using the projection-vector combination formula;
(9) Recognition and classification:
performing recognition and classification on the view features after pairwise projection combination using a nearest-neighbor classifier.
2. The multi-view recognition method based on regularized graphs according to claim 1, characterized in that: in step (4a), the formula for computing the Laplacian matrix of the intrinsic graph among the regularized graphs from the weight matrix of the edges of the intrinsic graph is as follows:
Lw = Dw - W
where Lw denotes the Laplacian matrix of the intrinsic graph among the regularized graphs, Dw denotes the diagonal matrix whose diagonal elements are the vector obtained by summing the rows or columns of the weight matrix W of the edges of the intrinsic graph, and W denotes the weight matrix of the edges of the intrinsic graph.
3. The multi-view recognition method based on regularized graphs according to claim 1, characterized in that: in step (4b), the formula for computing the Laplacian matrix of the local penalty graph among the regularized graphs from the weight matrix of the edges of the local penalty graph is as follows:
Lh = Dh - H
where Lh denotes the Laplacian matrix of the local penalty graph among the regularized graphs, Dh denotes the diagonal matrix whose diagonal elements are the vector obtained by summing the rows or columns of the weight matrix H of the edges of the local penalty graph, and H denotes the weight matrix of the edges of the local penalty graph.
4. The multi-view recognition method based on regularized graphs according to claim 1, characterized in that: in step (4c), the formula for computing the Laplacian matrix of the global penalty graph among the regularized graphs from the weight matrix of the edges of the global penalty graph is as follows:
Lf = Df - F
where Lf denotes the Laplacian matrix of the global penalty graph among the regularized graphs, Df denotes the diagonal matrix whose diagonal elements are the vector obtained by summing the rows or columns of the weight matrix F of the edges of the global penalty graph, and F denotes the weight matrix of the edges of the global penalty graph.
5. The multi-view recognition method based on regularized graphs according to claim 1, characterized in that: the between-class scatter formula in step (5a) is as follows:
Sb = t*X Lh X^T + (1-t)*X Lf X^T
where Sb denotes the between-class scatter matrix of the training samples, t denotes the parameter, randomly set in [0,1], balancing the shares of the local penalty graph and the global penalty graph, * denotes the multiplication operation, X denotes the training samples, the superscript T denotes the transpose operation, Lf denotes the Laplacian matrix of the global penalty graph among the regularized graphs, and Lh denotes the Laplacian matrix of the local penalty graph among the regularized graphs.
6. The multi-view recognition method based on regularized graphs according to claim 1, characterized in that: the global scatter formula in step (5b) is as follows:
St = X Lw X^T + Sb
where St denotes the global scatter matrix of the training samples, Sb denotes the between-class scatter matrix of the training samples, Lw denotes the Laplacian matrix of the intrinsic graph among the regularized graphs, and X denotes the training samples.
7. The multi-view recognition method based on regularized graphs according to claim 1, characterized in that: the discriminative canonical correlation analysis formula in step (7) is as follows:
where the left-hand side denotes the correlation value between the q-th training sample Xq and the r-th training sample Xr, q=1,2,…,p, r=1,2,…,p, p denotes the number of view features, A denotes a block-diagonal matrix whose diagonal blocks are matrices with all elements equal to 1, the matrix blocks consisting of m matrices of size nc×nc, m denotes the total number of training-sample classes, nc denotes the number of training samples of class c, and c=1,2,…,m.
8. The multi-view recognition method based on regularized graphs according to claim 1, characterized in that: the regularized-graph-based multi-view recognition formula in step (8a) is as follows:
where max denotes the maximization operation; the formula involves the projection vector of the q-th training sample Xq, the projection vector of the r-th training sample Xr, the between-class scatter matrix of Xq, and the between-class scatter matrix of Xr; γ denotes the parameter, randomly set in [0,1], balancing the between-class information of the training samples and the similarity between the training samples; s.t. denotes the constraint condition, which involves the global scatter matrix of Xq and the global scatter matrix of Xr.
9. The multi-view recognition method based on regularized graphs according to claim 1, characterized in that: the projection-vector combination formula in step (8b) is as follows:
where Z denotes the view feature after pairwise projection combination, Xq denotes the q-th training sample, Xr denotes the r-th training sample, and the projection vectors of Xq and Xr are as obtained in step (8a).
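The graph Laplacians of claims 2-4 and the scatter matrices of claims 5 and 6 can be sketched together as follows (a minimal NumPy illustration; the tiny sample matrix X and the hand-written weight matrices W, H, F are toy stand-ins for the nearest-neighbor graphs of claim 1, not data from the patent):

```python
import numpy as np

def laplacian(weight):
    """L = D - W: D is diagonal with the row sums of the weight matrix
    (claims 2-4 use this same construction for W, H, and F)."""
    return np.diag(weight.sum(axis=1)) - weight

# Toy view: 2 features, n = 4 training samples (columns of X), 2 classes
# with nc = 2 samples each.
X = np.array([[1.0, 2.0, 5.0, 6.0],
              [1.0, 1.5, 5.0, 5.5]])

# Intrinsic graph W: within-class nearest neighbors, edge weight 1/k (k=1).
W = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])

# Local penalty graph H: between-class nearest neighbors, weight 1/h (h=1),
# symmetrized.
H = np.array([[0., 0., 1., 0.],
              [0., 0., 1., 1.],
              [1., 1., 0., 0.],
              [0., 1., 0., 0.]])

# Global penalty graph F: every between-class pair, weight 1/(n - nc) = 1/2.
F = 0.5 * np.array([[0., 0., 1., 1.],
                    [0., 0., 1., 1.],
                    [1., 1., 0., 0.],
                    [1., 1., 0., 0.]])

Lw, Lh, Lf = laplacian(W), laplacian(H), laplacian(F)

t = 0.9  # balance between local and global penalty graphs (claim 5)
Sb = t * X @ Lh @ X.T + (1 - t) * X @ Lf @ X.T   # between-class scatter
St = X @ Lw @ X.T + Sb                           # global scatter (claim 6)

print(Sb.shape, St.shape)  # (2, 2) (2, 2)
```

Each Laplacian has zero row sums by construction, and Sb and St are d×d, where d is the feature dimension of the view; the projection vectors of claim 8 would then come from an optimization over these scatter matrices, whose exact form (lost in this extract) is not reproduced here.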
CN201710644457.XA 2017-08-01 2017-08-01 Multi-view recognition method based on regularized graphs Active CN107423767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710644457.XA CN107423767B (en) Multi-view recognition method based on regularized graphs


Publications (2)

Publication Number Publication Date
CN107423767A CN107423767A (en) 2017-12-01
CN107423767B true CN107423767B (en) 2019-11-15

Family

ID=60431691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710644457.XA Active CN107423767B (en) 2017-08-01 2017-08-01 Multi-view recognition method based on regularized graphs

Country Status (1)

Country Link
CN (1) CN107423767B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533078B (en) * 2019-08-02 2022-03-22 西安电子科技大学 Multi-view recognition method based on dictionary pairs

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101441716A * 2008-11-27 2009-05-27 Shanghai Jiao Tong University Identification-oriented recognition system fusing global and local features
CN106203339A * 2016-07-11 2016-12-07 Shandong University Cross-view gait recognition method based on multiple coupled discriminative local patch alignment
CN106934359A * 2017-03-06 2017-07-07 Chongqing University of Posts and Telecommunications Multi-view gait recognition method and system based on high-order tensor subspace learning

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2014152929A1 (en) * 2013-03-14 2014-09-25 Arizona Board Of Regents, A Body Corporate Of The State Of Arizona For And On Behalf Of Arizona State University Measuring glomerular number from kidney mri images


Non-Patent Citations (4)

Title
Global–local Fisher discriminant approach for face recognition; Qianqian Wang et al.; Neural Computing & Applications; 2014-05-06; Vol. 25; pp. 1137-1144 *
MiLDA: A graph embedding approach to multi-view face recognition; Yiwen Guo et al.; Neurocomputing; 2015-03-31; Vol. 151; pp. 1255-1261 *
Multiview Uncorrelated Discriminant Analysis; Shiliang Sun et al.; IEEE Transactions on Cybernetics; 2016-12-31; Vol. 46, No. 12; pp. 3272-3284 *
A local and global margin embedding method for facial image feature extraction; Du Haishun et al.; Computer Science; 2012-09-30; Vol. 39, No. 9; pp. 275-278 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant