CN109389127A - Structured multi-view Hessian regularized sparse feature selection method - Google Patents

Structured multi-view Hessian regularized sparse feature selection method

Info

Publication number
CN109389127A
Authority
CN
China
Prior art keywords
view
feature
matrix
formula
hessian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710693735.0A
Other languages
Chinese (zh)
Other versions
CN109389127B (en)
Inventor
史彩娟
段昌钰
赵丽莉
刘利平
葛超
刘健
闫晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China University of Science and Technology
Original Assignee
North China University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China University of Science and Technology
Priority to CN201710693735.0A
Publication of CN109389127A
Application granted
Publication of CN109389127B
Status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2136 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/478 - Contour-based spectral representations or scale-space representations, e.g. by Fourier analysis, wavelet analysis or curvature scale-space [CSS]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 - Summing image-intensity values; Histogram projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/513 - Sparse representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Mathematical Physics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a structured multi-view Hessian regularized sparse feature selection method, comprising the following steps: acquire the low-level visual features of n original images to obtain an m-view image feature matrix X; set the feature selection mapping matrix of X as the variable G; construct the objective function of structured multi-view Hessian regularized sparse feature selection; compute the feature selection mapping matrix G of X by an iterative algorithm; and, according to the resulting feature selection mapping matrix G_t, sort the row norms ‖(g^(i'))_t‖_2 in descending order and select the features of X corresponding to the first d_s values as the feature subset after feature selection. When performing semi-supervised feature selection on multi-view data, the present invention considers not only the importance of each view but also the importance of different features within the same view; in addition, it uses multi-view Hessian regularization to further improve the performance of semi-supervised sparse feature selection. The present invention therefore has better feature selection performance.

Description

Structured multi-view Hessian regularized sparse feature selection method
Technical field
The invention belongs to the technical field of semi-supervised sparse feature selection, and in particular relates to a structured multi-view Hessian regularized sparse feature selection method.
Background art
In order to better understand, search and classify image data, many visual features have been proposed, such as shape features, color features and texture features. Each type of feature describes the image data from a particular space and has a specific physical meaning and statistical property. Traditionally, each type of feature can be regarded as a view, so data represented by different types of features are called multi-view data. How to obtain the effective information of multi-view data has become a research hotspot in the field of feature selection analysis.
The most straightforward approach is to directly concatenate the multi-view data into one long feature vector. This approach is simple, but direct concatenation largely destroys the potential associations between the features of different views and also lacks a physical interpretation.
To overcome the shortcomings of directly concatenating multi-view data, multi-view learning has been widely studied in recent years and applied to feature selection analysis. These methods can effectively exploit the complementarity and correlation between the features of different views, but they assume that all features within the same view have the same importance, so all features within a view are assigned the same weight. In fact, however, different features within the same view have different importance.
Therefore, feature selection needs to consider not only the importance of each view but also the importance of different features within the same view, which can further improve feature selection performance. Recently, some work has attempted this: Wang et al. proposed the group l1-norm (G1-norm) and, based on it, a supervised sparse multi-modal learning method with joint structured sparse regularization for heterogeneous feature integration, as well as an unsupervised multi-view learning model that integrates all features.
Since Hessian regularization has better extrapolation ability than graph Laplacian regularization, it also has better semi-supervised learning ability. In recent years, semi-supervised feature selection methods based on Hessian regularization have been proposed. However, when facing multi-view data, these semi-supervised feature selection methods do not take the characteristics of multi-view data into account well when constructing the Hessian regularizer, and they ignore the associations and complementarity between the features of different views.
Summary of the invention
In view of the deficiencies of the prior art, the object of the present invention is to provide a structured multi-view Hessian regularized sparse feature selection method.
To this end, the technical scheme of the present invention is as follows:
A structured multi-view Hessian regularized sparse feature selection method, comprising the following steps:
1) Acquire the low-level visual features of n original images to obtain an m-view image feature matrix X, where

X = [X^1; X^2; …; X^m] ∈ R^(d×n), d = d_1 + d_2 + … + d_m   (1)

In formula (1), d_v is the image feature dimension of the v-th view and X^v is the image feature matrix of the v-th view, with

X^v = [x_1^v, x_2^v, …, x_l^v, x_(l+1)^v, …, x_n^v] ∈ R^(d_v×n)   (2)

In formula (2), x_1^v, x_2^v, …, x_l^v are the feature vectors of the l labeled images among the n original images under the v-th view, and x_(l+1)^v, …, x_n^v are the feature vectors of the n-l unlabeled images among the n original images under the v-th view.
In step 1), the low-level visual features include: color correlogram, wavelet texture and edge orientation histogram.
2) Set the feature selection mapping matrix of the X of step 1) as the variable G, with

G = [g_1, g_2, …, g_c] ∈ R^(d×c)   (3)

In formula (3), c is the number of label classes of the n original images.
Construct the objective function of structured multi-view Hessian regularized sparse feature selection:

min_(G,F,η) tr(F^T H F) + tr((F - Y)^T U (F - Y)) + μ‖X^T G - F‖_F^2 + λ‖G‖_G1 + γ‖G‖_(2,1/2)^(1/2)   (4)

In formula (4):
λ‖G‖_G1 + γ‖G‖_(2,1/2)^(1/2) is the structured multi-view sparse constraint, where λ and γ are regularization coefficients whose value range is [10^-5, 10^5]; ‖G‖_G1 = Σ_(i=1)^c Σ_(v=1)^m ‖g_i^v‖_2 is the G1-norm of G, where g_i^v is the block of the i-th column of G belonging to the v-th view; ‖G‖_(2,1/2)^(1/2) = Σ_(i'=1)^d ‖g^(i')‖_2^(1/2) is the l_(2,1/2) matrix norm term of G, with g^(i') = [g_1^(i') … g_c^(i')] ∈ R^(1×c) the i'-th row of G, 1 ≤ i' ≤ d;
H is the multi-view Hessian,

H = Σ_(v=1)^m η_v^ε H_v, with Σ_(v=1)^m η_v = 1 and η_v ≥ 0   (5)

In formulas (4) and (5), H_v is the Hessian of the v-th view; the variable F is the predicted label matrix of the n original images; tr(F^T H F) is the multi-view Hessian regularizer; η_v is the weight of the v-th view Hessian in the multi-view Hessian regularizer; ε is the exponent of η_v, ε > 1;
tr((F - Y)^T U (F - Y)) + μ‖X^T G - F‖_F^2 is the loss function;
μ is a regularization coefficient whose value range is [10^-5, 10^5];
Y = [y_1, y_2, …, y_l, y_(l+1), …, y_n]^T ∈ {0,1}^(n×c) is the label matrix of the n original images;
the diagonal matrix U ∈ R^(n×n) is the decision rule matrix determined from X, U = (U_(i''i''))_(n×n), 1 ≤ i'' ≤ n; when 1 ≤ i'' ≤ l, the diagonal element U_(i''i'') = ∞, and when l < i'' ≤ n, the diagonal element U_(i''i'') = 1.
In step 2), F^v is the predicted label matrix of the v-th view, F^v ∈ R^(n×c).
3) Compute the feature selection mapping matrix G of X by an iterative algorithm. Let G_s be the value of G in the s-th iteration and η_(v,s) the value of η_v in the s-th iteration, s = 1, 2, …, t-1. Let G_1 be a random matrix and η_(v,1) = 1/m; substitute G_1 and η_(v,1) into the iterative algorithm as initial values and iterate until, after the (t-1)-th iteration, the difference between the value of the objective function at the (t-1)-th iteration and its value at the (t-2)-th iteration is less than 10^-3; the iteration is then complete. At this point, the feature selection mapping matrix G_t determined from (g_i)_t is the feature selection mapping matrix G of X, where G_t = [(g_1)_t, (g_2)_t, …, (g_c)_t].
The calculation process of each iteration is as follows:
Substitute η_(v,s) into formula (5) to obtain the value H_s of H in the s-th iteration;
substitute G_s and H_s into formulas (6) and (7) to compute P_s and Q_s:

P_s = (H_s + U + μI)^(-1)   (6)

Q_s = UY + μX^T G_s   (7)

Compute F_s, A_s and B_s according to formulas (8), (9) and (10), F_s being the value of F in the s-th iteration:

F_s = P_s Q_s   (8)

A_s = X(μI - μ^2 P_s^T)X^T   (9)

B_s = μX P_s U Y   (10)

where I in formula (6) is the identity matrix;
compute, according to formulas (11) and (12), the diagonal matrix W_s with diagonal elements (w_(i'i'))_s and the block diagonal matrices (D_i)_s (1 ≤ i ≤ c) with j diagonal blocks, where, in formula (12), I_j is the j×j identity matrix;
substitute A_s, B_s, W_s and (D_i)_s into formula (13) to obtain (g_i)_(s+1):

(g_i)_(s+1) = (A_s + 4λW_s + γ(D_i)_s)^(-1) (b_i)_s, 1 ≤ i ≤ c   (13)

where (b_i)_s is the i-th column of B_s;
finally, compute η_(v,s+1) by minimizing formula (4) with respect to η_v under the constraint Σ_v η_v = 1.
4) According to the feature selection mapping matrix G_t obtained in step 3), sort ‖(g^(i'))_t‖_2, 1 ≤ i' ≤ d, in descending order and select the features of X corresponding to the first d_s values as the feature subset after feature selection, where (g^(i'))_t is the row g^(i') of the G_t obtained in step 3).
Compared with the prior art, the present invention proposes a structured multi-view Hessian sparse feature selection method (Structured Multi-view Hessian sparse Feature Selection, SMHFS). When facing multi-view data, the structured multi-view sparse regularization ensures that not only the importance of each view but also the importance of different features within the same view is considered during semi-supervised feature selection. In addition, SMHFS uses multi-view Hessian regularization to further improve the performance of semi-supervised sparse feature selection. Experimental evaluation shows that SMHFS outperforms existing feature selection algorithms and has better feature selection performance.
Description of the drawings
Fig. 1(a) shows how the mean average precision (Mean Average Precision, MAP) performance on the NUS-WIDE database varies with the percentage of labeled data; Fig. 1(b) shows how the MAP performance on the MSRA-MM database varies with the percentage of labeled data;
Fig. 2(a) shows how the performance of the proposed method SMHFS and of the method MLSFS on the NUS-WIDE database varies with the number of selected features d_s; Fig. 2(b) shows how the performance of the proposed method SMHFS and of the method MLSFS on the MSRA-MM database varies with the number of selected features d_s;
Fig. 3(a) shows how the performance of the proposed method SMHFS and of the methods SMML and SFUS on the NUS-WIDE database varies with the number of selected features d_s; Fig. 3(b) shows how the performance of the proposed method SMHFS and of the methods SMML and SFUS on the MSRA-MM database varies with the number of selected features d_s.
Specific embodiment
The structured multi-view Hessian regularized sparse feature selection method of the present invention is described in detail below with reference to the accompanying drawings; it comprises the following steps:
1) Acquire the low-level visual features of n original images to obtain an m-view image feature matrix X, where

X = [X^1; X^2; …; X^m] ∈ R^(d×n), d = d_1 + d_2 + … + d_m   (1)

In formula (1), d_v is the image feature dimension of the v-th view and X^v is the image feature matrix of the v-th view, with

X^v = [x_1^v, x_2^v, …, x_l^v, x_(l+1)^v, …, x_n^v] ∈ R^(d_v×n)   (2)

In formula (2), x_1^v, x_2^v, …, x_l^v are the feature vectors of the l labeled images among the n original images under the v-th view, and x_(l+1)^v, …, x_n^v are the feature vectors of the n-l unlabeled images among the n original images under the v-th view.
In step 1), the low-level visual features include: color correlogram, wavelet texture and edge orientation histogram. These three low-level visual features are extracted for each image sample, namely the 144-dimensional color correlogram, the 128-dimensional wavelet texture, and the 73-dimensional (NUS-WIDE database) or 75-dimensional (MSRA-MM database) edge orientation histogram; therefore, the dimension d of the multi-view image feature set obtained on the two image databases is 345 or 347, respectively.
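For illustration only, the stacking in formula (1) can be sketched in a few lines of NumPy; the array names, the number of images, and the use of random data are assumptions, with the view dimensions following the NUS-WIDE setup described above:

```python
import numpy as np

# Hypothetical per-view feature matrices for n images (random stand-ins
# for real extracted features); dimensions follow the NUS-WIDE setup above.
n = 1000
views = {
    "color_correlogram": np.random.rand(144, n),
    "wavelet_texture":   np.random.rand(128, n),
    "edge_histogram":    np.random.rand(73, n),
}

dims = [V.shape[0] for V in views.values()]  # d_v for each view
X = np.vstack(list(views.values()))          # formula (1): X in R^(d x n)
d = X.shape[0]                               # d = d_1 + d_2 + d_3 = 345
```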
2) Set the feature selection mapping matrix of the X of step 1) as the variable G, with

G = [g_1, g_2, …, g_c] ∈ R^(d×c)   (3)

In formula (3), c is the number of label classes of the n original images.
Construct the objective function of structured multi-view Hessian regularized sparse feature selection:

min_(G,F,η) tr(F^T H F) + tr((F - Y)^T U (F - Y)) + μ‖X^T G - F‖_F^2 + λ‖G‖_G1 + γ‖G‖_(2,1/2)^(1/2)   (4)

In formula (4):
λ‖G‖_G1 + γ‖G‖_(2,1/2)^(1/2) is the structured multi-view sparse constraint (the 1/2 at the upper right of ‖G‖_(2,1/2) is an exponent). This structured multi-view sparse constraint guarantees that not only the importance of each view but also the importance of different features within each view is considered during feature selection, so as to achieve good feature selection performance.
Here λ and γ are regularization coefficients whose value range is [10^-5, 10^5]; ‖G‖_G1 = Σ_(i=1)^c Σ_(v=1)^m ‖g_i^v‖_2 is the G1-norm of G, where g_i^v is the block of the i-th column of G belonging to the v-th view; ‖G‖_(2,1/2)^(1/2) = Σ_(i'=1)^d ‖g^(i')‖_2^(1/2) is the l_(2,1/2) matrix norm term of G, with g^(i') = [g_1^(i') … g_c^(i')] ∈ R^(1×c), 1 ≤ i' ≤ d. Choosing the l_(2,1/2) matrix norm guarantees that the selected features are the most discriminative.
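As a sketch of the two sparsity terms in formula (4), assuming the group structure of the G1-norm described above (all function and variable names are illustrative, with λ and γ set to the NUS-WIDE values used in the experiments below):

```python
import numpy as np

def g1_norm(G, dims):
    """G1-norm: sum over classes i and views v of ||g_i^v||_2,
    where `dims` lists the feature dimension d_v of each view."""
    total, start = 0.0, 0
    for dv in dims:
        block = G[start:start + dv, :]                # rows of view v
        total += np.linalg.norm(block, axis=0).sum()  # ||g_i^v||_2 per class i
        start += dv
    return total

def l2_half_norm(G):
    """l_{2,1/2} term: sum over rows i' of ||g^{i'}||_2^{1/2}."""
    return np.sqrt(np.linalg.norm(G, axis=1)).sum()

G = np.random.rand(345, 25)                           # assumed d x c matrix
sparse_term = 1e5 * g1_norm(G, [144, 128, 73]) + 1.0 * l2_half_norm(G)
```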
H is the multi-view Hessian,

H = Σ_(v=1)^m η_v^ε H_v, with Σ_(v=1)^m η_v = 1 and η_v ≥ 0   (5)

In formulas (4) and (5), H_v is the Hessian of the v-th view; the variable F is the predicted label matrix of the n original images; tr(F^T H F) is the multi-view Hessian regularizer, which takes into account the associations and complementarity between the features of different views while having better semi-supervised learning ability. η_v is the weight of the v-th view Hessian in the multi-view Hessian regularizer; ε is the exponent of η_v, ε > 1. The parameter ε is introduced to guarantee that each view makes a specific contribution to the final feature selection. The multi-view Hessian regularizer can make full use of the complementary information between different views to improve feature selection performance; at the same time, Hessian regularization has better extrapolation ability than graph Laplacian regularization and therefore better semi-supervised learning ability.
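A minimal sketch of assembling the multi-view Hessian of formula (5); the per-view Hessians H_v here are random symmetric placeholders, since their estimation from the data is not detailed in this text, and a small ε is used so the weights do not underflow (the experiments below use a much larger ε):

```python
import numpy as np

n, m, eps = 500, 3, 2.0          # samples, views, exponent ε > 1 (assumed)

# Placeholder per-view Hessians H_v: random symmetric stand-ins.
H_views = []
for _ in range(m):
    A = np.random.rand(n, n)
    H_views.append(A + A.T)

eta = np.full(m, 1.0 / m)        # initial weights η_v = 1/m, as in step 3)

# Formula (5): H = sum_v η_v^ε H_v
H = sum((w ** eps) * Hv for w, Hv in zip(eta, H_views))
```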
tr((F - Y)^T U (F - Y)) + μ‖X^T G - F‖_F^2 is the loss function;
μ is a regularization coefficient whose value range is [10^-5, 10^5];
Y = [y_1, y_2, …, y_l, y_(l+1), …, y_n]^T ∈ {0,1}^(n×c) is the label matrix of the n original images;
the diagonal matrix U ∈ R^(n×n) is the decision rule matrix determined from X, U = (U_(i''i''))_(n×n), 1 ≤ i'' ≤ n; when 1 ≤ i'' ≤ l, the diagonal element U_(i''i'') = ∞, and when l < i'' ≤ n, the diagonal element U_(i''i'') = 1. This guarantees that the predicted label matrix F is consistent with the existing label matrix Y. (For the method of determining the decision rule matrix, see: Z.G. Ma, F.P. Nie, Y. Yang, J. Uijlings, N. Sebe, and A. Hauptmann, "Discriminating joint feature analysis for multimedia data understanding," IEEE Trans. Multimedia, vol. 14, no. 6, pp. 1662–1672, Dec. 2012.)
In step 2), F^v is the predicted label matrix of the v-th view, F^v ∈ R^(n×c).
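For illustration, the label matrix Y and the decision rule matrix U can be formed as follows; a minimal sketch, assuming the l labeled images come first and substituting a large finite constant for the infinite diagonal entries:

```python
import numpy as np

n, l, c = 500, 50, 25     # images, labeled images, label classes (assumed)
BIG = 1e12                # finite stand-in for the infinite diagonal entries

# Y in {0,1}^(n x c): one-hot rows for the l labeled images (random here).
Y = np.zeros((n, c))
Y[np.arange(l), np.random.randint(0, c, size=l)] = 1.0

# U: diagonal decision rule matrix, "infinite" for labeled samples, 1 otherwise.
U = np.diag(np.concatenate([np.full(l, BIG), np.ones(n - l)]))
```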
3) Compute the feature selection mapping matrix G of X by an iterative algorithm. Let G_s be the value of G in the s-th iteration and η_(v,s) the value of η_v in the s-th iteration, where s is the iteration number, s = 1, 2, …, t-1. Let G_1 be a random matrix and η_(v,1) = 1/m; substitute G_1 and η_(v,1) into the iterative algorithm as initial values and iterate until, after the (t-1)-th iteration, the difference between the value of the objective function at the (t-1)-th iteration and its value at the (t-2)-th iteration is less than 10^-3; the iteration is then complete. At this point, the feature selection mapping matrix G_t determined from (g_i)_t is the feature selection mapping matrix G of X, where G_t = [(g_1)_t, (g_2)_t, …, (g_c)_t].
The calculation process of each iteration is as follows:
Substitute η_(v,s) into formula (5) to obtain the value H_s of H in the s-th iteration;
substitute G_s and H_s into formulas (6) and (7) to compute P_s and Q_s (P_s and Q_s are the values of P and Q in the s-th iteration):

P_s = (H_s + U + μI)^(-1)   (6)

Q_s = UY + μX^T G_s   (7)

Compute F_s, A_s and B_s according to formulas (8), (9) and (10), F_s being the value of F in the s-th iteration:

F_s = P_s Q_s   (8)

A_s = X(μI - μ^2 P_s^T)X^T   (9)

B_s = μX P_s U Y   (10)

where A_s and B_s are the values of A and B in the s-th iteration, and I in formula (6) is the identity matrix;
compute, according to formulas (11) and (12), the diagonal matrix W_s with diagonal elements (w_(i'i'))_s and the block diagonal matrices (D_i)_s (1 ≤ i ≤ c) with j diagonal blocks, where, in formula (12), I_j is the j×j identity matrix;
substitute A_s, B_s, W_s and (D_i)_s into formula (13) to obtain (g_i)_(s+1):

(g_i)_(s+1) = (A_s + 4λW_s + γ(D_i)_s)^(-1) (b_i)_s, 1 ≤ i ≤ c   (13)

where (b_i)_s is the i-th column of B_s;
finally, compute η_(v,s+1) by minimizing formula (4) with respect to η_v under the constraint Σ_v η_v = 1, which gives the closed-form update

η_(v,s+1) = (tr(F_s^T H_v F_s))^(1/(1-ε)) / Σ_(u=1)^m (tr(F_s^T H_u F_s))^(1/(1-ε)).
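The updates (6)-(10) and (13) translate directly into linear algebra. Below is an illustrative sketch of one iteration; since formulas (11) and (12) for W_s and (D_i)_s are not reproduced in this text, the sketch takes those reweighting matrices as precomputed inputs, and it applies formula (13) to the i-th column of B_s so that the dimensions match:

```python
import numpy as np

def smhfs_iteration(X, Y, U, H_s, G_s, W_s, D_s, mu, lam, gamma):
    """One iteration: formulas (6)-(10) and (13).
    X: d x n, Y: n x c, U and H_s: n x n, G_s: d x c;
    W_s: d x d diagonal matrix (formula (11), assumed given);
    D_s: list of c block-diagonal d x d matrices (formula (12), assumed given).
    Returns the updated G and the predicted label matrix F_s."""
    d, n = X.shape
    c = Y.shape[1]
    I_n = np.eye(n)

    P_s = np.linalg.inv(H_s + U + mu * I_n)         # formula (6)
    Q_s = U @ Y + mu * X.T @ G_s                    # formula (7)
    F_s = P_s @ Q_s                                 # formula (8)
    A_s = X @ (mu * I_n - mu**2 * P_s.T) @ X.T      # formula (9)
    B_s = mu * X @ P_s @ U @ Y                      # formula (10)

    # Formula (13), class by class, using the i-th column of B_s.
    G_next = np.empty((d, c))
    for i in range(c):
        M = A_s + 4.0 * lam * W_s + gamma * D_s[i]
        G_next[:, i] = np.linalg.solve(M, B_s[:, i])
    return G_next, F_s
```

With F_s and the per-view Hessians in hand, the η_v update described above then only needs the per-view traces tr(F_s^T H_v F_s).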
4) According to the feature selection mapping matrix G_t obtained in step 3), sort ‖(g^(i'))_t‖_2, 1 ≤ i' ≤ d, in descending order and select the features of X corresponding to the first d_s values as the feature subset after feature selection, where (g^(i'))_t is the row g^(i') of the G_t obtained in step 3) and d_s is the number of selected features. (See: Z.G. Ma, F.P. Nie, Y. Yang, J. Uijlings, N. Sebe, and A. Hauptmann, "Discriminating joint feature analysis for multimedia data understanding," IEEE Trans. Multimedia, vol. 14, no. 6, pp. 1662–1672, Dec. 2012.)
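A sketch of this final ranking step, assuming G_t comes from the iteration above:

```python
import numpy as np

def select_features(G_t, ds):
    """Rank features by the l2 norm of their row of G_t (descending)
    and return the indices of the top ds features."""
    row_norms = np.linalg.norm(G_t, axis=1)   # ||(g^{i'})_t||_2 per row
    order = np.argsort(-row_norms)            # descending order of norms
    return order[:ds]

# Example: keep the 250 top-ranked rows of X as the selected feature subset.
# X_selected = X[select_features(G_t, 250), :]
```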
To verify the semi-supervised sparse feature selection performance of the proposed SMHFS algorithm on multi-view data, it was compared experimentally with four other semi-supervised sparse feature selection algorithms, namely SMBLR (Sparse Multinomial Logistic Regression via Bayesian l1 Regularization), FSNM (Feature Selection via Joint l2,1-Norms Minimization), FSLG (semi-supervised sparse Feature Selection based on Graph Laplacian) and MLSFS (Multi-view Laplacian Sparse Feature Selection), and with two supervised feature selection algorithms, SFUS (Sub-Feature Uncovering with Sparsity) and SMML (Sparse Multi-Modal Learning). Among these, the semi-supervised algorithms SMBLR, FSNM and FSLG and the supervised algorithm SFUS directly concatenate multi-view data into one long feature vector when performing feature selection on multi-view data; the semi-supervised method MLSFS considers the complementarity between different views but treats each view as a whole; and the supervised method SMML considers both the importance of different views and the importance of different features within the same view.
The experiments were carried out on two image databases, NUS-WIDE and MSRA-MM, using three low-level multi-view features: color correlogram, wavelet texture and edge orientation histogram. For the semi-supervised feature selection methods, the percentage of labeled data was set to 5%, 10%, 20% and 50%, respectively. To better assess the performance of the feature selection algorithms, five evaluation criteria were used: mean average precision (MAP), Recall, Precision, MicroAUC and MacroAUC. To test the influence of the number of selected features on performance, the number of selected features d_s was set to 100, 150, 200, 250, 300 and all, respectively. The regularization parameters μ, λ and γ were set as follows: μ = 10, λ = 100000, γ = 1 on the NUS-WIDE database; μ = 1000, λ = 100000, γ = 1 on the MSRA-MM database. Empirically, the parameter ε was set to 10^10. Each experiment was run 10 times, and the average was taken as the final result.
In the experiments, the proposed SMHFS algorithm was compared with the other four semi-supervised sparse feature selection algorithms; the comparison results are shown in Fig. 1, Table 1 and Table 2. The comparison results show that no matter how the percentage of labeled data changes, the proposed SMHFS algorithm has the best performance.
Table 1. Performance comparison (NUS-WIDE database)
Table 2. Performance comparison (MSRA-MM database)
In addition, SMHFS was compared with the two supervised feature selection algorithms; for this comparison the percentage of labeled data in SMHFS was set to 100%. The results are shown in Table 3. The experimental results show that the proposed SMHFS algorithm has the best feature selection performance.
Table 3. Performance comparison
In summary, when facing multi-view data, the proposed SMHFS algorithm has better feature selection performance, which is mainly attributed to the following three points: first, by constructing the structured multi-view sparse regularization, SMHFS can simultaneously consider the importance of different views and the importance of different features within the same view when performing feature selection on multi-view data; second, by constructing the multi-view Hessian regularization, it considers the complementarity between different views, further improving the performance of semi-supervised feature selection; finally, since the SMHFS algorithm is based on the l2,1/2 matrix norm, it can guarantee that the selected features are the most discriminative.
To verify the influence of the number of selected features d_s on performance, d_s was set to 100, 150, 200, 250, 300 and all, respectively. In addition, the SMHFS algorithm was compared with the semi-supervised feature selection method MLSFS, and with the supervised feature selection methods SFUS and SMML. The experimental results are shown in Fig. 2 and Fig. 3, from which the following conclusions can be drawn:
1) when the number of selected features d_s is 250, SMHFS has the best performance;
2) compared with the other algorithms, the performance of SMHFS is better.
The experimental results again illustrate that, based on structured multi-view sparse regularization and multi-view Hessian regularization, the proposed SMHFS algorithm can better perform feature selection on multi-view data and achieve good feature selection performance.
The present invention constructs a structured multi-view sparse regularization that guarantees that both the importance of each view and the importance of different features within the same view are considered during feature selection, thereby achieving better feature selection performance. Meanwhile, the method constructs a multi-view Hessian regularization that can make good use of the complementarity between the features of different views and, compared with graph Laplacian regularization, has better semi-supervised learning performance. Therefore, when facing multi-view data, SMHFS can better realize semi-supervised sparse feature selection. The experimental results show that the SMHFS method outperforms traditional single-view feature selection methods, feature selection methods that directly concatenate multi-view data, and related state-of-the-art feature selection methods.
The above is an illustrative description of the present invention. It should be noted that, without departing from the core of the invention, any simple variation, modification or other equivalent replacement that a person skilled in the art can make without creative effort falls within the protection scope of the present invention.

Claims (5)

1. A structured multi-view Hessian regularized sparse feature selection method, characterized by comprising the following steps:
1) acquiring the low-level visual features of n original images to obtain an m-view image feature matrix X, wherein

X = [X^1; X^2; …; X^m] ∈ R^(d×n), d = d_1 + d_2 + … + d_m   (1)

in formula (1), d_v is the image feature dimension of the v-th view and X^v is the image feature matrix of the v-th view, with

X^v = [x_1^v, x_2^v, …, x_l^v, x_(l+1)^v, …, x_n^v] ∈ R^(d_v×n)   (2)

in formula (2), x_1^v, x_2^v, …, x_l^v are the feature vectors of the l labeled images among the n original images under the v-th view, and x_(l+1)^v, …, x_n^v are the feature vectors of the n-l unlabeled images among the n original images under the v-th view;
2) setting the feature selection mapping matrix of the X of step 1) as the variable G, with

G = [g_1, g_2, …, g_c] ∈ R^(d×c)   (3)

in formula (3), c being the number of label classes of the n original images;
and constructing the objective function of structured multi-view Hessian regularized sparse feature selection:

min_(G,F,η) tr(F^T H F) + tr((F - Y)^T U (F - Y)) + μ‖X^T G - F‖_F^2 + λ‖G‖_G1 + γ‖G‖_(2,1/2)^(1/2)   (4)

in formula (4):
λ‖G‖_G1 + γ‖G‖_(2,1/2)^(1/2) is the structured multi-view sparse constraint, wherein λ and γ are regularization coefficients, ‖G‖_G1 = Σ_(i=1)^c Σ_(v=1)^m ‖g_i^v‖_2 is the G1-norm of G, and ‖G‖_(2,1/2)^(1/2) = Σ_(i'=1)^d ‖g^(i')‖_2^(1/2) is the l_(2,1/2) matrix norm term of G, with g^(i') = [g_1^(i') … g_c^(i')] ∈ R^(1×c), 1 ≤ i' ≤ d;
H is the multi-view Hessian,

H = Σ_(v=1)^m η_v^ε H_v, with Σ_(v=1)^m η_v = 1 and η_v ≥ 0   (5)

in formulas (4) and (5), H_v is the Hessian of the v-th view; the variable F is the predicted label matrix of the n original images; tr(F^T H F) is the multi-view Hessian regularizer; η_v is the weight of the v-th view Hessian in the multi-view Hessian regularizer; ε is the exponent of η_v, ε > 1;
tr((F - Y)^T U (F - Y)) + μ‖X^T G - F‖_F^2 is the loss function;
μ is a regularization coefficient;
Y = [y_1, y_2, …, y_l, y_(l+1), …, y_n]^T ∈ {0,1}^(n×c) is the label matrix of the n original images;
the diagonal matrix U ∈ R^(n×n) is the decision rule matrix determined from X, U = (U_(i''i''))_(n×n), 1 ≤ i'' ≤ n; when 1 ≤ i'' ≤ l, the diagonal element U_(i''i'') = ∞, and when l < i'' ≤ n, the diagonal element U_(i''i'') = 1;
3) computing the feature selection mapping matrix G of the X by an iterative algorithm, wherein G_s is the value of G in the s-th iteration and η_(v,s) is the value of η_v in the s-th iteration, s = 1, 2, …, t-1; G_1 is a random matrix and η_(v,1) = 1/m; G_1 and η_(v,1) are substituted into the iterative algorithm as initial values and the iteration proceeds until, after the (t-1)-th iteration, the difference between the value of the objective function at the (t-1)-th iteration and its value at the (t-2)-th iteration is less than 10^-3, whereupon the iteration is complete; at this point, the feature selection mapping matrix G_t determined from (g_i)_t is the feature selection mapping matrix G of the X, where G_t = [(g_1)_t, …, (g_c)_t];
the calculation process of each iteration being as follows:
substituting η_(v,s) into formula (5) to obtain the value H_s of H in the s-th iteration;
substituting G_s and H_s into formulas (6) and (7) to compute P_s and Q_s:

P_s = (H_s + U + μI)^(-1)   (6)

Q_s = UY + μX^T G_s   (7)

computing F_s, A_s and B_s according to formulas (8), (9) and (10), F_s being the value of F in the s-th iteration:

F_s = P_s Q_s   (8)

A_s = X(μI - μ^2 P_s^T)X^T   (9)

B_s = μX P_s U Y   (10)

wherein I in formula (6) is the identity matrix;
computing, according to formulas (11) and (12), the diagonal matrix W_s with diagonal elements (w_(i'i'))_s and the block diagonal matrices (D_i)_s (1 ≤ i ≤ c) with j diagonal blocks, wherein, in formula (12), I_j is the j×j identity matrix;
substituting A_s, B_s, W_s and (D_i)_s into formula (13) to obtain (g_i)_(s+1):

(g_i)_(s+1) = (A_s + 4λW_s + γ(D_i)_s)^(-1) (b_i)_s, 1 ≤ i ≤ c   (13)

wherein (b_i)_s is the i-th column of B_s;
and computing η_(v,s+1) by minimizing formula (4) with respect to η_v under the constraint Σ_v η_v = 1;
4) according to the feature selection mapping matrix G_t obtained in step 3), sorting ‖(g^(i'))_t‖_2, 1 ≤ i' ≤ d, in descending order and selecting the features of the X corresponding to the first d_s values as the feature subset after feature selection, wherein (g^(i'))_t is the row g^(i') of the G_t obtained in step 3).
2. The structured multi-view Hessian regularized sparse feature selection method according to claim 1, characterized in that, in step 1), the low-level visual features include: color correlogram, wavelet texture and edge orientation histogram.
3. The structured multi-view Hessian regularized sparse feature selection method according to claim 1, characterized in that the value range of λ and γ is [10^-5, 10^5].
4. The structured multi-view Hessian regularized sparse feature selection method according to claim 1, characterized in that the value range of μ is [10^-5, 10^5].
5. The structured multi-view Hessian regularized sparse feature selection method according to claim 1, characterized in that, in step 2), F^v is the predicted label matrix of the v-th view, F^v ∈ R^(n×c).
CN201710693735.0A 2017-08-14 2017-08-14 Structured multi-view Hessian regularization sparse feature selection method Active CN109389127B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710693735.0A CN109389127B (en) 2017-08-14 2017-08-14 Structured multi-view Hessian regularization sparse feature selection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710693735.0A CN109389127B (en) 2017-08-14 2017-08-14 Structured multi-view Hessian regularization sparse feature selection method

Publications (2)

Publication Number Publication Date
CN109389127A 2019-02-26
CN109389127B 2021-05-07

Family

ID=65416463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710693735.0A Active CN109389127B (en) 2017-08-14 2017-08-14 Structured multi-view Hessian regularization sparse feature selection method

Country Status (1)

Country Link
CN (1) CN109389127B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699874A (en) * 2013-10-28 2014-04-02 中国计量学院 Crowd abnormal behavior identification method based on SURF (Speed-Up Robust Feature) stream and LLE (Locally Linear Embedding) sparse representation
CN104374395A (en) * 2014-03-31 2015-02-25 南京邮电大学 Graph-based vision SLAM (simultaneous localization and mapping) method
US20170046614A1 (en) * 2015-08-11 2017-02-16 Oracle International Corporation Accelerated tr-l-bfgs algorithm for neural network
CN105740917A (en) * 2016-03-21 2016-07-06 哈尔滨工业大学 High-resolution remote sensing image semi-supervised multi-view feature selection method with tag learning

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188825A (en) * 2019-05-31 2019-08-30 山东师范大学 Image clustering method, system, equipment and medium based on discrete multiple view cluster
CN110188825B (en) * 2019-05-31 2020-01-31 山东师范大学 Image clustering method, system, device and medium based on discrete multi-view clustering
CN111783816A (en) * 2020-02-27 2020-10-16 北京沃东天骏信息技术有限公司 Feature selection method and device, multimedia and network data dimension reduction method and equipment

Also Published As

Publication number Publication date
CN109389127B (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN109241317B (en) Pedestrian Hash retrieval method based on measurement loss in deep learning network
Lisanti et al. Group re-identification via unsupervised transfer of sparse features encoding
Forero et al. Robust clustering using outlier-sparsity regularization
Hu et al. Efficient 3-d scene analysis from streaming data
CN110738647B (en) Mouse detection method integrating multi-receptive-field feature mapping and Gaussian probability model
CN114730490A (en) System and method for virtual reality and augmented reality
Hu et al. Delving into deep representations for remote sensing image retrieval
CN110598061A (en) Multi-element graph fused heterogeneous information network embedding method
CN111125397B (en) Cloth image retrieval method based on convolutional neural network
CN103020321B (en) Neighbor search method and system
Jouili et al. Median graph shift: A new clustering algorithm for graph domain
Chebbout et al. Comparative study of clustering based colour image segmentation techniques
Srivastava et al. Deeppoint3d: Learning discriminative local descriptors using deep metric learning on 3d point clouds
CN109389127A (en) Structuring multiple view Hessian regularization sparse features selection method
CN112651317A (en) Hyperspectral image classification method and system for sample relation learning
CN110188864B (en) Small sample learning method based on distribution representation and distribution measurement
Lu et al. Spectral segmentation via midlevel cues integrating geodesic and intensity
Prince et al. Bayesian identity clustering
CN112329818A (en) Hyperspectral image unsupervised classification method based on graph convolution network embedded representation
CN109241628B (en) Three-dimensional CAD model segmentation method based on graph theory and clustering
JP6220737B2 (en) Subject area extraction apparatus, method, and program
CN109948421B (en) Hyperspectral image classification method based on PCA and attribute configuration file
CN112837299A (en) Textile image fingerprint retrieval method
Joshi et al. Image similarity: A genetic algorithm based approach
Tuan et al. ColorRL: reinforced coloring for end-to-end instance segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant