CN104951651A - Non-negative image data dimension reduction method based on Hessian regular constraint and A optimization - Google Patents


Publication number
CN104951651A
CN104951651A (application CN201510293897.6A)
Authority
CN
China
Prior art keywords
matrix
column element
intermediary
row
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510293897.6A
Other languages
Chinese (zh)
Other versions
CN104951651B (en)
Inventor
Haifeng Liu (刘海风)
Genmao Yang (杨根茂)
Zheng Yang (杨政)
Zhaohui Wu (吴朝晖)
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201510293897.6A priority Critical patent/CN104951651B/en
Publication of CN104951651A publication Critical patent/CN104951651A/en
Application granted granted Critical
Publication of CN104951651B publication Critical patent/CN104951651B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a non-negative image data dimension reduction method based on a Hessian regularization constraint and A-optimality. The method includes the following steps: step 1, a sample feature matrix is constructed; step 2, a Hessian regularization matrix is calculated; step 3, a basis matrix and a coefficient matrix are produced iteratively for use in cluster analysis. An A-optimality regularization term and a Hessian regularization term are added to the objective function so that the data representation obtained through decomposition preserves the intrinsic features of the manifold structure of the original data while guaranteeing a small prediction error. Dimension reduction removes the redundant information in the high-dimensional data and extracts a low-dimensional representation that accurately captures the semantic structure of the data, so that cluster analysis of high-dimensional data becomes simpler and more effective.

Description

A non-negative image data dimension reduction method based on a Hessian regularization constraint and A-optimality
Technical field
The invention belongs to the field of image data processing technology, and specifically relates to a non-negative image data dimension reduction method based on a Hessian regularization constraint and A-optimality.
Background technology
In image processing fields such as image clustering, image recognition and image-based scene recognition, the scale of data has recently grown explosively, and the image data to be processed are typically very high-dimensional. These massive high-dimensional feature data pose many challenges for image storage and processing. Fortunately, researchers have found that the intrinsic dimensionality of such image data is much lower than its original dimensionality. Replacing the original high-dimensional representation of the data with a low-dimensional one is called dimension reduction. For many data analysis methods, dimension reduction alleviates the curse of dimensionality and effectively reduces the computational complexity of the analysis itself; it can even improve the results of some methods, since the reduced data can be more discriminative and thus yield better cluster analysis results. In recent years researchers have proposed many classic dimension reduction techniques, such as Vector Quantization (VQ), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Independent Component Analysis (ICA), Sparse Coding, Locality Preserving Projection (LPP) and Non-negative Matrix Factorization (NMF).
Among all these methods, Non-negative Matrix Factorization is one of the most frequently used. The basic step of NMF is to decompose the raw data matrix into two factor matrices whose product effectively approximates the raw data. One factor matrix can be regarded as a set of bases for the raw data, each basis vector carrying some of the data's intrinsic semantics; the other factor matrix is regarded as a coefficient matrix, describing how the raw data relate to each basis vector, and is equivalent to a new representation of the raw data in a lower-dimensional space. NMF further requires that the raw data matrix be non-negative (i.e. every element of the matrix is non-negative) and that both the basis matrix and the coefficient matrix obtained from the decomposition be non-negative. This representation, built from combinations of basis vectors, has a very intuitive semantic interpretation: it reflects the "parts form the whole" concept in human thinking. In real applications, text matrices, image matrices and the like are inherently non-negative, and the number of bases found by NMF is usually far smaller than the original dimensionality of the data. NMF can therefore compress the data effectively, which is convenient for downstream learning tasks such as clustering and classification.
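As background, the basic NMF decomposition described above can be sketched with the classic multiplicative update rules (a generic illustration, not the patented algorithm; matrix sizes, iteration counts and names here are arbitrary choices of ours):

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=200, eps=1e-10, seed=0):
    """Plain NMF via multiplicative updates: X (m x n) ~= U (m x r) @ V (r x n),
    with U, V kept element-wise non-negative throughout."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, r)) + eps
    V = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # U <- U * (X V^T) / (U V V^T); eps guards against division by zero
        U *= (X @ V.T) / (U @ V @ V.T + eps)
        # V <- V * (U^T X) / (U^T U V)
        V *= (U.T @ X) / (U.T @ U @ V + eps)
    return U, V

# Toy non-negative data: the rank-5 factorization approximates X
X = np.random.default_rng(1).random((20, 30))
U, V = nmf_multiplicative(X, r=5)
err = np.linalg.norm(X - U @ V) / np.linalg.norm(X)
```

Because the updates only multiply by non-negative ratios, non-negativity of `U` and `V` is preserved automatically, which is the property the text emphasizes.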
In recent years, researchers have continued to propose improved NMF methods, such as Concept Factorization (CF), Graph regularized Non-negative Matrix Factorization (GNMF) and Constrained Non-negative Matrix Factorization (CNMF). Although each of these methods makes progress on its particular sub-problem, they either do not use a linear regression model to constrain the reduced data representation, or do not take into account the intrinsic geometric properties contained in the manifold structure of the raw image data. The bases obtained by these methods may therefore lie far from the raw data, and data representations built from such bases are clearly not optimal.
Summary of the invention
To address the above technical problems in the prior art, the invention provides a non-negative image data dimension reduction method based on a Hessian regularization constraint and A-optimality. It can effectively extract low-dimensional feature representations; the resulting low-dimensional representation has stronger discriminative power and clearly improves subsequent image data analysis.
The non-negative image data dimension reduction method based on a Hessian regularization constraint and A-optimality comprises the following steps:
(1) obtain an image sample set, obtain the feature vector of each image sample in the set by image feature extraction, and build the sample feature matrix X of the image sample set;
(2) according to the sample feature matrix X, calculate the corresponding Hessian regularization matrix based on the Hessian energy principle;
(3) according to the sample feature matrix X and the Hessian regularization matrix, solve for the basis matrix U and the coefficient matrix V through a Non-negative Matrix Factorization iterative algorithm based on the A-optimality and Hessian regularization constraints, and take the coefficient matrix V as the low-dimensional feature representation of the image data.
The Non-negative Matrix Factorization iterative algorithm is based on the following iterative system of equations:
U^{t} = \hat{U}^{t-1} (S^{t-1})^{-\frac{1}{2}}

V^{t} = (S^{t-1})^{\frac{1}{2}} \hat{V}^{t-1}

S^{t-1} = [\mathrm{diag}((\hat{U}^{t-1})^{T} \hat{U}^{t-1})]

\hat{u}^{t-1}_{(j,k)} = u^{t-1}_{(j,k)} \frac{a^{t-1}_{(j,k)}}{b^{t-1}_{(j,k)}}

\hat{v}^{t-1}_{(k,i)} = v^{t-1}_{(k,i)} \frac{c^{t-1}_{(k,i)}}{d^{t-1}_{(k,i)}}

p^{t}_{(i,k)} = p^{t-1}_{(i,k)} \frac{e^{t-1}_{(i,k)}}{f^{t-1}_{(i,k)}}

A^{t-1} = X (V^{t-1})^{T}

B^{t-1} = U^{t-1} [V^{t-1} (V^{t-1})^{T}]

E^{t-1} = (V^{t-1})^{T}

F^{t-1} = P^{t-1} V^{t-1} (V^{t-1})^{T} + \beta P^{t-1}

\hat{g}^{t-1}_{(k,i)} = \begin{cases} g^{t-1}_{(k,i)}, & g^{t-1}_{(k,i)} \ge 0 \\ 0, & g^{t-1}_{(k,i)} < 0 \end{cases}

\hat{q}^{t-1}_{(k,l)} = \begin{cases} q^{t-1}_{(k,l)}, & q^{t-1}_{(k,l)} \ge 0 \\ 0, & q^{t-1}_{(k,l)} < 0 \end{cases}

G^{t-1} = (P^{t-1})^{T}

Q^{t-1} = (P^{t-1})^{T} P^{t-1}
Wherein: U^t and U^{t-1} are the basis matrices at iterations t and t-1, V^t and V^{t-1} are the coefficient matrices at iterations t and t-1, and P^t and P^{t-1} are the auxiliary matrices at iterations t and t-1; [\mathrm{diag}(\cdot)] denotes the diagonal matrix built from the diagonal elements of the matrix in parentheses; \hat{u}^{t-1}_{(j,k)} is the element in row j, column k of the intermediary matrix \hat{U}^{t-1}, and u^{t-1}_{(j,k)} is the element in row j, column k of the basis matrix U^{t-1}; \hat{v}^{t-1}_{(k,i)} is the element in row k, column i of the intermediary matrix \hat{V}^{t-1}, and v^{t-1}_{(k,i)} is the element in row k, column i of the coefficient matrix V^{t-1}; p^{t}_{(i,k)} and p^{t-1}_{(i,k)} are the elements in row i, column k of the auxiliary matrices P^{t} and P^{t-1} respectively; a^{t-1}_{(j,k)} and b^{t-1}_{(j,k)} are the elements in row j, column k of the intermediary matrices A^{t-1} and B^{t-1}; c^{t-1}_{(k,i)} and d^{t-1}_{(k,i)} are the elements in row k, column i of the intermediary matrices C^{t-1} and D^{t-1}; e^{t-1}_{(i,k)} and f^{t-1}_{(i,k)} are the elements in row i, column k of the intermediary matrices E^{t-1} and F^{t-1}; ^T denotes matrix transposition; g^{t-1}_{(k,i)} and \hat{g}^{t-1}_{(k,i)} are the elements in row k, column i of the intermediary matrices G^{t-1} and \hat{G}^{t-1}, and q^{t-1}_{(k,l)} and \hat{q}^{t-1}_{(k,l)} are the elements in row k, column l of the intermediary matrices Q^{t-1} and \hat{Q}^{t-1}; t is the iteration count; i, j, k and l are natural numbers with 1 ≤ i ≤ n, 1 ≤ j ≤ m, 1 ≤ k ≤ r and 1 ≤ l ≤ r; n is the number of columns of the sample feature matrix X, i.e. the number of image samples in the set; m is the number of rows of X, i.e. the number of features of each image sample; r is the number of rows of the coefficient matrix V, i.e. the dimensionality of X after reduction; and α, β and λ are preset iteration coefficients.
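The first three equations of the system are a pure rescaling: S^{t-1} collects the squared column norms of \hat{U}^{t-1}, so right-multiplying by (S^{t-1})^{-1/2} normalizes the columns of the basis matrix while the compensating factor on \hat{V}^{t-1} keeps the product unchanged. A minimal sketch of just this step (an illustration under that reading; function and variable names are ours):

```python
import numpy as np

def rescale(U_hat, V_hat):
    """Rescaling step: S = diag(U_hat^T U_hat), U = U_hat S^{-1/2}, V = S^{1/2} V_hat.
    Normalizes each column of U to unit Euclidean norm without changing U @ V."""
    s = np.sqrt(np.diag(U_hat.T @ U_hat))  # column norms of U_hat (= sqrt of S's diagonal)
    U = U_hat / s                          # right-multiply by S^{-1/2}
    V = V_hat * s[:, None]                 # left-multiply by S^{1/2} compensates exactly
    return U, V

rng = np.random.default_rng(0)
U_hat = rng.random((6, 3)) + 0.1
V_hat = rng.random((3, 8)) + 0.1
U, V = rescale(U_hat, V_hat)
```

Since U V = \hat{U} S^{-1/2} S^{1/2} \hat{V} = \hat{U} \hat{V}, the objective value is unaffected; only the scale ambiguity between the two factors is fixed.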
The iteration coefficients α, β and λ satisfy the following relation:
λ=αβ
The stopping criterion of the Non-negative Matrix Factorization iterative algorithm is as follows:
\frac{O^{t-1} - O^{t}}{O^{t-1}} < \rho
Wherein: Tr(·) denotes the trace of the matrix in parentheses, I is the identity matrix, γ is an iteration coefficient set from practical experience, O^t and O^{t-1} are the objective function values at iterations t and t-1 respectively, and ρ is a preset convergence threshold.
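The stopping test compares the relative decrease of the objective against ρ. A minimal helper (illustrative naming, not from the patent text):

```python
def converged(obj_prev, obj_curr, rho=1e-5):
    """Relative-decrease stopping test: (O^{t-1} - O^t) / O^{t-1} < rho."""
    return (obj_prev - obj_curr) / obj_prev < rho

# A drop of 1e-7 relative to the previous objective is below rho, so we stop;
# a drop of 50% is well above rho, so iteration continues.
small_drop = converged(1.0, 1.0 - 1e-7)
big_drop = converged(1.0, 0.5)
```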
The advantageous technical effects of the invention are as follows:
(1) Statistics-based data encoding. Compared with traditional NMF-based image data dimension reduction methods, the invention is based on a ridge regression model and adds an A-optimality regularization term to the objective function, so that the data representation obtained from the decomposition obeys a stable linear model with a small prediction error, regardless of how the labelled data values are chosen.
(2) The intrinsic geometry of the raw image data is considered. Compared with traditional NMF-based image data dimension reduction methods, the invention adds a Hessian regularization term to the NMF objective function, so that the data representation obtained from the decomposition preserves the intrinsic characteristics contained in the manifold structure of the original image data.
(3) Effectiveness for high-dimensional data. Compared with analysing the raw data directly, the invention removes the redundant information in the high-dimensional data through dimension reduction and extracts a low-dimensional representation that accurately captures the semantic structure of the data, making analyses such as clustering of high-dimensional images simpler and more effective.
Accompanying drawing explanation
Fig. 1 is a flow diagram of the non-negative image data dimension reduction method based on the Hessian regularization constraint and A-optimality according to the invention.
Embodiment
To describe the invention more concretely, the technical solution of the invention is described in detail below with reference to the drawings and a specific embodiment.
As shown in Fig. 1, the non-negative image data dimension reduction method based on the Hessian regularization constraint and A-optimality comprises the following steps:
(1) Build the sample feature matrix.
This embodiment uses the MNIST handwritten digit data set; the statistics of this data set are shown in Table 1:
Table 1

Data set    Handwritten digit images    Digit classes    Pixels per image
MNIST       4000                        10               784
The MNIST data set used here contains 4000 handwritten digit images covering 10 different digits (400 images per digit); the grayscale values of the image pixels are used directly as image features.
Two classes of examples are chosen from the MNIST data set as the original high-dimensional data set (i.e. the cluster number l = 2), and the corresponding sample feature matrix X is built. X is an m × n matrix, where m is the number of features per sample (i.e. the number of pixels per image) and n is the number of samples (i.e. the number of images): n = 2 × 400 = 800, m = 784. Each element of the sample feature matrix is the value of the corresponding pixel of the corresponding image.
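Constructing X amounts to flattening each grayscale image into a column vector. A sketch (using synthetic stand-in images, since the MNIST subset itself is not reproduced here):

```python
import numpy as np

def build_feature_matrix(images):
    """Stack flattened grayscale images as the columns of X (m = pixels, n = samples)."""
    X = np.stack([img.ravel() for img in images], axis=1).astype(float)
    assert (X >= 0).all(), "NMF requires a non-negative data matrix"
    return X

# 800 synthetic 28x28 'digit' images (pixel values 0..255) -> X is 784 x 800
imgs = [np.random.default_rng(i).integers(0, 256, (28, 28)) for i in range(800)]
X = build_feature_matrix(imgs)
```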
(2) Calculate the Hessian regularization matrix.
For each sample point X_i in the sample feature matrix, find the τ sample points in the sample matrix nearest to X_i, and construct a neighbourhood matrix from these nearest sample points.
Each neighbourhood matrix has a corresponding mapping function that maps the index of any neighbour of the sample point X_i to that neighbour's original index ω in the sample feature matrix.
Perform a singular value decomposition of the neighbourhood matrix, take the singular vectors corresponding to its w largest singular values, and construct the matrix θ^(i):
From the matrix θ^(i) corresponding to each sample point, compute the matrix H^(i) according to the following formula:
H^{(i)} = \sum_{l=1}^{\frac{w(w+1)}{2}} \theta^{(i)}_{l} (\theta^{(i)}_{l})^{T}
For each matrix θ^(i), use the mapping function to compute the matrix Y^(i):
The Hessian regularization matrix is then calculated according to the following formula:
Wherein: X is the m × n sample feature matrix, and X_i denotes a column vector of X; the neighbourhood matrix is of dimension τ × m; θ^(i) is the matrix constructed above; H^(i) is a τ × τ matrix, whose element in row μ, column υ is used in the assembly; Y^(i) is an n × n matrix, whose rows and columns are indexed through the mapping ω; the Hessian regularization matrix is of dimension n × n; τ is a given value, the number of sample points used when estimating the tangent space; w is a given value, the dimensionality of the tangent space; i, τ, w, ω, μ and υ are natural numbers with 1 ≤ i ≤ n, 1 ≤ τ ≤ n, 1 ≤ w ≤ τ, 1 ≤ ω ≤ n, 1 ≤ μ ≤ τ and 1 ≤ υ ≤ τ.
In this embodiment, the parameters are set by cross-validation: τ = 4, w = 3, r = l.
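The neighbourhood-plus-SVD part of step (2) — find the τ nearest samples of X_i and estimate a w-dimensional tangent basis from them — can be sketched as follows. Only this recoverable part of the construction is shown; the subsequent θ^(i)/H^(i) assembly is omitted, and centring the neighbourhood before the SVD is our assumption, following common tangent-space estimation practice:

```python
import numpy as np

def tangent_basis(X, i, tau=4, w=3):
    """For sample column X[:, i]: find its tau nearest neighbours (Euclidean),
    centre them, and take the w leading right-singular directions as a local
    tangent-space estimate."""
    d = np.linalg.norm(X - X[:, [i]], axis=0)   # distances from X_i to every sample
    idx = np.argsort(d)[1:tau + 1]              # tau nearest, excluding X_i itself
    N = X[:, idx].T                             # tau x m neighbourhood matrix
    Nc = N - N.mean(axis=0)                     # centre the neighbourhood
    _, _, Vt = np.linalg.svd(Nc, full_matrices=False)
    return idx, Vt[:w]                          # neighbour indices, w x m basis

rng = np.random.default_rng(0)
X = rng.random((10, 50))                        # 10 features, 50 samples
idx, B = tangent_basis(X, 0)
```

The rows of `B` are orthonormal by construction of the SVD, matching the role of a tangent-space basis.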
(3) Iteratively output the basis matrix and the coefficient matrix for cluster analysis.
According to the sample feature matrix X and the Hessian regularization matrix, the basis matrix U and the coefficient matrix V are solved iteratively through the following system of equations:
U^{t} = \hat{U}^{t-1} (S^{t-1})^{-\frac{1}{2}}

V^{t} = (S^{t-1})^{\frac{1}{2}} \hat{V}^{t-1}

S^{t-1} = [\mathrm{diag}((\hat{U}^{t-1})^{T} \hat{U}^{t-1})]

\hat{u}^{t-1}_{(j,k)} = u^{t-1}_{(j,k)} \frac{a^{t-1}_{(j,k)}}{b^{t-1}_{(j,k)}}

\hat{v}^{t-1}_{(k,i)} = v^{t-1}_{(k,i)} \frac{c^{t-1}_{(k,i)}}{d^{t-1}_{(k,i)}}

p^{t}_{(i,k)} = p^{t-1}_{(i,k)} \frac{e^{t-1}_{(i,k)}}{f^{t-1}_{(i,k)}}

A^{t-1} = X (V^{t-1})^{T}

B^{t-1} = U^{t-1} V^{t-1} (V^{t-1})^{T}

\grave{g}^{t-1}_{(k,i)} = \begin{cases} -g^{t-1}_{(k,i)}, & g^{t-1}_{(k,i)} < 0 \\ 0, & g^{t-1}_{(k,i)} \ge 0 \end{cases}

\grave{q}^{t-1}_{(k,\kappa)} = \begin{cases} -q^{t-1}_{(k,\kappa)}, & q^{t-1}_{(k,\kappa)} < 0 \\ 0, & q^{t-1}_{(k,\kappa)} \ge 0 \end{cases}

G^{t-1} = (P^{t-1})^{T}

Q^{t-1} = (P^{t-1})^{T} P^{t-1}

E^{t-1} = (V^{t-1})^{T}

F^{t-1} = P^{t-1} V^{t-1} (V^{t-1})^{T} + \beta P^{t-1}

\lambda = \alpha \beta

\frac{O^{t-1} - O^{t}}{O^{t-1}} < \rho
Wherein: X is the sample feature matrix; U is the m × r basis matrix; V is the r × n coefficient matrix; the Hessian regularization matrix is of dimension n × n. U^t, V^t and P^t are respectively the m × r basis matrix, the r × n coefficient matrix and the n × r auxiliary iteration matrix after t iterations, with u^{t}_{(j,k)}, v^{t}_{(k,i)} and p^{t}_{(i,k)} their elements in row j column k, row k column i and row i column k respectively. U^{t-1}, V^{t-1} and P^{t-1} are respectively the m × r basis matrix, the r × n coefficient matrix and the n × r auxiliary iteration matrix after t-1 iterations, with u^{t-1}_{(j,k)}, v^{t-1}_{(k,i)} and p^{t-1}_{(i,k)} their elements in row j column k, row k column i and row i column k respectively. S^{t-1} is an r × r matrix. A^{t-1} and B^{t-1} are m × r matrices, with a^{t-1}_{(j,k)} and b^{t-1}_{(j,k)} their elements in row j, column k. \hat{V}^{t-1}, C^{t-1} and D^{t-1} are r × n matrices, with \hat{v}^{t-1}_{(k,i)}, c^{t-1}_{(k,i)} and d^{t-1}_{(k,i)} their elements in row k, column i. E^{t-1} and F^{t-1} are n × r matrices, with e^{t-1}_{(i,k)} and f^{t-1}_{(i,k)} their elements in row i, column k. G^{t-1}, \hat{G}^{t-1} and \grave{G}^{t-1} are r × n matrices, with g^{t-1}_{(k,i)}, \hat{g}^{t-1}_{(k,i)} and \grave{g}^{t-1}_{(k,i)} their elements in row k, column i. Q^{t-1}, \hat{Q}^{t-1} and \grave{Q}^{t-1} are r × r matrices, with q^{t-1}_{(k,κ)}, \hat{q}^{t-1}_{(k,κ)} and \grave{q}^{t-1}_{(k,κ)} their elements in row k, column κ. Tr(·) denotes the matrix trace. r is a given value, the dimensionality of the internal representation in the Non-negative Matrix Factorization. α, β and γ are iteration coefficients set from practical experience, and ρ is a convergence threshold set from practical experience. i, j, k and κ are natural numbers with 1 ≤ i ≤ n, 1 ≤ j ≤ m, 1 ≤ k ≤ r and 1 ≤ κ ≤ r.
In this embodiment, the parameters are set by cross-validation: α = 100, β = 0.01, γ = 1000, r = l, ρ = 10^-5. U^0, V^0 and P^0 are randomly initialized matrices.
When the iteration converges (i.e. the last inequality of the system of equations holds) or the maximum number of iterations is reached (500 in this embodiment), the current U^t and V^t are the required basis matrix U and coefficient matrix V.
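The iterate-until-converged-or-500 control flow described above can be sketched generically. Here the update step is a plain NMF multiplicative update used only as a stand-in, since the full patented update (the C, D and P-coupled terms) is not fully reproduced in this text; the driver itself follows the stated scheme (random non-negative initialization, relative-decrease test against ρ, maximum of 500 iterations):

```python
import numpy as np

def iterate(X, r, update_step, objective, rho=1e-5, max_iter=500, seed=0):
    """Generic driver: random init of U, V, P; repeat update_step until the
    relative objective decrease falls below rho or max_iter is reached."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U, V, P = rng.random((m, r)), rng.random((r, n)), rng.random((n, r))
    obj_prev = objective(X, U, V, P)
    for t in range(1, max_iter + 1):
        U, V, P = update_step(X, U, V, P)
        obj = objective(X, U, V, P)
        if (obj_prev - obj) / obj_prev < rho:   # the stopping inequality
            break
        obj_prev = obj
    return U, V, P, t

def nmf_update(X, U, V, P, eps=1e-10):
    """Stand-in update: plain multiplicative NMF step; P is carried unchanged."""
    U = U * (X @ V.T) / (U @ V @ V.T + eps)
    V = V * (U.T @ X) / (U.T @ U @ V + eps)
    return U, V, P

def frob(X, U, V, P):
    """Stand-in objective: squared Frobenius reconstruction error."""
    return np.linalg.norm(X - U @ V) ** 2

X = np.random.default_rng(1).random((15, 20))
U, V, P, t = iterate(X, 3, nmf_update, frob)
```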
The cluster number l is then set in turn to 3, 4 and 10, and the clustering results after preprocessing by six methods — k-means on the raw data (no dimension reduction), NMF (Non-negative Matrix Factorization), PCA (Principal Component Analysis), CF (Concept Factorization), GNMF (Graph regularized Non-negative Matrix Factorization) and this embodiment — are compared using two indices: Accuracy and Normalized Mutual Information (NMI). The final results are shown in Table 2.
Accuracy measures the percentage of correctly labelled data points:
Normalized mutual information measures the correlation between two sets; given two sets C and C':
MI(C, C') = \sum_{c_i \in C,\, c'_j \in C'} p(c_i, c'_j) \cdot \log \frac{p(c_i, c'_j)}{p(c_i) \cdot p(c'_j)}

NMI(C, C') = \frac{MI(C, C')}{\max(H(C), H(C'))}
Wherein: p(c_i) and p(c'_j) denote the probabilities that a data point chosen arbitrarily from the data set belongs to c_i and c'_j respectively, p(c_i, c'_j) denotes the probability that it belongs to both classes simultaneously, and H(C) and H(C') denote the entropies of C and C' respectively.
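The NMI formulas above translate directly into code (an illustrative implementation using empirical probabilities):

```python
import numpy as np
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information: NMI = MI(C, C') / max(H(C), H(C')),
    with p(.) estimated as empirical frequencies over the label lists."""
    n = len(labels_a)
    pa, pb = Counter(labels_a), Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    # MI(C, C') = sum p(c, c') log[ p(c, c') / (p(c) p(c')) ]
    mi = sum((c / n) * np.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in pab.items())
    entropy = lambda p: -sum((c / n) * np.log(c / n) for c in p.values())
    return mi / max(entropy(pa), entropy(pb))
```

NMI is 1 for identical clusterings (up to label permutation) and 0 for independent ones, which is why it is a convenient clustering-quality index here.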
Table 2
As Table 2 shows, this embodiment significantly improves the clustering effect and discriminative power compared with the five prior-art matrix decomposition methods.

Claims (4)

1. A non-negative image data dimension reduction method based on a Hessian regularization constraint and A-optimality, comprising the steps of:
(1) obtaining an image sample set, obtaining the feature vector of each image sample in the set by image feature extraction, and building the sample feature matrix X of the image sample set;
(2) according to the sample feature matrix X, calculating the corresponding Hessian regularization matrix based on the Hessian energy principle;
(3) according to the sample feature matrix X and the Hessian regularization matrix, solving for the basis matrix U and the coefficient matrix V through a Non-negative Matrix Factorization iterative algorithm based on the A-optimality and Hessian regularization constraints, and taking the coefficient matrix V as the low-dimensional feature representation of the image data.
2. The non-negative image data dimension reduction method according to claim 1, characterized in that the Non-negative Matrix Factorization iterative algorithm is based on the following iterative system of equations:
U^{t} = \hat{U}^{t-1} (S^{t-1})^{-\frac{1}{2}}

V^{t} = (S^{t-1})^{\frac{1}{2}} \hat{V}^{t-1}

S^{t-1} = [\mathrm{diag}((\hat{U}^{t-1})^{T} \hat{U}^{t-1})]

\hat{u}^{t-1}_{(j,k)} = u^{t-1}_{(j,k)} \frac{a^{t-1}_{(j,k)}}{b^{t-1}_{(j,k)}}

\hat{v}^{t-1}_{(k,i)} = v^{t-1}_{(k,i)} \frac{c^{t-1}_{(k,i)}}{d^{t-1}_{(k,i)}}

p^{t}_{(i,k)} = p^{t-1}_{(i,k)} \frac{e^{t-1}_{(i,k)}}{f^{t-1}_{(i,k)}}

A^{t-1} = X (V^{t-1})^{T}

B^{t-1} = U^{t-1} [V^{t-1} (V^{t-1})^{T}]

E^{t-1} = (V^{t-1})^{T}

F^{t-1} = P^{t-1} V^{t-1} (V^{t-1})^{T} + \beta P^{t-1}

\hat{g}^{t-1}_{(k,i)} = \begin{cases} g^{t-1}_{(k,i)}, & g^{t-1}_{(k,i)} \ge 0 \\ 0, & g^{t-1}_{(k,i)} < 0 \end{cases}

\hat{q}^{t-1}_{(k,l)} = \begin{cases} q^{t-1}_{(k,l)}, & q^{t-1}_{(k,l)} \ge 0 \\ 0, & q^{t-1}_{(k,l)} < 0 \end{cases}

G^{t-1} = (P^{t-1})^{T}

Q^{t-1} = (P^{t-1})^{T} P^{t-1}
Wherein: U^t and U^{t-1} are the basis matrices at iterations t and t-1, V^t and V^{t-1} are the coefficient matrices at iterations t and t-1, and P^t and P^{t-1} are the auxiliary matrices at iterations t and t-1; [\mathrm{diag}(\cdot)] denotes the diagonal matrix built from the diagonal elements of the matrix in parentheses; \hat{u}^{t-1}_{(j,k)} is the element in row j, column k of the intermediary matrix \hat{U}^{t-1}, and u^{t-1}_{(j,k)} is the element in row j, column k of the basis matrix U^{t-1}; \hat{v}^{t-1}_{(k,i)} is the element in row k, column i of the intermediary matrix \hat{V}^{t-1}, and v^{t-1}_{(k,i)} is the element in row k, column i of the coefficient matrix V^{t-1}; p^{t}_{(i,k)} and p^{t-1}_{(i,k)} are the elements in row i, column k of the auxiliary matrices P^{t} and P^{t-1} respectively; a^{t-1}_{(j,k)} and b^{t-1}_{(j,k)} are the elements in row j, column k of the intermediary matrices A^{t-1} and B^{t-1}; c^{t-1}_{(k,i)} and d^{t-1}_{(k,i)} are the elements in row k, column i of the intermediary matrices C^{t-1} and D^{t-1}; e^{t-1}_{(i,k)} and f^{t-1}_{(i,k)} are the elements in row i, column k of the intermediary matrices E^{t-1} and F^{t-1}; ^T denotes matrix transposition; g^{t-1}_{(k,i)} and \hat{g}^{t-1}_{(k,i)} are the elements in row k, column i of the intermediary matrices G^{t-1} and \hat{G}^{t-1}, and q^{t-1}_{(k,l)} and \hat{q}^{t-1}_{(k,l)} are the elements in row k, column l of the intermediary matrices Q^{t-1} and \hat{Q}^{t-1}; t is the iteration count; i, j, k and l are natural numbers with 1 ≤ i ≤ n, 1 ≤ j ≤ m, 1 ≤ k ≤ r and 1 ≤ l ≤ r; n is the number of columns of the sample feature matrix X, i.e. the number of image samples in the set; m is the number of rows of X, i.e. the number of features of each image sample; r is the number of rows of the coefficient matrix V, i.e. the dimensionality of X after reduction; and α, β and λ are preset iteration coefficients.
3. The non-negative image data dimension reduction method according to claim 2, characterized in that the iteration coefficients α, β and λ satisfy the following relation:
λ = αβ.
4. The non-negative image data dimension reduction method according to claim 2, characterized in that the stopping criterion of the Non-negative Matrix Factorization iterative algorithm is as follows:
\frac{O^{t-1} - O^{t}}{O^{t-1}} < \rho
Wherein: Tr(·) denotes the trace of the matrix in parentheses, I is the identity matrix, γ is an iteration coefficient set from practical experience, O^t and O^{t-1} are the objective function values at iterations t and t-1 respectively, and ρ is a preset convergence threshold.
CN201510293897.6A 2015-06-01 2015-06-01 Non-negative image data dimension reduction method based on Hessian regularization constraint and A-optimality Active CN104951651B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510293897.6A CN104951651B (en) 2015-06-01 2015-06-01 Non-negative image data dimension reduction method based on Hessian regularization constraint and A-optimality


Publications (2)

Publication Number Publication Date
CN104951651A true CN104951651A (en) 2015-09-30
CN104951651B CN104951651B (en) 2017-08-15

Family

ID=54166305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510293897.6A Active CN104951651B (en) 2015-06-01 2015-06-01 Non-negative image data dimension reduction method based on Hessian regularization constraint and A-optimality

Country Status (1)

Country Link
CN (1) CN104951651B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411610A (en) * 2011-10-12 2012-04-11 浙江大学 Semi-supervised dimensionality reduction method for high dimensional data clustering
CN102779162A (en) * 2012-06-14 2012-11-14 浙江大学 Matrix concept decomposition method with local area limit


Non-Patent Citations (6)

DENG CAI et al.: "Graph Regularized Nonnegative Matrix Factorization for Data Representation", IEEE Transactions on Pattern Analysis and Machine Intelligence
HAIFENG LIU et al.: "A-Optimal Non-negative Projection for Image Representation", 2012 IEEE Conference on Computer Vision and Pattern Recognition
HAIFENG LIU et al.: "Local Coordinate Concept Factorization for Image Representation", IEEE Transactions on Neural Networks and Learning Systems
HAIFENG LIU et al.: "Locality-Constrained Concept Factorization", Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence
KWANG IN KIM et al.: "Semi-supervised Regression using Hessian Energy with an Application to Semi-supervised Dimensionality Reduction", Advances in Neural Information Processing Systems 22 (NIPS 2009)
ZHENG YANG et al.: "Matrix Completion for Cross-view Pairwise Constraint Propagation", ACM International Conference

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016261A (en) * 2017-04-11 2017-08-04 曲阜师范大学 Difference expression gene discrimination method based on joint constrained non-negative matrix decomposition
CN107016261B (en) * 2017-04-11 2019-10-11 曲阜师范大学 Difference expression gene discrimination method based on joint constrained non-negative matrix decomposition

Also Published As

Publication number Publication date
CN104951651B (en) 2017-08-15


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant