CN111325275A - Robust image classification method and device based on low-rank two-dimensional local discriminant map embedding - Google Patents

Robust image classification method and device based on low-rank two-dimensional local discriminant map embedding

Info

Publication number
CN111325275A
CN111325275A
Authority
CN
China
Prior art keywords
matrix
rank
class
low
constructing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010105990.0A
Other languages
Chinese (zh)
Other versions
CN111325275B (en)
Inventor
万鸣华
杨国为
詹天明
杨章静
张凡龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NANJING AUDIT UNIVERSITY
Original Assignee
NANJING AUDIT UNIVERSITY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NANJING AUDIT UNIVERSITY filed Critical NANJING AUDIT UNIVERSITY
Priority to CN202010105990.0A priority Critical patent/CN111325275B/en
Publication of CN111325275A publication Critical patent/CN111325275A/en
Application granted granted Critical
Publication of CN111325275B publication Critical patent/CN111325275B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2136Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering

Abstract

The invention discloses a robust image classification method and a robust image classification device based on low-rank two-dimensional local discriminant graph embedding. An image library construction unit obtains a standard image library and constructs a new standard image library to be classified. A first calculation unit calculates the difference J(P) between the intra-class divergence matrix S_w and the between-class divergence matrix S_b of the new standard images to be classified. A first image processing unit performs low-rank matrix decomposition on the acquired image X to obtain a low-rank matrix A and a sparse matrix E. A second calculation unit combines the results of the first calculation unit and the first image processing unit to obtain the final objective function. A feature matrix calculation unit solves the feature matrix Y. A nearest neighbor classifier unit classifies the images with a nearest neighbor classifier and outputs the classification results. The invention solves the technical problems of low classification accuracy and sensitivity to noise points and outliers in image classification based on the 2DLPP learning model, and improves recognition accuracy.

Description

Robust image classification method and device based on low-rank two-dimensional local discriminant map embedding
Technical Field
The invention relates to a robust image classification method and device based on low-rank two-dimensional local discriminant map embedding.
Background
In recent decades, to address the "curse of dimensionality" in machine learning, image processing, computer vision and pattern recognition, many projection-based linear feature extraction techniques have been developed, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and their extensions such as two-dimensional PCA (2DPCA) and two-dimensional LDA (2DLDA). However, linear techniques may fail to find the underlying nonlinear data structure; in practical applications nonlinear data include non-Gaussian and manifold-valued data. Representative nonlinear manifold learning techniques were therefore proposed to reveal hidden semantics while preserving the geometry of the manifold. Among manifold learning theories and techniques, Locally Linear Embedding (LLE), Isomap, and Laplacian eigenmaps (LE) are the most popular; however, all of these nonlinear techniques suffer from the out-of-sample generalization problem.
The graph embedding framework emphasizes the importance of constructing similarity matrices and provides a unified graph embedding formulation; linearizations of the Laplacian eigenmap such as Locality Preserving Projection (LPP) and Neighborhood Preserving Projection (NPP), and linearizations of LLE such as Neighborhood Preserving Embedding (NPE), were proposed to solve the out-of-sample generalization problem. Two-dimensional locality preserving projection (2DLPP), which like 2DPCA and 2DLDA acts directly on the two-dimensional image matrix, was proposed for linear dimensionality reduction. However, 2DLPP has several problems: first, it is an unsupervised method and does not use the discriminant information of the recognition task; second, it is very sensitive to outliers because it measures the similarity of projected data pairs with the L2 norm criterion; finally, it may suffer from singularity, so the eigenvalue problem may not be solvable.
In addition, in practical applications data are often corrupted by noise or outliers, which can obscure critical information and degrade algorithm performance. The one-dimensional vector-based and two-dimensional matrix-based methods described above use the L2 norm as a metric and are therefore sensitive to noise and outliers. Many dimension reduction methods using the L1 norm as the distance criterion have been proposed. For example, the L1-norm-based PCA (PCA-L1) overcomes sensitivity to noise and outliers by replacing the L2 norm in the optimization problem and searching for a local extremum; the rotation-invariant L1-norm PCA (R1-PCA) has properties not shared with PCA-L1; and 2DPCA-L1 generalizes the L1-norm-based PCA to the two-dimensional case. To improve the robustness of 2DLPP to outliers and corruption, a two-dimensional locality preserving projection based on the L1 norm (2DLPP-L1) was proposed, and a two-dimensional discriminant LPP method based on the L1 norm (2D-DLPP-L1) effectively preserves the spatial topology of the image.
L1-norm-based methods are superior to L2-norm-based methods for robust feature extraction; they search for sparse representations so that a test point can be represented by training samples of the same class. However, they cannot recover a clean matrix from noisy data. In contrast, the low-rank representation shows good performance in matrix recovery. A nuclear-norm-based 2DPCA (N-2DPCA) uses the nuclear norm to describe the reconstruction error and provides an improved representation of the image; however, in practical applications most of the above methods remain susceptible to illumination, corruption, or noise.
In recent years, many Low Rank Representation (LRR) methods have received attention for their robustness to noisy data. These methods assume that the data points lie on low-dimensional subspaces, so the representation matrix of the data is low-rank; LRR extends single-subspace clustering to multi-subspace clustering and can better capture the global structure of the data and recover its lowest-rank representation. For example, the patent application No. 201811269217.7 discloses a hyperspectral image classification method based on a low-rank sparse information combination network, which takes the low-rank and sparse information of the data into account during preprocessing and achieves a certain recognition accuracy. The patent application No. 201510884791.3 discloses a robust face image principal component feature extraction method and recognition device, which removes noise by simultaneously considering the low-rank and sparse characteristics of the face image training samples and obtains a good face recognition effect. However, these disclosed methods are unsupervised algorithms and do not take the class labels of the data into account.
Disclosure of Invention
In order to solve the above problems, the invention provides a robust image classification method and device based on low-rank two-dimensional local discriminant map embedding, which comprehensively consider the discriminant information in graph embedding and the low-rank property of the data in image classification. The invention addresses the technical problems of low classification accuracy and sensitivity to noise points and outliers in existing image classification based on the 2DLPP learning model; it both suppresses the noise in the samples and takes the class labels of the samples into account.
In order to achieve the technical purpose and achieve the technical effect, the invention is realized by the following technical scheme:
the robust image classification method based on low-rank two-dimensional local discriminant map embedding comprises the following steps:
1) acquiring a standard image library, and constructing a new standard image library to be classified;
2) for the new standard images to be classified, performing the following processing:
21) calculating the difference J(P) between the intra-class divergence matrix S_w and the between-class divergence matrix S_b of the new standard images to be classified:

J(P) = min_P tr[S_w - γS_b]

wherein P is a projection matrix, min_P denotes minimization of the loss function over P, and γ is an adjusting parameter with 0 < γ < 1;
22) carrying out low-rank matrix decomposition on the acquired image X to obtain a low-rank matrix A and a sparse matrix E:

min_{A,E} ||A||_* + β||E||_1,  s.t. X = A + E

wherein s.t. denotes the constraint, min_{A,E} denotes minimization of the loss function over A and E, ||·||_* is the nuclear norm, ||·||_1 is the L1 norm, and β is an adjustable parameter;
23) combining the results of 21) and 22) to obtain the final objective function:

min_{P,A,E} tr[S_w - γS_b] + α·rank(A) + β||E||_1
s.t. X = A + E

wherein α is an adjustable parameter and rank(A) denotes the rank of the matrix A;
24) from Y_i = P^T X_i, obtaining the feature matrix Y = (Y_1, …, Y_i, …, Y_N)^T;
wherein P^T is the transpose of P, Y_i is the i-th projected sample matrix, N is the total number of samples, and X_i is the i-th training sample matrix;
3) and classifying the images by using a nearest neighbor classifier, and outputting the classification result of the images.
Preferably, 21) specifically comprises the following steps:
211) constructing an intra-class compact graph through the following graph embedding formula:

S_w = Σ_{i,j} ||P^T X_i - P^T X_j||_2^2 W^c_{ij} = 2 tr[P^T X (L_c ⊗ I_n) X^T P]

wherein W^c_{ij} is nonzero only when X_i and X_j belong to the same class c and one is among the K_c nearest neighbors of the other; K_c is the number of same-class nearest neighbor samples of X_i, and π_c is the number of samples belonging to class c; ||·||_2 is the L2 norm; D_c and W_c are the diagonal matrix and the weight matrix, respectively; ⊗ denotes the Kronecker product of matrices; I_n is the identity matrix of order n; and L_c = D_c - W_c;
212) constructing an edge separation graph through the following graph embedding formula:

S_b = Σ_{i,j} ||P^T X_i - P^T X_j||_2^2 W^p_{ij} = 2 tr[P^T X (L_p ⊗ I_n) X^T P]

wherein W^p_{ij} is nonzero only for the K_p nearest data pairs (X_i, X_j) taken from different classes; K_p is the number of nearest neighbor samples of X_i from classes different from its own, and π_t is the number of samples belonging to class t; D_p is a diagonal matrix, W_p is a weight matrix, and L_p = D_p - W_p;
213) calculating the optimal J(P):

J(P) = min tr[S_w - γS_b]

where tr[·] denotes the trace of the matrix.
Preferably, 23) specifically comprises the following steps:
231) constructing the final objective function of the low-rank two-dimensional local discriminant map embedding algorithm:

min_{A,E,P} Σ_{i,j} ||Y_i - Y_j||_2^2 W^w_{ij} - γ Σ_{i,j} ||Y_i - Y_j||_2^2 W^b_{ij} + α||B||_* + β||E||_1
s.t. X = A + E, A = B, Y_i = P^T A_i

wherein min_{A,E,P} denotes minimization of the loss function over A, E and P; W^w is the intra-class weight matrix, W^b is the inter-class weight matrix, B is the noise-free matrix, and A_i is the i-th noise-free sample matrix;
232) constructing the augmented Lagrange multiplier function L(P, B, E, A, M_1, M_2, μ):

L = tr[P^T A (L_w - γL_b) A^T P] + α||B||_* + β||E||_1 + tr[M_1^T (X - A - E)] + tr[M_2^T (A - B)] + (μ/2)(||X - A - E||_F^2 + ||A - B||_F^2)

where μ > 0 is a penalty parameter, M_1 and M_2 are Lagrange multipliers, ||·||_F denotes the Frobenius norm, L_w denotes the intra-class Laplacian matrix, and L_b denotes the inter-class Laplacian matrix;
233) solving for the variables B, E, P and A.
Preferably, 3) specifically comprises the following steps:
31) defining d(Y_1, Y_2) as:

d(Y_1, Y_2) = Σ_{k=1}^{d} ||Y_1^k - Y_2^k||_2

wherein Y_1 and Y_2 are feature matrices, Y_1^k and Y_2^k are the k-th column feature vectors of Y_1 and Y_2, respectively, d is the feature dimension, and ||·||_2 is the L2 norm;
32) given the training feature matrices Y_1, Y_2, …, Y_N, each with a class label c_i, and a new test sample Y: if d(Y, Y_j) = min_j d(Y, Y_j) and Y_j ∈ c_l, then the classification result is Y ∈ c_l, wherein min_j denotes minimization of the loss function over j and c_l is the l-th class;
33) and solving the final classes of all the images, and outputting the classification result of the images.
The robust image classification device based on low-rank two-dimensional local discriminant map embedding comprises:
constructing an image library unit: the method comprises the steps of obtaining a standard image library and constructing a new standard image library to be classified;
the first calculation unit: for calculating the difference J(P) between the intra-class divergence matrix S_w and the between-class divergence matrix S_b of the new standard images to be classified;
a first image processing unit: the method comprises the steps of performing low-rank matrix decomposition on an acquired image X to obtain a low-rank matrix A and a sparse matrix E;
a second calculation unit: for combining the results of the first calculation unit and the first image processing unit to obtain the final objective function:

min_{P,A,E} tr[S_w - γS_b] + α·rank(A) + β||E||_1,  s.t. X = A + E
a feature matrix calculation unit: for obtaining the feature matrix Y = (Y_1, …, Y_i, …, Y_N)^T according to Y_i = P^T X_i;
A nearest neighbor classifier unit: the system is used for classifying the images by utilizing the nearest neighbor classifier and outputting the classification result of the images.
Preferably, the first calculation unit includes:
constructing an intra-class compact graph unit: the method is used for constructing the intra-class compact graph through a graph embedding formula;
constructing an edge separation graph unit: for constructing an edge separation graph by a graph embedding formula;
a calculation unit: for computing the optimal J(P) from the intra-class compact graph and the edge separation graph.
Preferably, the second calculation unit includes:
constructing a final objective function unit: the final objective function is used for constructing a low-rank two-dimensional local discriminant map embedding algorithm;
constructing an augmented Lagrange multiplier function unit: for constructing the augmented Lagrange multiplier function L(P, B, E, A, M_1, M_2, μ);
a solving unit: for solving the variables B, E, P and A. A structural sketch of how these units fit together is given below.
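For orientation, the units above can be pictured as methods of a single class. The following is a hedged structural sketch only: the class name and method names are hypothetical, and fit() uses a simple eigen-decomposition placeholder instead of the LR-2DLDGE optimization described in the detailed description below, so that the example stays short and runnable.

```python
import numpy as np

class RobustImageClassifier:
    """Structural sketch of the units; not the patented implementation."""

    def __init__(self, d=10):
        self.d = d                                    # number of projection directions

    def build_library(self, images, labels):          # image library construction unit
        self.X = [np.asarray(im, dtype=float) for im in images]
        self.labels = list(labels)

    def fit(self):                                     # first/second calculation units (placeholder)
        # Placeholder projection: top-d eigenvectors of the average row scatter,
        # standing in for the P obtained from the LR-2DLDGE objective.
        C = sum(Xi @ Xi.T for Xi in self.X) / len(self.X)
        _, vecs = np.linalg.eigh(C)
        self.P = vecs[:, -self.d:]                     # m x d projection matrix
        self.Y = [self.P.T @ Xi for Xi in self.X]      # feature matrix calculation unit

    def predict(self, Xi):                             # nearest neighbor classifier unit
        Y = self.P.T @ np.asarray(Xi, dtype=float)
        dists = [np.sum(np.linalg.norm(Y - Yj, axis=0)) for Yj in self.Y]
        return self.labels[int(np.argmin(dists))]
```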
A computer readable storage medium comprising a computer program which, when run on a computer, causes the computer to perform the method of any one of the above.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of the above.
The invention has the beneficial effects that:
Firstly, in order to overcome the sensitivity of the 2DLPP method, the invention combines low-rank learning and robust learning, introduces low rank into 2DLPP, and provides a new dimension reduction method called low-rank two-dimensional local discriminant graph embedding (LR-2DLDGE), which comprehensively considers the discriminant information in graph embedding and the low-rank property of the data in image classification. First, an intra-class graph and an inter-class graph are constructed, which retain local neighborhood discriminant information. Second, the given data are divided into a low-rank feature coding part and an error part that keeps the noise sparse. A large number of experiments on several standard image databases were performed to verify the performance of the proposed method; the experimental results show that the method is strongly robust to noise points in the image.
Secondly, image recognition features are extracted with the robust image classification model based on low-rank two-dimensional local discriminant map embedding and a designed optimization algorithm. On one hand, the LR-2DLDGE method works directly on the two-dimensional image matrix, so the image does not need to be converted into a vector; it extracts features with a graph embedding method, so more image features can be extracted, and its intra-class covariance matrix is invertible, so the small-sample-size problem does not arise. On the other hand, the LR-2DLDGE algorithm adopts low-rank learning, which handles well the drop in recognition rate caused by changes of illumination, expression, posture and the like, as well as the drop that occurs when the relation between distant data sample points is weak or the overlap between neighborhood sample points is insufficient.
Thirdly, the invention uses the nearest neighbor classifier for classification, which effectively improves the classification accuracy of the images and promotes further mining of their sparse characteristics.
Fourth, existing subspace learning, graph embedding learning and low-rank learning models cannot solve the technical problems of low classification accuracy and sensitivity to noise points and outliers in image classification. The invention solves these problems for image classification based on the 2DLPP learning model, improves recognition accuracy, can be used in the fields of national public safety, social safety, information safety, financial safety, human-computer interaction and the like, and has good application prospects.
Drawings
FIG. 1 is a schematic diagram of a robust image classification method based on low-rank two-dimensional local discriminant image embedding according to the present invention;
FIG. 2 is a flow chart of the robust image classification method based on low-rank two-dimensional local discriminant map embedding of the present invention;
FIG. 3 is 10 images of a sub-class in the ORL face image library;
FIG. 4 is a partial image of the USPS handwriting library;
fig. 5 is a partial image in the PolyU palm print library.
Detailed Description
The present invention will be better understood and implemented by those skilled in the art by the following detailed description of the technical solution of the present invention with reference to the accompanying drawings and specific examples, which are not intended to limit the present invention.
The robust image classification method based on low-rank two-dimensional local discriminant map embedding comprises the following steps:
1) acquiring a standard image library, and constructing a new standard image library to be classified;
2) for the new standard images to be classified, performing the following processing:
21) calculating the difference J(P) between the intra-class divergence matrix S_w and the between-class divergence matrix S_b of the new standard images to be classified:

J(P) = min_P tr[S_w - γS_b]

wherein P is a projection matrix, min_P denotes minimization of the loss function over P, and γ is an adjusting parameter with 0 < γ < 1;
22) carrying out low-rank matrix decomposition on the acquired image X to obtain a low-rank matrix A and a sparse matrix E:

min_{A,E} ||A||_* + β||E||_1,  s.t. X = A + E

wherein s.t. denotes the constraint, min_{A,E} denotes minimization of the loss function over A and E, ||·||_* is the nuclear norm, ||·||_1 is the L1 norm, and β is an adjustable parameter;
23) combining the results of 21) and 22) to obtain the final objective function:

min_{P,A,E} tr[S_w - γS_b] + α·rank(A) + β||E||_1
s.t. X = A + E

wherein α is an adjustable parameter and rank(A) denotes the rank of the matrix A;
24) from Y_i = P^T X_i, obtaining the feature matrix Y = (Y_1, …, Y_i, …, Y_N)^T;
wherein P^T is the transpose of P, Y_i is the i-th projected sample matrix, N is the total number of samples, and X_i is the i-th training sample matrix;
3) and classifying the images by using a nearest neighbor classifier, and outputting the classification result of the images.
The method extracts image recognition features with the robust image classification model based on low-rank two-dimensional local discriminant map embedding and a designed optimization algorithm. On one hand, the LR-2DLDGE method works directly on the two-dimensional image matrix, so the image does not need to be converted into a vector; it extracts features with a graph embedding method, so more image features can be extracted, and its intra-class covariance matrix is invertible, so the small-sample-size problem does not arise. On the other hand, the LR-2DLDGE algorithm adopts low-rank learning, which handles well the drop in recognition rate caused by changes of illumination, expression, posture and the like, as well as the drop that occurs when the relation between distant data sample points is weak or the overlap between neighborhood sample points is insufficient. This is described in detail below with reference to FIGS. 1-5.
First, data acquisition and preprocessing are performed: a standard image library (such as the AR face, USPS handwriting and PolyU palm print libraries) is acquired; taking the acquired standard AR face image library as an example, the images are cropped to construct a new standard image library to be classified.
Secondly, feature extraction and feature selection are carried out on the new standard face images to be classified: as shown in FIG. 2, the face image library for training and testing is obtained, and the optimal image features are obtained through the low-rank two-dimensional local discriminant map embedding feature extraction model, which specifically includes:
21) calculating the difference J(P) between the intra-class divergence matrix S_w and the between-class divergence matrix S_b of the new standard images to be classified:

J(P) = min_P tr[S_w - γS_b]

wherein P is a projection matrix, min_P denotes minimization of the loss function over P, and γ is an adjusting parameter with 0 < γ < 1;
preferably, 21) specifically comprises the following steps:
211) constructing an intra-class compact graph: first an intra-class graph is introduced to compress the intra-class data, and the intra-class compact graph is constructed through the following graph embedding formula:

S_w = Σ_{i,j} ||P^T X_i - P^T X_j||_2^2 W^c_{ij} = 2 tr[P^T X (L_c ⊗ I_n) X^T P]

wherein W^c_{ij} is nonzero only when X_i and X_j belong to the same class c and one is among the K_c nearest neighbors of the other; K_c is the number of same-class nearest neighbor samples of X_i, and π_c is the number of samples belonging to class c; ||·||_2 is the L2 norm; D_c and W_c are the diagonal matrix and the weight matrix, respectively; ⊗ denotes the Kronecker product of matrices; I_n is the identity matrix of order n; and L_c = D_c - W_c.
212) constructing an edge separation graph through the following graph embedding formula:

S_b = Σ_{i,j} ||P^T X_i - P^T X_j||_2^2 W^p_{ij} = 2 tr[P^T X (L_p ⊗ I_n) X^T P]

wherein W^p_{ij} is nonzero only for the K_p nearest data pairs (X_i, X_j), i.e. pairs of samples not in the same class; K_p is the number of nearest neighbor samples of X_i from classes different from its own, and π_t is the number of samples belonging to class t; D_p is a diagonal matrix, W_p is a weight matrix, and L_p = D_p - W_p.
213) embedding: the optimal projection can be obtained by:

J(P) = min tr[S_w - γS_b]

where tr[·] denotes the trace of the matrix.
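For illustration, the construction of the two graphs and the evaluation of the embedding term can be sketched as follows. This is a minimal sketch, not the patented implementation: the 0/1 neighborhood weights, the Frobenius-norm distances used for neighbor search, and the helper names build_graphs and embedding_term are assumptions made for readability; the patent defines W^c and W^p through its own weighting formulas.

```python
import numpy as np

def build_graphs(X, labels, Kc=3, Kp=3):
    """Intra-class compact graph Wc and inter-class (edge separation) graph Wp."""
    N = len(X)
    D = np.array([[np.linalg.norm(X[i] - X[j], 'fro') for j in range(N)] for i in range(N)])
    Wc = np.zeros((N, N))
    Wp = np.zeros((N, N))
    for i in range(N):
        same = [j for j in range(N) if j != i and labels[j] == labels[i]]
        diff = [j for j in range(N) if labels[j] != labels[i]]
        for j in sorted(same, key=lambda j: D[i, j])[:Kc]:   # Kc same-class neighbors
            Wc[i, j] = Wc[j, i] = 1.0
        for j in sorted(diff, key=lambda j: D[i, j])[:Kp]:   # Kp different-class neighbors
            Wp[i, j] = Wp[j, i] = 1.0
    return Wc, Wp

def embedding_term(P, X, Wc, Wp, gamma=0.5):
    """J(P) evaluated as the weighted pairwise form of tr[S_w - gamma*S_b]."""
    Y = [P.T @ Xi for Xi in X]
    N = len(X)
    Sw = sum(Wc[i, j] * np.linalg.norm(Y[i] - Y[j], 'fro') ** 2 for i in range(N) for j in range(N))
    Sb = sum(Wp[i, j] * np.linalg.norm(Y[i] - Y[j], 'fro') ** 2 for i in range(N) for j in range(N))
    return Sw - gamma * Sb
```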
In order to improve the precision of the sparse representation in the face recognition process, 22) low-rank matrix decomposition is performed on the acquired image X to obtain a low-rank matrix A and a sparse matrix E:
assuming that the matrix X can be decomposed into two matrices, i.e. X = A + E, where A is a low-rank matrix and E is a sparse (noise) matrix, low-rank matrix recovery aims to find a low-rank A that approximately represents X, and it can be considered as the following optimization problem:

min_{A,E} rank(A) + λ||E||_0,  s.t. X = A + E

wherein min_{A,E} denotes minimization of the loss function over A and E, λ is an adjustable parameter, and ||·||_0 is the L0 norm.
The above is an NP-hard problem; if the matrix A is low-rank and E is sparse, it can be relaxed and solved by the following formulation:

min_{A,E} ||A||_* + λ||E||_1,  s.t. X = A + E

wherein s.t. denotes the constraint, ||A||_* is the nuclear norm of A, which approximately represents the rank of A, and ||E||_1 is the L1 norm, which approximately substitutes for ||E||_0.
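The relaxed problem above is the classical robust PCA formulation. A minimal sketch of solving it with an inexact augmented Lagrange multiplier scheme is given below; the default λ = 1/√max(m, n), the μ and ρ schedule, and the update order are standard choices from the RPCA literature, not values taken from this patent.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, eps):
    """Entrywise soft threshold (shrinkage): proximal operator of eps * L1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - eps, 0.0)

def rpca(X, lam=None, mu=1.0, rho=1.5, n_iter=200, tol=1e-7):
    """Split X into a low-rank A and a sparse E via inexact ALM."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    A = np.zeros_like(X); E = np.zeros_like(X); M1 = np.zeros_like(X)
    for _ in range(n_iter):
        A = svt(X - E + M1 / mu, 1.0 / mu)       # nuclear-norm step
        E = soft(X - A + M1 / mu, lam / mu)      # L1 step
        R = X - A - E                            # constraint residual
        M1 += mu * R                             # multiplier update
        mu *= rho
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(X, 'fro'):
            break
    return A, E
```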
23) combining the results of 21) and 22) to obtain the final objective function:

min_{P,A,E} tr[S_w - γS_b] + α·rank(A) + β||E||_1
s.t. X = A + E

wherein α is an adjustable parameter and rank(A) denotes the rank of the matrix A;
preferably, 23) specifically comprises the following steps:
231) constructing the final objective function of the low-rank two-dimensional local discriminant map embedding algorithm:

min_{A,E,P} Σ_{i,j} ||Y_i - Y_j||_2^2 W^w_{ij} - γ Σ_{i,j} ||Y_i - Y_j||_2^2 W^b_{ij} + α||B||_* + β||E||_1
s.t. X = A + E, A = B, Y_i = P^T A_i

wherein min_{A,E,P} denotes minimization of the loss function over A, E and P; W^w is the intra-class weight matrix, W^b is the inter-class weight matrix, B is the noise-free matrix, and A_i is the i-th noise-free sample matrix;
232) constructing the augmented Lagrange multiplier function L(P, B, E, A, M_1, M_2, μ). The augmented Lagrange multiplier function of the LR-2DLDGE algorithm is:

L = tr[P^T A (L_w - γL_b) A^T P] + α||B||_* + β||E||_1 + tr[M_1^T (X - A - E)] + tr[M_2^T (A - B)] + (μ/2)(||X - A - E||_F^2 + ||A - B||_F^2)

where μ > 0 is a penalty parameter, M_1 and M_2 are Lagrange multipliers, ||·||_F denotes the Frobenius norm, L_w denotes the intra-class Laplacian matrix, and L_b denotes the inter-class Laplacian matrix;
233) solving for variables B, E, P and A:
(1) solving the variable B:
fixing all variables except B, the subproblem for B can be expressed as:

min_B α||B||_* + (μ/2)||B - (A + M_2/μ)||_F^2

The solution can be found by singular value decomposition (SVD): writing A + M_2/μ = UΣV^T with Σ = diag(σ_1, …, σ_r), the update is

B = U diag(max(σ_1 - α/μ, 0), …, max(σ_r - α/μ, 0)) V^T

wherein U is an m×m unitary matrix, Σ is a positive semidefinite m×n diagonal matrix, V is an n×n unitary matrix, σ_j are the positive singular values, and r is the rank of the matrix.
(2) solving the variable E:
fixing all variables except E, the subproblem for E can be expressed as:

min_E β||E||_1 + (μ/2)||E - (X - A + M_1/μ)||_F^2

This can be solved directly with a shrinkage operator. Defining the soft threshold operator S_ε[x] = sign(x)·max(|x| - ε, 0), the solution has the following closed form:

E = S_{β/μ}[X - A + M_1/μ]

wherein sign is the sign function and ε is a constant.
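Taken together, the B-step and E-step are exactly the two proximal operators of the RPCA sketch above (svt and soft); the specific arguments A + M_2/μ and X - A + M_1/μ follow the reconstruction given here and are an assumption about the ALM bookkeeping.

```python
def update_B_E(X, A, M1, M2, mu, alpha, beta):
    """One pass of the B- and E-subproblems, reusing svt and soft from the sketch above."""
    B = svt(A + M2 / mu, alpha / mu)        # low-rank step: singular value thresholding
    E = soft(X - A + M1 / mu, beta / mu)    # sparse-noise step: entrywise soft threshold
    return B, E
```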
(3) solving the variable P:
fixing all variables except P, the subproblem for P can be expressed as the minimization of the graph-embedding term over P:

min_P tr[P^T (X - E)(L_w - γL_b)(X - E)^T P]

One constraint is added as follows:

P^T (X - E)(D_w - D_b)(X - E)^T P = 1

With this constraint the final problem is a constrained trace minimization, and its solution can be found from the following generalized eigenvalue problem:

(X - E)(L_w - γL_b)(X - E)^T P = Λ (X - E)(D_w - D_b)(X - E)^T P

wherein Λ denotes the set of eigenvalues, L_w represents the intra-class Laplacian matrix, I represents the identity matrix, D_w represents the intra-class diagonal matrix, and D_b represents the inter-class diagonal matrix.
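A hedged sketch of this P-step: under the reconstruction above it is a generalized eigenvalue problem, and the eigenvectors belonging to the smallest eigenvalues form the projection. How the two scatter matrices are assembled, the added ridge term, and the name solve_projection are assumptions; scipy.linalg.eigh needs the right-hand matrix to be positive definite, which the ridge enforces.

```python
import numpy as np
from scipy.linalg import eigh

def solve_projection(XE, Lw, Lb, Dw, Db, gamma, d, ridge=1e-6):
    """Return the d generalized eigenvectors with the smallest eigenvalues."""
    Gl = XE @ (Lw - gamma * Lb) @ XE.T            # graph-embedding scatter
    Gd = XE @ (Dw - Db) @ XE.T                    # constraint scatter
    Gd = Gd + ridge * np.eye(Gd.shape[0])         # keep the right-hand side positive definite
    vals, vecs = eigh(Gl, Gd)                     # generalized symmetric eigenproblem, ascending
    return vecs[:, :d]                            # smallest eigenvalues give the projection P
```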
(4) solving the variable A:
fixing all variables except A, the subproblem for A is the part of the augmented Lagrange function that depends on A. Setting the derivative ∂L/∂A = 0 gives a linear matrix equation of the Sylvester form

C_1 A + A C_2 = C_3

where C_1, C_2 and C_3 collect the terms that are fixed at this step (they are assembled from P, L_w - γL_b, M_1, M_2, μ, X, E and B); A is therefore obtained essentially by solving a Sylvester equation.
3) The invention utilizes the nearest neighbor classifier to classify the images, and outputs the classification result of the images. Preferably, 3) specifically comprises the following steps:
31) defining d(Y_1, Y_2) as:

d(Y_1, Y_2) = Σ_{k=1}^{d} ||Y_1^k - Y_2^k||_2

wherein Y_1 and Y_2 are feature matrices, Y_1^k and Y_2^k are the k-th column feature vectors of Y_1 and Y_2, respectively, d is the feature dimension, and ||·||_2 is the L2 norm;
32) given the training feature matrices Y_1, Y_2, …, Y_N, each with a class label c_i, and a new test sample Y: if d(Y, Y_j) = min_j d(Y, Y_j) and Y_j ∈ c_l, then the classification result is Y ∈ c_l, wherein min_j denotes minimization of the loss function over j and c_l is the l-th class; a sketch of this decision rule is given below, after step 33);
33) according to the above 31) and 32), solving the final classes of all face images, and outputting the classification results of the face images.
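A minimal sketch of the nearest neighbor decision rule of steps 31) to 33), using the column-wise distance d(Y_1, Y_2); the function names matrix_distance and nn_classify are illustrative only.

```python
import numpy as np

def matrix_distance(Y1, Y2):
    """d(Y1, Y2): sum over columns k of the L2 norm of Y1[:, k] - Y2[:, k]."""
    return float(np.sum(np.linalg.norm(Y1 - Y2, axis=0)))

def nn_classify(Y, train_features, train_labels):
    """Assign Y the class label of the nearest training feature matrix under d."""
    dists = [matrix_distance(Y, Yj) for Yj in train_features]
    return train_labels[int(np.argmin(dists))]
```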
The invention solves the technical problems of low classification precision and noise points and singular points in the image classification based on the 2DLPP learning model, and improves the identification precision.
Correspondingly, a robust image classification device based on low-rank two-dimensional local discriminant map embedding includes:
constructing an image library unit: the method comprises the steps of obtaining a standard image library and constructing a new standard image library to be classified;
the first calculation unit: for calculating the difference J(P) between the intra-class divergence matrix S_w and the between-class divergence matrix S_b of the new standard images to be classified;
a first image processing unit: the method comprises the steps of performing low-rank matrix decomposition on an acquired image X to obtain a low-rank matrix A and a sparse matrix E;
a second calculation unit: and combining the results of the first computing unit and the first image processing unit to obtain a final objective function:
Figure BDA0002387835800000161
s.t.X=A+E
a feature matrix calculation unit: for obtaining the feature matrix Y = (Y_1, …, Y_i, …, Y_N)^T according to Y_i = P^T X_i;
A nearest neighbor classifier unit: the system is used for classifying the images by utilizing the nearest neighbor classifier and outputting the classification result of the images.
Preferably, the first calculation unit includes:
constructing an intra-class compact graph unit: the method is used for constructing the intra-class compact graph through a graph embedding formula;
constructing an edge separation graph unit: for constructing an edge separation graph by a graph embedding formula;
a calculation unit: for computing the optimal J(P) from the intra-class compact graph and the edge separation graph.
Preferably, the second calculation unit includes:
constructing a final objective function unit: the final objective function is used for constructing a low-rank two-dimensional local discriminant map embedding algorithm;
constructing an augmented Lagrange multiplier function unit: for constructing the augmented Lagrange multiplier function L(P, B, E, A, M_1, M_2, μ);
a solving unit: for solving the variables B, E, P and A.
A computer-readable storage medium comprising a computer program which, when run on a computer, causes the computer to perform the method of any one of the above.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of the above.
In order to overcome the sensitivity of the 2DLPP method, the invention combines low-rank learning and robust learning, introduces low rank into 2DLPP, and provides a new dimension reduction method called low-rank two-dimensional local discriminant graph embedding (LR-2DLDGE), which comprehensively considers the discriminant information in graph embedding and the low-rank property of the data in image classification. First, an intra-class graph and an inter-class graph are constructed, which retain local neighborhood discriminant information. Second, the given data are divided into a low-rank feature coding part and an error part that keeps the noise sparse. A large number of experiments on several standard image databases were performed to verify the performance of the proposed method.
The following experimental analysis is carried out on three commonly used databases and compared with the prior art. In order to verify the effectiveness of the low-rank two-dimensional local discriminant graph embedding algorithm in image recognition, recognition experiments are carried out on the AR face, USPS handwriting and PolyU palm print image databases, and the classification and recognition performance of the algorithm is compared with 2DPCA, 2DPCA-L1, 2DLPP, 2DLPP-L1 and LRR, where all algorithms are run 10 times and use the Euclidean distance and the nearest neighbor classifier. The experimental environment is: Dell PC, CPU: Athlon(tm) 64 Processor, memory: 1024 M, Matlab 7.01.
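Each experiment below follows the same protocol: randomly select l training samples per class, classify the remaining samples, and average the recognition rate over repeated runs. A minimal sketch of that protocol is given here; the helper name evaluate is an assumption, and it reuses the illustrative nn_classify function sketched in the classification step above.

```python
import numpy as np

def evaluate(features, labels, l, runs=10, seed=0):
    """Average recognition rate over random l-per-class train/test splits."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    accs = []
    for _ in range(runs):
        train_idx, test_idx = [], []
        for c in np.unique(labels):
            idx = rng.permutation(np.where(labels == c)[0])
            train_idx += list(idx[:l])
            test_idx += list(idx[l:])
        train_feats = [features[j] for j in train_idx]
        train_labs = [labels[j] for j in train_idx]
        correct = sum(nn_classify(features[i], train_feats, train_labs) == labels[i]
                      for i in test_idx)
        accs.append(correct / len(test_idx))
    return float(np.mean(accs))
```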
1. Experiments on the ORL face database
The ORL face database consists of 40 people, each with 10 grayscale images of size 112 × 92. Some images were taken at different times, with different degrees of variation in facial expression, facial details, pose and scale, such as smiling or not, eyes open or closed, glasses worn or not, in-depth and in-plane rotations of up to 20°, and scale changes of up to 10%.
For each person, l (l = 2, 3, 4, 5) images were selected for training, and the remaining 10 - l images were used for testing; the test results are shown in Table 1:
TABLE 1 maximum average recognition rate results for different algorithms on ORL face library
2. Experiments on the USPS handwriting database
The USPS handwritten digital image library (http://www.cs.toronto.edu/~roweis/data.html) contains images of the digits 0-9, with 1100 samples per digit; the image size is 16 × 16. For the experiments, 100 samples of each digit are selected; some images of the digit "2" are shown in FIG. 4.
In the experiment, l (l = 20, 30, 40, 50) samples per digit were randomly selected as training samples, and the remaining 100 - l were used as test samples. The maximum recognition rates and the corresponding dimensions are listed in Table 2 below.
TABLE 2 maximum average recognition rate results for different algorithms on USPS handwriting library
3. Experiments on PolyU palm print database
In the experiment, a sub-library of the PolyU palm print database of The Hong Kong Polytechnic University was selected, which includes 600 images of 100 different palms, 6 images each. The 6 images were captured in two sessions: the first 3 in the first session and the last 3 in the second session, with an average interval of 2 months between the two sessions. The central region of each image was cropped, scaled to 128 × 128 pixels and histogram equalized.
Training is performed on the 3 images from the first session and testing on the 3 images from the second session; Table 3 shows the maximum recognition rates and the corresponding dimensions.
TABLE 3 maximum average recognition results for different algorithms on the PolyU palm print library
The above experimental analysis shows that the invention can effectively improve image classification accuracy and has the advantage of a high recognition rate; it can be used in the fields of national public safety, social safety, information safety, financial safety, human-computer interaction and the like, and has good application prospects.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. The robust image classification method based on low-rank two-dimensional local discriminant map embedding is characterized by comprising the following steps of:
1) acquiring a standard image library, and constructing a new standard image library to be classified;
2) and (3) aiming at the new standard image to be classified, performing the following processing:
21) calculating the difference J(P) between the intra-class divergence matrix S_w and the between-class divergence matrix S_b of the new standard images to be classified:

J(P) = min_P tr[S_w - γS_b]

wherein P is a projection matrix, min_P denotes minimization of the loss function over P, and γ is an adjusting parameter with 0 < γ < 1;
22) carrying out low-rank matrix decomposition on the acquired image X to obtain a low-rank matrix A and a sparse matrix E:

min_{A,E} ||A||_* + β||E||_1,  s.t. X = A + E

wherein s.t. denotes the constraint, min_{A,E} denotes minimization of the loss function over A and E, ||·||_* is the nuclear norm, ||·||_1 is the L1 norm, and β is an adjustable parameter;
23) combining the results of 21) and 22) to obtain the final objective function:

min_{P,A,E} tr[S_w - γS_b] + α·rank(A) + β||E||_1
s.t. X = A + E

wherein α is an adjustable parameter and rank(A) denotes the rank of the matrix A;
24) from Y_i = P^T X_i, obtaining the feature matrix Y = (Y_1, …, Y_i, …, Y_N)^T;
wherein P^T is the transpose of P, Y_i is the i-th projected sample matrix, N is the total number of samples, and X_i is the i-th training sample matrix;
3) and classifying the images by using a nearest neighbor classifier, and outputting the classification result of the images.
2. The robust image classification method based on low-rank two-dimensional local discriminant map embedding of claim 1, wherein 21) specifically comprises the following steps:
211) constructing an intra-class compact graph through the following graph embedding formula:

S_w = Σ_{i,j} ||P^T X_i - P^T X_j||_2^2 W^c_{ij} = 2 tr[P^T X (L_c ⊗ I_n) X^T P]

wherein W^c_{ij} is nonzero only when X_i and X_j belong to the same class c and one is among the K_c nearest neighbors of the other; K_c is the number of same-class nearest neighbor samples of X_i, and π_c is the number of samples belonging to class c; ||·||_2 is the L2 norm; D_c and W_c are the diagonal matrix and the weight matrix, respectively; ⊗ denotes the Kronecker product of matrices; I_n is the identity matrix of order n; and L_c = D_c - W_c;
212) constructing an edge separation graph through the following graph embedding formula:

S_b = Σ_{i,j} ||P^T X_i - P^T X_j||_2^2 W^p_{ij} = 2 tr[P^T X (L_p ⊗ I_n) X^T P]

wherein W^p_{ij} is nonzero only for the K_p nearest data pairs (X_i, X_j) taken from different classes; K_p is the number of nearest neighbor samples of X_i from classes different from its own, and π_t is the number of samples belonging to class t; D_p is a diagonal matrix, W_p is a weight matrix, and L_p = D_p - W_p;
213) calculating the optimal J(P):

J(P) = min tr[S_w - γS_b]

where tr[·] denotes the trace of the matrix.
3. The robust image classification method based on low-rank two-dimensional local discriminant map embedding of claim 2, wherein 23) specifically comprises the following steps:
231) constructing the final objective function of the low-rank two-dimensional local discriminant map embedding algorithm:

min_{A,E,P} Σ_{i,j} ||Y_i - Y_j||_2^2 W^w_{ij} - γ Σ_{i,j} ||Y_i - Y_j||_2^2 W^b_{ij} + α||B||_* + β||E||_1
s.t. X = A + E, A = B, Y_i = P^T A_i

wherein min_{A,E,P} denotes minimization of the loss function over A, E and P; W^w is the intra-class weight matrix, W^b is the inter-class weight matrix, B is the noise-free matrix, and A_i is the i-th noise-free sample matrix;
232) constructing the augmented Lagrange multiplier function L(P, B, E, A, M_1, M_2, μ):

L = tr[P^T A (L_w - γL_b) A^T P] + α||B||_* + β||E||_1 + tr[M_1^T (X - A - E)] + tr[M_2^T (A - B)] + (μ/2)(||X - A - E||_F^2 + ||A - B||_F^2)

where μ > 0 is a penalty parameter, M_1 and M_2 are Lagrange multipliers, ||·||_F denotes the Frobenius norm, L_w denotes the intra-class Laplacian matrix, and L_b denotes the inter-class Laplacian matrix;
233) solving for the variables B, E, P and A.
4. The robust image classification method based on low-rank two-dimensional local discriminant map embedding of claim 3, wherein 3) comprises the following steps:
31) defining d(Y_1, Y_2) as:

d(Y_1, Y_2) = Σ_{k=1}^{d} ||Y_1^k - Y_2^k||_2

wherein Y_1 and Y_2 are feature matrices, Y_1^k and Y_2^k are the k-th column feature vectors of Y_1 and Y_2, respectively, d is the feature dimension, and ||·||_2 is the L2 norm;
32) given the training feature matrices Y_1, Y_2, …, Y_N, each with a class label c_i, and a new test sample Y: if d(Y, Y_j) = min_j d(Y, Y_j) and Y_j ∈ c_l, then the classification result is Y ∈ c_l, wherein min_j denotes minimization of the loss function over j and c_l is the l-th class;
33) and solving the final classes of all the images, and outputting the classification result of the images.
5. The robust image classification device based on low-rank two-dimensional local discriminant map embedding, characterized by comprising:
constructing an image library unit: the method comprises the steps of obtaining a standard image library and constructing a new standard image library to be classified;
the first calculation unit: for calculating the difference J(P) between the intra-class divergence matrix S_w and the between-class divergence matrix S_b of the new standard images to be classified;
a first image processing unit: the method comprises the steps of performing low-rank matrix decomposition on an acquired image X to obtain a low-rank matrix A and a sparse matrix E;
a second calculation unit: for combining the results of the first calculation unit and the first image processing unit to obtain the final objective function:

min_{P,A,E} tr[S_w - γS_b] + α·rank(A) + β||E||_1,  s.t. X = A + E
a feature matrix calculation unit: for obtaining the feature matrix Y = (Y_1, …, Y_i, …, Y_N)^T according to Y_i = P^T X_i;
A nearest neighbor classifier unit: the system is used for classifying the images by utilizing the nearest neighbor classifier and outputting the classification result of the images.
6. The robust image classification device based on low-rank two-dimensional local discriminant map embedding of claim 5, wherein the first calculation unit comprises:
constructing an intra-class compact graph unit: the method is used for constructing the intra-class compact graph through a graph embedding formula;
constructing an edge separation graph unit: for constructing an edge separation graph by a graph embedding formula;
a calculation unit: for computing the optimal J(P) from the intra-class compact graph and the edge separation graph.
7. The robust image classification device based on low-rank two-dimensional local discriminant map embedding of claim 6, wherein the second calculation unit comprises:
constructing a final objective function unit: the final objective function is used for constructing a low-rank two-dimensional local discriminant map embedding algorithm;
constructing an augmented Lagrange multiplier function unit: for constructing the augmented Lagrange multiplier function L(P, B, E, A, M_1, M_2, μ);
A solving unit: for solving the variables B, E, P and A.
8. A computer-readable storage medium, comprising a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1-4.
9. A computer program product comprising instructions for causing a computer to perform the method according to any one of claims 1-4 when the computer program product is run on the computer.
CN202010105990.0A 2020-02-20 2020-02-20 Robust image classification method and device based on low-rank two-dimensional local identification map embedding Active CN111325275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010105990.0A CN111325275B (en) 2020-02-20 2020-02-20 Robust image classification method and device based on low-rank two-dimensional local identification map embedding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010105990.0A CN111325275B (en) 2020-02-20 2020-02-20 Robust image classification method and device based on low-rank two-dimensional local identification map embedding

Publications (2)

Publication Number Publication Date
CN111325275A true CN111325275A (en) 2020-06-23
CN111325275B CN111325275B (en) 2023-05-23

Family

ID=71171549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010105990.0A Active CN111325275B (en) 2020-02-20 2020-02-20 Robust image classification method and device based on low-rank two-dimensional local identification map embedding

Country Status (1)

Country Link
CN (1) CN111325275B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364902A (en) * 2020-10-30 2021-02-12 太原理工大学 Feature selection learning method based on self-adaptive similarity
CN114022701A (en) * 2021-10-21 2022-02-08 南京审计大学 Image classification method based on neighbor supervision discrete discrimination Hash
CN115019368A (en) * 2022-06-09 2022-09-06 南京审计大学 Face recognition feature extraction method in audit investigation based on 2DESDLPP

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008394A (en) * 2014-05-20 2014-08-27 西安电子科技大学 Semi-supervision hyperspectral data dimension descending method based on largest neighbor boundary principle
CN105469117A (en) * 2015-12-03 2016-04-06 苏州大学 Image recognition method and device based on robust characteristic extraction
CN105469034A (en) * 2015-11-17 2016-04-06 西安电子科技大学 Face recognition method based on weighted diagnostic sparseness constraint nonnegative matrix decomposition
CN106056131A (en) * 2016-05-19 2016-10-26 西安电子科技大学 Image feature extraction method based on LRR-LDA

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008394A (en) * 2014-05-20 2014-08-27 西安电子科技大学 Semi-supervision hyperspectral data dimension descending method based on largest neighbor boundary principle
CN105469034A (en) * 2015-11-17 2016-04-06 西安电子科技大学 Face recognition method based on weighted diagnostic sparseness constraint nonnegative matrix decomposition
CN105469117A (en) * 2015-12-03 2016-04-06 苏州大学 Image recognition method and device based on robust characteristic extraction
CN106056131A (en) * 2016-05-19 2016-10-26 西安电子科技大学 Image feature extraction method based on LRR-LDA

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MINGHUA WAN et al.: "Two-Dimensional Local Graph Embedding Analysis (2DLGEA) For Face Recognition"
WAN Minghua et al.: "Multi-manifold local graph embedding algorithm under the maximum margin criterion framework (MLGE/MMC)"
WANG Sheng et al.: "Face recognition algorithm based on locality preserving classification projections"

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364902A (en) * 2020-10-30 2021-02-12 太原理工大学 Feature selection learning method based on self-adaptive similarity
CN112364902B (en) * 2020-10-30 2022-11-15 太原理工大学 Feature selection learning method based on self-adaptive similarity
CN114022701A (en) * 2021-10-21 2022-02-08 南京审计大学 Image classification method based on neighbor supervision discrete discrimination Hash
CN114022701B (en) * 2021-10-21 2022-06-24 南京审计大学 Image classification method based on neighbor supervision discrete discrimination Hash
CN115019368A (en) * 2022-06-09 2022-09-06 南京审计大学 Face recognition feature extraction method in audit investigation based on 2DESDLPP
CN115019368B (en) * 2022-06-09 2023-09-12 南京审计大学 Face recognition feature extraction method in audit investigation

Also Published As

Publication number Publication date
CN111325275B (en) 2023-05-23


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wu Qiyang

Inventor after: Wan Minghua

Inventor after: Yang Guowei

Inventor after: Zhan Tianming

Inventor after: Yang Zhangjing

Inventor before: Wan Minghua

Inventor before: Yang Guowei

Inventor before: Zhan Tianming

Inventor before: Yang Zhangjing

Inventor before: Zhang Fanlong

CB03 Change of inventor or designer information
CI02 Correction of invention patent application

Correction item: Inventor

Correct: Wu Qiyang|Wan Minghua|Yang Guowei|Zhan Tianming|Yang Zhangjing|Zhang Fanlong

False: Wu Qiyang|Wan Minghua|Yang Guowei|Zhan Tianming|Yang Zhangjing

Number: 18-02

Volume: 39

CI02 Correction of invention patent application
GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Wan Minghua

Inventor after: Yang Guowei

Inventor after: Zhan Tianming

Inventor after: Yang Zhangjing

Inventor after: Zhang Fanlong

Inventor before: Wu Qiyang

Inventor before: Wan Minghua

Inventor before: Yang Guowei

Inventor before: Zhan Tianming

Inventor before: Yang Zhangjing

Inventor before: Zhang Fanlong

CB03 Change of inventor or designer information