CN108985161A - Low-rank sparse representation image feature learning method based on Laplace regularization - Google Patents

Low-rank sparse representation image feature learning method based on Laplace regularization

Info

Publication number
CN108985161A
CN108985161A
Authority
CN
China
Prior art keywords
matrix
feature
model
low
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810588297.6A
Other languages
Chinese (zh)
Other versions
CN108985161B (en)
Inventor
孟敏
兰孟城
武继刚
王勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201810588297.6A priority Critical patent/CN108985161B/en
Publication of CN108985161A publication Critical patent/CN108985161A/en
Application granted granted Critical
Publication of CN108985161B publication Critical patent/CN108985161B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G06F18/2136 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques

Abstract

The present invention discloses a low-rank sparse representation image feature learning method based on Laplace regularization, comprising the following steps: (1) randomly divide the data set into a training set and a test set; (2) construct the undirected weight graph of the training set and compute its Laplacian matrix; (3) initialize the feature extraction matrix and perform a first feature extraction on the training set; (4) design a non-negative low-rank sparse representation learning model; (5) optimize the learning model with the LADMAP optimization method to obtain the optimal feature extraction matrix and the optimal classifier model parameters; (6) perform prediction and recognition on the test samples to verify the feature extraction effect and the classification accuracy. The present invention has the advantages of strong robustness, high recognition rate and wide adaptability; the features extracted from image samples retain more information of the samples and are more discriminative, and the method can be widely used for target recognition, image classification and the like.

Description

Low-rank sparse representation image feature learning method based on Laplace regularization
Technical field
The present invention relates to face image recognition methods, and in particular to a low-rank sparse representation image feature learning method based on Laplace regularization.
Background art
In current large-scale image feature learning, training samples with labels are difficult to obtain, which makes some existing supervised feature learning techniques hard to apply, and training samples contaminated by noise further limit their performance.
Existing methods usually assume that the image sample data are distributed in independent low-dimensional subspaces, or approximately span multiple low-dimensional subspaces, and have a low-rank and sparse structure. Some methods use low-rank sparse constrained representation learning to learn the spatial structure of the original data while denoising it: the low-rank structure of the data is found through the nuclear norm, the sparse structure through the l1 norm, and the noise is handled through the l2,1 norm, so that under certain conditions the low-dimensional subspace structure of the sample data can be accurately recovered; on this basis, applying general dimensionality reduction methods to the recovered data for feature extraction gives a certain robustness. However, existing feature learning methods usually do not couple feature learning with the classification task, so the learned features are less discriminative and less stable, which degrades the overall performance of these methods.
Summary of the invention
To overcome the shortcomings of the above conventional image analysis methods, the present invention proposes a new low-rank sparse representation image feature learning method based on Laplace regularization. The present invention makes full use of the label information of a small number of samples for feature learning and trains the classification model at the same time as feature learning, so that the learned data features are more discriminative and robust and the algorithm is more stable.
In order to solve the above technical problems, the technical solution of the present invention is as follows:
A low-rank sparse representation image feature learning method based on Laplace regularization, characterized by comprising the following steps:
S1: Divide the image data set into a training set S_tr = {X_tr, Y_tr} and a test set S_te = {X_te, F_te}, where X_tr is the training data, Y_tr is the labels of the training set, X_te is the test data, and F_te is the label to be predicted for the test set;
S2: Construct the undirected weight graph G = {V, E} of the training set S_tr, where V is the set of sample points and E is the set of sample edges. Obtain the adjacency matrix W of G and the Laplacian matrix L = D - W of G from the undirected weight graph G, where W_ij = 1 if y_i = y_j and W_ij = 0 otherwise, y_i and y_j are the labels of the i-th and j-th training samples respectively, and D is a diagonal matrix whose diagonal elements are D_ii = sum_j W_ij;
S3: Initialize an orthogonal feature extraction matrix P by principal component analysis, and extract the features P^T X_tr from the training data X_tr through the matrix P;
S4: Design a learning model Γ of non-negative low-rank sparse representation based on S1~S3;
S5: Optimize the learning model Γ of S4 with the LADMAP optimization method to obtain the final feature extraction matrix P* and the final classifier parameters T*;
S6: Extract the features P*^T X_te from the test samples X_te through the final feature extraction matrix P* of S5, and then input the extracted features P*^T X_te into the classifier to predict the labels F_te = T* P*^T X_te.
In a preferred scheme, S4 comprises the following steps:
S4.1: In the feature space P^T X_tr of the training data X_tr, carry out non-negative low-rank sparse constrained representation learning to obtain a correlation model D, which is expressed by the following formula:

min_{Z,E,P} rank(Z) + λ||Z||_1 + γ||E||_{2,1}
s.t. P^T X_tr = P^T X_tr Z + E, Z ≥ 0, P^T P = I

wherein rank(·) is the rank function, Z is the reconstruction coefficient matrix, E is the reconstruction error matrix, P is an orthogonal matrix, λ and γ are penalty factors, I is the identity matrix, and ||·|| denotes a norm;
S4.2: Carry out a regularization operation through the Laplacian matrix L of S2 and combine it with the correlation model D to obtain a model D', which is expressed by the following formula:

min_{Z,E,P} rank(Z) + λ||Z||_1 + γ||E||_{2,1} + β tr(Z L Z^T)
s.t. P^T X_tr = P^T X_tr Z + E, Z ≥ 0, P^T P = I

wherein tr(·) is the trace function and β is a penalty factor;
S4.3: Add a loss function f(P^T X_tr, Y_tr, T) to the model D' to obtain the learning model Γ, which is expressed by the following formula:

min_{Z,E,P,T} rank(Z) + λ||Z||_1 + γ||E||_{2,1} + β tr(Z L Z^T) + α f(P^T X_tr, Y_tr, T)
s.t. P^T X_tr = P^T X_tr Z + E, Z ≥ 0, P^T P = I

wherein α is a penalty factor.
In this preferred scheme, the Laplacian matrix is computed in the original space of the data in S2, and Laplace regularization is then introduced in the feature space of the data in S4.2, which maintains the consistency of the local structure of the data. Secondly, the non-negative low-rank sparse representation learning model Γ operates in the feature space P^T X_tr of the training set X_tr rather than in the original data space.
In a preferred scheme, f(P^T X_tr, Y_tr, T) in S4.3 is expressed by the following formula:

f(P^T X_tr, Y_tr, T) = ||Y_tr - T P^T X_tr||_F^2

wherein ||·||_F is the Frobenius norm (F-norm).
In this preferred scheme, the samples X_tr and their labels Y_tr are linked together through a learnable feature space P of adjustable dimension. The loss function f(P^T X_tr, Y_tr, T) merges the two learning processes of feature learning and classifier parameter learning, so that the feature extraction matrix P* and the final classifier parameters T* are computed at the same time, and one-step optimization drives the feature extraction matrix P* and the classifier parameters T* to a global optimum.
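To make the assembled model Γ concrete, the sketch below evaluates the objective at a candidate point (Z, E, P, T) in Python/NumPy, using the nuclear norm as the usual convex surrogate for rank(Z). The exact forms of the Laplacian term tr(Z L Z^T) and of the loss ||Y_tr - T P^T X_tr||_F^2 follow the reconstruction above and, like the function names and default penalty values, are illustrative assumptions rather than the patent's definitive formulation.

```python
import numpy as np

def gamma_objective(X_tr, Y_tr, L, Z, E, P, T,
                    lam=0.1, gamma=0.1, beta=0.1, alpha=1.0):
    """Evaluate the (relaxed) objective of model Gamma at a candidate point.
    rank(Z) is replaced by its standard convex surrogate, the nuclear norm."""
    low_rank = np.linalg.norm(Z, ord='nuc')              # nuclear norm of Z
    sparsity = lam * np.abs(Z).sum()                     # l1 norm of Z
    noise    = gamma * np.linalg.norm(E, axis=0).sum()   # l2,1 norm of E (column-wise)
    graph    = beta * np.trace(Z @ L @ Z.T)              # Laplacian regularization term
    loss     = alpha * np.linalg.norm(Y_tr - T @ P.T @ X_tr, 'fro') ** 2
    return low_rank + sparsity + noise + graph + loss

def constraint_residual(X_tr, Z, E, P):
    """Residual of the equality constraint P^T X_tr = P^T X_tr Z + E."""
    F = P.T @ X_tr
    return np.linalg.norm(F - F @ Z - E, 'fro')
```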
Compared with the prior art, the beneficial effects of the technical solution of the present invention are:
1. The present invention performs non-negative low-rank sparse constrained representation learning in the feature space of the data, which preserves the global low-rank structure of the data and recovers the subspace structure of the data more accurately, giving strong robustness; moreover, the Laplace regularization term takes the local geometric structure of the data into account, so that the geometric structure of the data feature space stays as consistent as possible with that of the original data space;
2. The algorithm model of the present invention combines feature learning with classifier learning, so that the feature extraction matrix and the classifier parameters reach a global optimum, which effectively improves the accuracy and robustness of the algorithm;
3. The algorithm model of the present invention operates in the low-dimensional feature space of the data, which significantly reduces the time complexity of the algorithm and the number of iterations.
Brief description of the drawings
Fig. 1 is a flowchart of the present embodiment.
Fig. 2 is a schematic diagram of part of the test results of the present embodiment.
Specific embodiment
The accompanying drawings are for illustrative purposes only and shall not be construed as limiting the patent. In order to better illustrate this embodiment, certain parts of the drawings may be omitted, enlarged, or reduced, and do not represent the size of the actual product.
It will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings. The technical solution of the present invention is further described below with reference to the accompanying drawings and embodiments.
The ORL face database consists of a series of face images taken by the Olivetti laboratory in Cambridge, UK, between April 1992 and April 1994. It contains 40 subjects of different ages, genders and ethnicities, with 10 images per subject, for a total of 400 grayscale images, each at a resolution of 32 × 32. The present embodiment is described in further detail with reference to Fig. 1.
A low-rank sparse representation image feature learning method based on Laplace regularization comprises the following steps:
Step S1: The ORL data set is divided into a training set S_tr = {X_tr, Y_tr} and a test set S_te = {X_te, F_te}. For each subject, 5 of its 10 face images are randomly selected as training samples and the remaining 5 as test samples, so X_tr contains 40 × 5 = 200 face images and X_te likewise contains 200 face images. Let X = {X_tr, X_te} denote all face images. To verify the robustness of the present embodiment, white-noise gray images A_i of varying degree are randomly added to every image X_i, i = 1, 2, ..., 400, i.e. X_i + A_i, finally yielding a noisy training set and test set. To facilitate subsequent computation, each image matrix in X_tr is vectorized into a column, giving 200 column vectors, each representing one face image; these 200 column vectors are then arranged column by column into a matrix, still denoted X_tr, and the same operation is applied to X_te. Y_tr ∈ R^{40×200}; its i-th column is the label of the i-th training sample, and the position of its entry equal to 1 indicates the class to which the i-th sample belongs.
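A minimal data-preparation sketch of step S1 follows (Python/NumPy). The random array standing in for the ORL images, the Gaussian noise used as the "white noise" A_i, the noise level and the variable names are all illustrative assumptions; in practice the real 32 × 32 ORL images would be loaded in place of the placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((32, 32, 400))          # placeholder for the real ORL images
subject = np.repeat(np.arange(40), 10)     # subject index of each image

# add noise of varying degree to every image (robustness test, assumed Gaussian)
noisy = faces + 0.1 * rng.standard_normal(faces.shape)

X_tr_cols, X_te_cols, y_tr, y_te = [], [], [], []
for s in range(40):
    idx = rng.permutation(np.where(subject == s)[0])
    for i in idx[:5]:                      # 5 images per subject for training
        X_tr_cols.append(noisy[:, :, i].ravel())
        y_tr.append(s)
    for i in idx[5:]:                      # remaining 5 images for testing
        X_te_cols.append(noisy[:, :, i].ravel())
        y_te.append(s)

X_tr = np.stack(X_tr_cols, axis=1)         # 1024 x 200, one image per column
X_te = np.stack(X_te_cols, axis=1)         # 1024 x 200
Y_tr = np.zeros((40, 200))
Y_tr[y_tr, np.arange(200)] = 1             # one-hot label matrix, 40 x 200
```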
Step S2: Construct the undirected weight graph G = {V, E} of the training set S_tr, where V is the set of data points (200 points in total) and E is the edge set. Compute the adjacency matrix W of this undirected weight graph and the corresponding Laplacian matrix L = D - W, where W_ij = 1 if y_i = y_j and W_ij = 0 otherwise, y_i and y_j are the labels of the i-th and j-th samples respectively, and D is a diagonal matrix with diagonal elements D_ii = sum_j W_ij.
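As an illustrative sketch of step S2 (not part of the patent text), the following Python/NumPy snippet builds the label-based adjacency, degree and Laplacian matrices under the assumption reconstructed above, namely W_ij = 1 when the i-th and j-th training samples share a label; the function name build_laplacian and the toy labels are assumptions for illustration only.

```python
import numpy as np

def build_laplacian(labels):
    """Sketch of step S2: label-based adjacency W, degree D and Laplacian L = D - W."""
    labels = np.asarray(labels).ravel()
    # W_ij = 1 if the i-th and j-th training samples share a label, else 0
    W = (labels[:, None] == labels[None, :]).astype(float)
    D = np.diag(W.sum(axis=1))   # D_ii = sum_j W_ij
    L = D - W                    # self-loops on the diagonal cancel out in L
    return W, D, L

# toy usage: 6 training samples drawn from 3 classes
W, D, L = build_laplacian([0, 0, 1, 1, 2, 2])
```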
Step S3: Initialize the orthogonal feature extraction matrix P by principal component analysis. First, center all samples by subtracting the sample mean; then compute the covariance matrix of the centered samples and perform its eigenvalue decomposition. Sort the obtained eigenvalues from large to small, λ1 ≥ λ2 ≥ ... ≥ λm, and take the eigenvectors corresponding to the first d eigenvalues (d is the dimension of the feature space, a preset value set to 100 here) to form P = (p1, p2, ..., pd). Then use this orthogonal matrix P to perform feature extraction on X_tr and obtain P^T X_tr.
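The PCA initialization of step S3 can be sketched as below, under the assumptions of the reconstruction above (mean-centering, eigendecomposition of the sample covariance with a 1/n normalization, and keeping the top d = 100 eigenvectors); the function name init_projection is illustrative, and X_tr is the matrix built in the step S1 sketch.

```python
import numpy as np

def init_projection(X, d=100):
    """Initialize the orthogonal feature extraction matrix P by PCA (step S3)."""
    Xc = X - X.mean(axis=1, keepdims=True)       # center every sample (column)
    C = (Xc @ Xc.T) / Xc.shape[1]                # sample covariance matrix
    vals, vecs = np.linalg.eigh(C)               # eigendecomposition (ascending order)
    order = np.argsort(vals)[::-1]               # sort eigenvalues from large to small
    return vecs[:, order[:d]]                    # top-d eigenvectors, so P^T P = I

P = init_projection(X_tr, d=100)                 # X_tr from the step S1 sketch
F_tr = P.T @ X_tr                                # first feature extraction P^T X_tr
```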
Steps S4~S8 construct the final model Γ of the present embodiment:

min_{Z,E,P,T} rank(Z) + λ||Z||_1 + γ||E||_{2,1} + β tr(Z L Z^T) + α||Y_tr - T P^T X_tr||_F^2
s.t. P^T X_tr = P^T X_tr Z + E, Z ≥ 0, P^T P = I

With the Laplacian matrix L, P^T X_tr and Y_tr as inputs, the model Γ is solved by the LADMAP optimization algorithm, and the feature extraction matrix P* and the classifier parameters T* are finally learned.
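A full LADMAP solver is not reproduced here; as a sketch, the snippet below gives the three proximal operators that such a solver typically iterates for this model: singular value thresholding for the (relaxed) low-rank term, elementwise soft thresholding for the l1 term, and column-wise shrinkage for the l2,1 term. The helper names are illustrative, and the surrounding update schedule and multiplier updates of LADMAP are omitted.

```python
import numpy as np

def soft_threshold(A, tau):
    """Proximal operator of tau*||.||_1: elementwise shrinkage."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def singular_value_threshold(A, tau):
    """Proximal operator of tau*||.||_* : shrink the singular values of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def column_shrink(E, tau):
    """Proximal operator of tau*||.||_{2,1}: shrink each column of E."""
    norms = np.linalg.norm(E, axis=0)
    scale = np.maximum(norms - tau, 0.0) / np.maximum(norms, 1e-12)
    return E * scale
```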
Step S9: Perform feature extraction P*^T X_te on the test data X_te with the learned feature extraction matrix P*, and then input the extracted features P*^T X_te into the classifier to predict the labels F_te = T* P*^T X_te. For the i-th sample in the test set, its predicted label is the corresponding column of F_te, and the position of the largest element in that column vector gives the class to which the sample is predicted to belong. The predicted class is compared with the true class; if they agree, the number of correct predictions is increased by 1, and finally the recognition accuracy is computed as the number of correctly predicted samples divided by the total number of test samples.
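Step S9 can be sketched as follows, assuming P_star and T_star are the matrices returned by the solver and y_te holds the true class indices of the test samples; the function and variable names are illustrative.

```python
import numpy as np

def predict_and_score(P_star, T_star, X_te, y_te):
    """Step S9: extract test features, predict labels, compute recognition accuracy."""
    F_te = T_star @ P_star.T @ X_te          # predicted label matrix F_te = T* P*^T X_te
    pred = F_te.argmax(axis=0)               # largest entry in each column marks the class
    accuracy = (pred == np.asarray(y_te)).mean()
    return pred, accuracy
```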
In the present embodiment, the experimental platform is MATLAB R2017a running on a WIN10 system with an Intel i7-6700K CPU @ 4.00 GHz. The experimental results are shown in Fig. 2, which illustrates the recognition results on part of the test set (20 classes, 5 samples per class); blue boxes mark correctly identified samples and red boxes mark misidentified samples.
The terms describing positional relationships in the drawings are for illustration only and shall not be construed as limiting this patent.
Obviously, the above embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit the embodiments of the present invention. For those of ordinary skill in the art, other variations or changes in different forms may be made on the basis of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (3)

1. A low-rank sparse representation image feature learning method based on Laplace regularization, characterized by comprising the following steps:
S1: Divide the image data set into a training set S_tr = {X_tr, Y_tr} and a test set S_te = {X_te, F_te}, where X_tr is the training data, Y_tr is the labels of the training set, X_te is the test data, and F_te is the label of the test set to be predicted;
S2: Construct the undirected weight graph G = {V, E} of the training set S_tr, where V is the set of sample points and E is the set of sample edges. Obtain the adjacency matrix W of G and the Laplacian matrix L = D - W of G from the undirected weight graph G, where W_ij = 1 if y_i = y_j and W_ij = 0 otherwise, y_i and y_j are the labels of the i-th and j-th training samples respectively, and D is a diagonal matrix whose diagonal elements are D_ii = sum_j W_ij;
S3: Initialize an orthogonal feature extraction matrix P by principal component analysis, and extract the features P^T X_tr from the training data X_tr through the matrix P;
S4: Design a learning model Γ of non-negative low-rank sparse representation based on S1~S3;
S5: Optimize the learning model Γ of S4 with the LADMAP optimization method to obtain the final feature extraction matrix P* and the final classifier parameters T*;
S6: Extract the features P*^T X_te from the test samples X_te through the final feature extraction matrix P* of S5, and then input the extracted features P*^T X_te into the classifier to predict the labels F_te = T* P*^T X_te.
2. The low-rank sparse representation image feature learning method according to claim 1, characterized in that S4 comprises the following steps:
S4.1: In the feature space P^T X_tr of the training data X_tr, carry out non-negative low-rank sparse constrained representation learning to obtain a correlation model D, which is expressed by the following formula:

min_{Z,E,P} rank(Z) + λ||Z||_1 + γ||E||_{2,1}
s.t. P^T X_tr = P^T X_tr Z + E, Z ≥ 0, P^T P = I

wherein rank(·) is the rank function, Z is the reconstruction coefficient matrix, E is the reconstruction error matrix, P is an orthogonal matrix, λ and γ are penalty factors, I is the identity matrix, and ||·|| denotes a norm;
S4.2: Carry out a regularization operation through the Laplacian matrix L of S2 and combine it with the correlation model D to obtain a model D', which is expressed by the following formula:

min_{Z,E,P} rank(Z) + λ||Z||_1 + γ||E||_{2,1} + β tr(Z L Z^T)
s.t. P^T X_tr = P^T X_tr Z + E, Z ≥ 0, P^T P = I

wherein tr(·) is the trace function and β is a penalty factor;
S4.3: Add a loss function f(P^T X_tr, Y_tr, T) to the model D' to obtain the learning model Γ, which is expressed by the following formula:

min_{Z,E,P,T} rank(Z) + λ||Z||_1 + γ||E||_{2,1} + β tr(Z L Z^T) + α f(P^T X_tr, Y_tr, T)
s.t. P^T X_tr = P^T X_tr Z + E, Z ≥ 0, P^T P = I

wherein α is a penalty factor.
3. The low-rank sparse representation image feature learning method according to claim 2, characterized in that f(P^T X_tr, Y_tr, T) in S4.3 is expressed by the following formula:

f(P^T X_tr, Y_tr, T) = ||Y_tr - T P^T X_tr||_F^2

wherein ||·||_F is the Frobenius norm (F-norm).
CN201810588297.6A 2018-06-08 2018-06-08 Low-rank sparse representation image feature learning method based on Laplace regularization Active CN108985161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810588297.6A CN108985161B (en) 2018-06-08 2018-06-08 Low-rank sparse representation image feature learning method based on Laplace regularization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810588297.6A CN108985161B (en) 2018-06-08 2018-06-08 Low-rank sparse representation image feature learning method based on Laplace regularization

Publications (2)

Publication Number Publication Date
CN108985161A (en) 2018-12-11
CN108985161B CN108985161B (en) 2021-08-03

Family

ID=64540062

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810588297.6A Active CN108985161B (en) 2018-06-08 2018-06-08 Low-rank sparse representation image feature learning method based on Laplace regularization

Country Status (1)

Country Link
CN (1) CN108985161B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288002A (en) * 2019-05-29 2019-09-27 江苏大学 A kind of image classification method based on sparse Orthogonal Neural Network
CN110443169A (en) * 2019-07-24 2019-11-12 广东工业大学 A kind of face identification method of edge reserve judgement analysis
CN112597330A (en) * 2020-12-30 2021-04-02 宁波职业技术学院 Image processing method fusing sparsity and low rank
CN113408606A (en) * 2021-06-16 2021-09-17 中国石油大学(华东) Semi-supervised small sample image classification method based on graph collaborative training

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318261A (en) * 2014-11-03 2015-01-28 河南大学 Graph embedding low-rank sparse representation recovery sparse representation face recognition method
CN105574548A (en) * 2015-12-23 2016-05-11 北京化工大学 Hyperspectral data dimensionality-reduction method based on sparse and low-rank representation graph
CN105740912A (en) * 2016-02-03 2016-07-06 苏州大学 Nuclear norm regularization based low-rank image characteristic extraction identification method and system
US20160239980A1 (en) * 2015-02-12 2016-08-18 Mitsubishi Electric Research Laboratories, Inc. Depth-Weighted Group-Wise Principal Component Analysis for Video Foreground/Background Separation
US20170076180A1 (en) * 2015-09-15 2017-03-16 Mitsubishi Electric Research Laboratories, Inc. System and Method for Processing Images using Online Tensor Robust Principal Component Analysis

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318261A (en) * 2014-11-03 2015-01-28 河南大学 Graph embedding low-rank sparse representation recovery sparse representation face recognition method
US20160239980A1 (en) * 2015-02-12 2016-08-18 Mitsubishi Electric Research Laboratories, Inc. Depth-Weighted Group-Wise Principal Component Analysis for Video Foreground/Background Separation
US20170076180A1 (en) * 2015-09-15 2017-03-16 Mitsubishi Electric Research Laboratories, Inc. System and Method for Processing Images using Online Tensor Robust Principal Component Analysis
CN105574548A (en) * 2015-12-23 2016-05-11 北京化工大学 Hyperspectral data dimensionality-reduction method based on sparse and low-rank representation graph
CN105740912A (en) * 2016-02-03 2016-07-06 苏州大学 Nuclear norm regularization based low-rank image characteristic extraction identification method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wei Jie et al., "Fine-grained image classification based on low-dimensional embedding of visual features", Journal of Computer-Aided Design & Computer Graphics *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288002A (en) * 2019-05-29 2019-09-27 江苏大学 A kind of image classification method based on sparse Orthogonal Neural Network
CN110443169A (en) * 2019-07-24 2019-11-12 广东工业大学 A kind of face identification method of edge reserve judgement analysis
CN110443169B (en) * 2019-07-24 2022-10-21 广东工业大学 Face recognition method based on edge preservation discriminant analysis
CN112597330A (en) * 2020-12-30 2021-04-02 宁波职业技术学院 Image processing method fusing sparsity and low rank
CN113408606A (en) * 2021-06-16 2021-09-17 中国石油大学(华东) Semi-supervised small sample image classification method based on graph collaborative training
CN113408606B (en) * 2021-06-16 2022-07-22 中国石油大学(华东) Semi-supervised small sample image classification method based on graph collaborative training

Also Published As

Publication number Publication date
CN108985161B (en) 2021-08-03

Similar Documents

Publication Publication Date Title
CN108388927B (en) Small sample polarization SAR terrain classification method based on deep convolution twin network
CN109344736B (en) Static image crowd counting method based on joint learning
CN108846426B (en) Polarization SAR classification method based on deep bidirectional LSTM twin network
CN107085716A (en) Across the visual angle gait recognition method of confrontation network is generated based on multitask
Zhao et al. Learning from normalized local and global discriminative information for semi-supervised regression and dimensionality reduction
CN104732208B (en) Video human Activity recognition method based on sparse subspace clustering
CN107133651B (en) The functional magnetic resonance imaging data classification method of subgraph is differentiated based on super-network
CN108985161A (en) A kind of low-rank sparse characterization image feature learning method based on Laplace regularization
CN109145992A (en) Cooperation generates confrontation network and sky composes united hyperspectral image classification method
CN103258210B (en) A kind of high-definition image classification method based on dictionary learning
CN107491734B (en) Semi-supervised polarimetric SAR image classification method based on multi-core fusion and space Wishart LapSVM
CN105260738A (en) Method and system for detecting change of high-resolution remote sensing image based on active learning
CN104239902B (en) Hyperspectral image classification method based on non local similitude and sparse coding
CN104537647A (en) Target detection method and device
CN105023024B (en) A kind of Classifying Method in Remote Sensing Image and system based on regularization set metric learning
CN105989336A (en) Scene identification method based on deconvolution deep network learning with weight
CN111090764A (en) Image classification method and device based on multitask learning and graph convolution neural network
CN109598220A (en) A kind of demographic method based on the polynary multiple dimensioned convolution of input
CN105868711B (en) Sparse low-rank-based human behavior identification method
CN108805061A (en) Hyperspectral image classification method based on local auto-adaptive discriminant analysis
CN105740917B (en) The semi-supervised multiple view feature selection approach of remote sensing images with label study
CN106778714A (en) LDA face identification methods based on nonlinear characteristic and model combination
CN105023239B (en) The high-spectral data dimension reduction method being distributed based on super-pixel and maximum boundary
Zhao et al. Multiple endmembers based unmixing using archetypal analysis
CN102073875A (en) Sparse representation-based background clutter quantification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant