CN101866421B - Method for extracting characteristic of natural image based on dispersion-constrained non-negative sparse coding - Google Patents

Method for extracting characteristic of natural image based on dispersion-constrained non-negative sparse coding Download PDF

Info

Publication number
CN101866421B
Authority
CN
China
Prior art keywords
image
class
dispersion
matrix
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201010017290
Other languages
Chinese (zh)
Other versions
CN101866421A (en)
Inventor
尚丽
刘韬
戴桂平
张愉
周燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Vocational University
Original Assignee
Suzhou Vocational University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Vocational University filed Critical Suzhou Vocational University
Priority to CN 201010017290 priority Critical patent/CN101866421B/en
Publication of CN101866421A publication Critical patent/CN101866421A/en
Application granted granted Critical
Publication of CN101866421B publication Critical patent/CN101866421B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for extracting features of natural images based on dispersion-constrained non-negative sparse coding (DCB-NNSC). The method comprises the following steps: partitioning an image into blocks; reducing the dimensionality by 2D-PCA; making the image data non-negative; initializing the feature basis with 2D-Gabor wavelets; defining the ratio between the within-class and between-class dispersions of the sparse coefficients; training the DCB-NNSC feature basis; and performing image recognition based on the DCB-NNSC feature basis. The method not only imitates the receptive-field properties of neurons in the V1 area of the human primary visual system and thus effectively extracts local image features, but also, compared with the standard non-negative sparse coding algorithm, extracts image features with clearer directionality and edge structure. By minimizing the ratio between the within-class and between-class dispersions of the sparse coefficients, the within-class coefficient data are clustered more tightly while the between-class distance is increased as much as possible, which improves recognition performance in image recognition.

Description

Natural image feature extraction method based on dispersion-constrained non-negative sparse coding
Technical field
The present invention relates to the technical field of digital image processing, and in particular to an image feature extraction method based on dispersion-constrained non-negative sparse coding (Dispersion Constraint Based Non-negative Sparse Coding, DCB-NNSC).
Background technology
With the arrival of the information society, the information people obtain is no longer limited to numbers, symbols and text, but increasingly consists of images. Because most image data have very high dimensionality, or the number of acquired images is huge, storing and processing the image information becomes very inconvenient, and for real-time systems it is undoubtedly difficult to realize. Moreover, in most cases target classification and recognition cannot be carried out directly in these measurement spaces. On the one hand, the dimensionality of the measurement space is very high and unsuitable for the design of classifiers and recognition methods; more importantly, such a description does not directly reflect the essence of the measured object, and it changes with factors such as camera position, illumination and motion. Therefore, in order to design classifiers and recognition methods, the image must be transformed from the measurement space into a feature space of greatly reduced dimensionality, in which the studied image is represented by one or several feature vectors [see: Bian Zhaoqi, Zhang Xuegong. Pattern Recognition (2nd edition) [M]. Beijing: Tsinghua University Press, 1999]. Feature extraction has thus become a key technology in target classification and recognition; it attracts more and more attention from researchers and has become a focus of pattern recognition research.
Methods developed in recent years, such as independent component analysis (ICA), sparse coding (SC), non-negative matrix factorization (NMF) and non-negative sparse coding (NNSC), extract internal image features from the viewpoint of the higher-order statistical dependencies between data, and make more effective use of the essential statistical structure of the input data. Neurophysiological research shows [see: Olshausen B.A., Field D.J. Emergence of simple-cell receptive field properties by learning a sparse code for natural images [J]. Nature, 1996, 381: 607-609] that the visual cortex of primates adopts an SC strategy to represent natural images and to extract their internal features efficiently; the resulting feature basis functions are localized and oriented in the transform domain and are sensitive to details in different directions, while their superposition coefficients realize a sparse coding of the image. Likewise, the spatial filter bank corresponding to the SC basis functions can be regarded as a set of band-pass filters that simulates the receptive-field properties of simple cells in the V1 area of the human primary visual system.
Although the SC algorithm simulates, to some extent, the physiological properties of the V1 area of the human primary visual system, it does not fully conform to the physiology of human vision. Considering the physiological properties of human visual data processing, Hoyer P.O. first proposed the non-negative sparse coding algorithm (NNSC) [see: Hoyer P.O. Non-negative sparse coding [C]. In Neural Networks for Signal Processing XII (Proc. IEEE Workshop on Neural Networks for Signal Processing), Martigny, Switzerland, 2002: 557-565]; compared with the features obtained by the SC algorithm, the features obtained by this training algorithm are closer to the physiology of human vision. However, although both the SC algorithm and the non-negative sparse coding algorithm can simulate physiological models of human vision and extract image features, neither of them takes class information into account, so the extracted features are not well suited to pattern recognition. The image feature extraction method based on dispersion-constrained non-negative sparse coding proposed by the present invention incorporates class-constraint prior information into the optimization training; it not only extracts image features effectively, but the extracted features are also more favourable for pattern classification and recognition.
Summary of the invention
The objective of the invention is to overcome the deficiencies of prior-art NNSC image feature extraction methods and to propose a new image feature extraction method based on dispersion-constrained non-negative sparse coding (DCB-NNSC).
The technical principle of the invention is as follows: first, the training images are partitioned into blocks by random sampling (the training images are assumed to be noise-free); to reduce the amount of computation on the image data, the dimensionality of the training images is reduced with the 2D-PCA algorithm; the dimension-reduced image data are then made non-negative, yielding the input data matrix of the DCB-NNSC training algorithm; to accelerate the search for the optimal feature basis, the dimension-reduced images are used to initialize the feature basis with 2D-Gabor wavelets; the sparse penalty function and the ratio between the within-class and between-class dispersions of the sparse coefficients are then determined; an objective function is formed from the image reconstruction error, the sparse penalty, and the minimization of the ratio between the within-class and between-class dispersions of the sparse coefficients (i.e. the class constraint), and training yields the DCB-NNSC feature basis matrix; finally, recognition is realized with a radial basis probabilistic neural network classifier. If the training images contain noise, denoising is performed first and the above procedure is then repeated.
The technical scheme of the invention is an image feature extraction method based on dispersion-constrained non-negative sparse coding, comprising the following processing steps:
1) Construct the image data training set, including image block partitioning, image data dimensionality reduction, non-negative processing of the data, and initialization of the feature basis matrix A of the DCB-NNSC algorithm.
2) Determine the sparse penalty function: the normal inverse Gaussian (NIG) density model, which adapts to the image characteristics, is adopted as the prior sparse density distribution, and its negative logarithm is taken as the sparse penalty, i.e. f(·) = −log[p(·)]. The NIG model of a random variable y is given by
p(y) = \frac{\delta\sqrt{\alpha}}{\sqrt{2\pi}} \exp\left(\delta\sqrt{\alpha^2 - \beta^2}\right) \exp\left[\beta(y - \mu) - \alpha\sqrt{(y - \mu)^2 + \delta^2}\right] \left[(y - \mu)^2 + \delta^2\right]^{-3/4}    (1)
where the parameter α controls the steepness of the density distribution; the parameter β controls the skewness of the density: for β < 0 the density is skewed to the left, for β > 0 it is skewed to the right, and β = 0 means the density is symmetric about the location parameter μ; the parameter δ is a positive scale parameter that rescales α and β (α → αδ, β → βδ) so that the shape of the density about the location parameter μ is invariant to the local scale. The four parameters [α, β, μ, δ]^T are computed from the first four cumulants estimated from the sample data.
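A minimal numerical sketch of this penalty is given below (NumPy; the parameter values are illustrative, not taken from the patent):

```python
import numpy as np

def nig_density(y, alpha, beta, mu, delta):
    """Approximate NIG density of formula (1)."""
    q = np.sqrt((y - mu) ** 2 + delta ** 2)                # sqrt((y-mu)^2 + delta^2)
    pref = delta * np.sqrt(alpha) / np.sqrt(2 * np.pi)
    return (pref * np.exp(delta * np.sqrt(alpha ** 2 - beta ** 2))
            * np.exp(beta * (y - mu) - alpha * q) * q ** (-1.5))

def sparse_penalty(y, alpha, beta, mu, delta):
    """f(y) = -log p(y), the sparse penalty used in the objective."""
    return -np.log(nig_density(y, alpha, beta, mu, delta))

# Example: a symmetric, heavy-tailed prior (beta = 0, mu = 0)
y = np.linspace(-3, 3, 7)
print(sparse_penalty(y, alpha=7.0, beta=0.0, mu=0.0, delta=1.0))
```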
3) Determine the within-class and between-class dispersion log-ratio constraint of the sparse feature coefficients:
Let the feature coefficient data set be S = [S_1, S_2, ..., S_k, ..., S_C], where S_k is the feature coefficient sample set of the k-th class, k = 1, 2, 3, ..., C, and C is the number of classes; each S_k is a vector and contains n_k data points. The within-class scatter matrix S_W and the between-class scatter matrix S_B of the feature coefficients are then obtained as
S_W = \sum_{k=1}^{C} \sum_{s \in S_k} (s - m_k)(s - m_k)^T    (2)
where m_k = \frac{1}{n_k} \sum_{s \in S_k} s is the mean vector of the k-th class and s runs over the n_k data points of the k-th class feature coefficient sample set;
S_B = \sum_{k=1}^{C} n_k (m_k - m)(m_k - m)^T    (3)
where m = \frac{1}{n} \sum_{k=1}^{C} \sum_{s \in S_k} s is the mean vector of all samples of all classes and n is the total number of samples of all classes. The ratio between the within-class and between-class dispersions is
S_W / S_B = \frac{\sum_{k=1}^{C} \sum_{s \in S_k} (s - m_k)(s - m_k)^T}{\sum_{k=1}^{C} n_k (m_k - m)(m_k - m)^T}    (4)
The within-class dispersion S_W and the between-class dispersion S_B reflect the second-order statistics of the data set, and both are global descriptions of it. A smaller S_W means the data within each class are more tightly clustered; a larger S_B means the classes are more widely separated. Therefore, the smaller the ratio between the within-class and between-class dispersions, the better the within-class aggregation.
When minimizing the objective function of the DCB-NNSC algorithm, to simplify the derivative computation the logarithm of the ratio of S_W to S_B is used as the class-information constraint term:
D = \ln(S_W / S_B) = \ln\left[\frac{\sum_{k=1}^{C} \sum_{s \in S_k} (s - m_k)(s - m_k)^T}{\sum_{k=1}^{C} n_k (m_k - m)(m_k - m)^T}\right]    (5)
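A sketch of how S_W, S_B and the log-ratio D of formulas (2)–(5) can be computed for labelled coefficient vectors follows (NumPy; reducing the ratio of the two scatter matrices to the ratio of their traces is an assumption of this sketch, since the patent writes the ratio of the matrices directly):

```python
import numpy as np

def scatter_log_ratio(S, labels):
    """S_W, S_B of formulas (2)-(3) and D = ln(S_W / S_B) of formula (5).

    S      : (n_samples, dim) array of sparse-coefficient vectors
    labels : (n_samples,) integer class labels
    """
    m = S.mean(axis=0)                               # overall mean vector m
    dim = S.shape[1]
    S_W = np.zeros((dim, dim))
    S_B = np.zeros((dim, dim))
    for k in np.unique(labels):
        S_k = S[labels == k]
        m_k = S_k.mean(axis=0)                       # class mean m_k
        d = S_k - m_k
        S_W += d.T @ d                               # sum of (s - m_k)(s - m_k)^T
        diff = (m_k - m)[:, None]
        S_B += len(S_k) * (diff @ diff.T)            # n_k (m_k - m)(m_k - m)^T
    D = np.log(np.trace(S_W) / np.trace(S_B))        # scalarised via traces (assumption)
    return S_W, S_B, D

# Tiny example with two classes of 2-D coefficients
S = np.array([[1.0, 0.2], [1.1, 0.1], [3.0, 0.9], [3.2, 1.1]])
labels = np.array([0, 0, 1, 1])
print(scatter_log_ratio(S, labels)[2])
```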
4) Modelling of the dispersion-constrained non-negative sparse coding (DCB-NNSC) objective function:
Three factors are combined into the objective function of dispersion-constrained non-negative sparse coding: minimum error of the image reconstructed from the extracted features, an adaptive sparse distribution of the feature coefficients, and a minimum log-ratio of the within-class dispersion S_W to the between-class dispersion S_B:
J(s) = \frac{1}{2}\|X - AS\|^2 + \lambda \sum_i f\left(\frac{s_i}{\sigma_i}\right) + \ln\left(\frac{S_W}{S_B}\right)    (6)
subject to the constraints X(x, y) ≥ 0, a_i ≥ 0, s_i ≥ 0 and ||s_i|| = 1, where σ_i = √(E{s_i²}). The parameter λ is a positive constant; X = (X_1, X_2, ..., X_{2n})^T denotes the preprocessed natural image data, of size 2n × 5LN, where N is the number of natural images, L is the number of sub-image blocks into which each natural image is divided, and 2n is the dimension of a sub-image block; A = [a_1, a_2, ..., a_i, ..., a_m] is the 2n × m feature basis matrix, where a_i is the i-th column vector of A; S = [s_1, s_2, ..., s_i, ..., s_m]^T is the m × 5LN feature coefficient matrix, where s_i is the i-th row vector of S; the sparse penalty is f(·) = −log[p(·)], with p(·) computed from formula (1), and the dispersion log-ratio term ln(S_W/S_B) is computed as in formula (5).
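A compact sketch of objective (6) is shown below, assuming the sparse_penalty and scatter_log_ratio helpers from the sketches above are in scope; assigning one class label per column of S, and using the trace-based log-ratio, are assumptions of this sketch:

```python
import numpy as np

def dcb_nnsc_objective(X, A, S, labels, lam, nig_params):
    """Objective (6): reconstruction error + sparse penalty + dispersion log-ratio.

    X : (2n, P) non-negative data, A : (2n, m) basis, S : (m, P) coefficients.
    labels assigns a class to each of the P columns of S.
    nig_params = (alpha, beta, mu, delta) for the NIG penalty of formula (1).
    """
    recon = 0.5 * np.sum((X - A @ S) ** 2)                    # 1/2 ||X - AS||^2
    sigma = np.sqrt(np.mean(S ** 2, axis=1, keepdims=True))   # sigma_i = sqrt(E{s_i^2})
    sparse = lam * np.sum(sparse_penalty(S / sigma, *nig_params))
    _, _, D = scatter_log_ratio(S.T, labels)                  # ln(S_W / S_B)
    return recon + sparse + D
```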
5) Update rules for the feature coefficients S and the feature basis A:
A gradient algorithm is used to update the feature coefficients S and the feature basis A alternately: first A is held fixed and S is updated by the gradient algorithm, then S is held fixed and A is updated. The gradient with respect to A is
\nabla J(a_i) = \frac{\partial J(A, S)}{\partial a_i} = -\left[X(x, y) - \sum_{i=1}^{n} a_i(x, y) s_i\right] s_i^T + \gamma a_i    (7)
and the update of the feature coefficients S uses
\nabla J(s_i) = \frac{\partial J(A, S)}{\partial s_i} = -a_i^T\left[X(x, y) - \sum_{i=1}^{n} a_i(x, y) s_i\right] + \frac{\lambda}{\sigma_i} f'\left(\frac{s_i}{\sigma_i}\right) + 2\lambda_2\left[\frac{s_i - m_k}{S_W} - \frac{m_k - m}{S_B}\right]    (8)
where σ_i = √(E{s_i²}) and the sparse penalty f(s_i/σ_i) = −log[p(s_i/σ_i)] is computed from formula (1).
Using the above update formulas for A and S, the local features of the image set can be extracted. The extracted feature basis not only has very clear directionality, locality and local edge structure, but also incorporates class information; with these features, similar images can be well reconstructed and image recognition is facilitated.
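A sketch of one alternating update pass is given below, assuming the sparse_penalty helper above. It follows the descent direction of the reconstruction and sparsity gradients of (7)–(8); the class-dispersion term of (8) is omitted for brevity, f' is approximated numerically, the NIG parameters and step sizes are illustrative, and renormalising the basis columns is a common NNSC convention adopted here as an assumption:

```python
import numpy as np

def update_step(X, A, S, lam, eta=0.01, gamma=0.01):
    """One alternating gradient step on S and then A (cf. (7)-(8), simplified)."""
    sigma = np.sqrt(np.mean(S ** 2, axis=1, keepdims=True)) + 1e-12
    u = S / sigma
    eps = 1e-4
    # numerical derivative of the sparse penalty f(u) = -log p(u); NIG params illustrative
    fprime = (sparse_penalty(u + eps, 7.0, 0.0, 0.0, 1.0)
              - sparse_penalty(u - eps, 7.0, 0.0, 0.0, 1.0)) / (2 * eps)

    grad_S = -A.T @ (X - A @ S) + (lam / sigma) * fprime
    S = np.clip(S - eta * grad_S, 0.0, None)                 # descent + non-negativity

    grad_A = -(X - A @ S) @ S.T + gamma * A
    A = np.clip(A - eta * grad_A, 0.0, None)
    A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12    # unit-norm basis columns
    return A, S
```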
6) Recognition:
After image preprocessing, the training and test images are each optimized with the DCB-NNSC algorithm to obtain the training feature matrix, the training sparse coefficient matrix, the test feature matrix and the test sparse coefficient matrix; classification is then realized with a radial basis probabilistic neural network (RBPNN) classifier, yielding the recognition result.
In step 1), the image block partitioning is as follows: first, N natural images with the same image attributes are chosen; each image is decomposed with bidimensional empirical mode decomposition (BEMD) into six layers according to frequency, the residual component is discarded, and only the five IMF components IMF1–IMF5 of the BEMD decomposition are used for each image; each IMF image is then partitioned into L sub-image blocks of size p × p, giving an input image matrix of size p² × 5LN.
In step 1), the image dimensionality reduction uses 2D-PCA and is divided into the following steps:
1. Compute the covariance matrix of the sub-image data set: the training sub-image data are first mean-centred (standardized), and the covariance matrix G is then computed as
G = \frac{1}{M} \sum_{j=1}^{M} (x_j - \bar{x})^T (x_j - \bar{x}),  j = 1, 2, ..., M,
where x_j is a training sub-image block, M is the number of training sub-image blocks, and \bar{x} = \frac{1}{M} \sum_{j=1}^{M} x_j is the mean of all sub-image blocks.
2. Compute the number of principal components: let U be the eigenvector matrix and D the diagonal matrix of eigenvalues, so that G × U = U × D. The eigenvectors corresponding to the d largest eigenvalues form the matrix U_d = [u_1, u_2, ..., u_d], and the principal components of a training sample are computed as Y_j = x_j U_d.
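A sketch of this 2D-PCA step is given below (NumPy; the block size and number of retained components are illustrative):

```python
import numpy as np

def two_d_pca(blocks, d):
    """2D-PCA on a stack of p x p training blocks.

    blocks : (M, p, p) array of training sub-image blocks
    d      : number of leading eigenvectors to keep
    Returns the projection matrix U_d (p, d) and the projected blocks (M, p, d).
    """
    mean_block = blocks.mean(axis=0)                     # average sub-image
    centred = blocks - mean_block
    # image covariance matrix G = (1/M) * sum_j (x_j - mean)^T (x_j - mean)
    G = np.einsum('mij,mik->jk', centred, centred) / blocks.shape[0]
    eigvals, eigvecs = np.linalg.eigh(G)                 # ascending eigenvalues
    U_d = eigvecs[:, ::-1][:, :d]                        # d largest eigenvalues first
    projections = blocks @ U_d                           # principal components Y_j = x_j U_d
    return U_d, projections

# Example with random 8 x 8 blocks (illustrative sizes only)
blocks = np.random.rand(200, 8, 8)
U_d, Y = two_d_pca(blocks, d=4)
print(U_d.shape, Y.shape)
```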
In step 1), the non-negative processing of the data is as follows: in the dimension-reduced image data set, all positive elements form the matrix X_on, while all negative elements and zeros, after taking absolute values, form the matrix X_off; X_on and X_off are combined into a non-negative data matrix X = (X_on; X_off), whose size is 2n × 5LN.
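A small sketch of this ON/OFF separation (NumPy):

```python
import numpy as np

def split_on_off(X0):
    """Split signed data into non-negative ON / OFF channels.

    X_on keeps the positive elements (zeros elsewhere); X_off keeps the absolute
    values of the negative elements.  Stacking them doubles the row dimension,
    giving the 2n x 5LN non-negative matrix X.
    """
    X_on = np.where(X0 > 0, X0, 0.0)
    X_off = np.where(X0 < 0, -X0, 0.0)
    return np.vstack([X_on, X_off])

X0 = np.array([[1.5, -0.2], [-3.0, 0.7]])
print(split_on_off(X0))
```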
In step 1), the initialization of the feature basis matrix A of the DCB-NNSC algorithm is as follows: a 2D-Gabor wavelet basis with 8 orientations and 8 frequency scales per orientation is used to initialize the feature basis matrix A of the DCB-NNSC algorithm.
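A sketch of such an 8-orientation, 8-frequency Gabor initialization is given below. It builds the kernels in the 8 × 8 pixel domain, so the resulting matrix is 64 × 64; obtaining the 40 × 64 matrix of the embodiment would additionally require projecting onto the 2D-PCA components, and the frequency range and Gaussian width used here are illustrative assumptions:

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Real part of a 2D-Gabor wavelet with given spatial frequency and orientation."""
    half = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size] - half
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def init_gabor_basis(size=8, n_theta=8, n_freq=8):
    """Initial basis A_0: one flattened Gabor kernel per column."""
    thetas = np.arange(n_theta) * np.pi / n_theta
    freqs = 0.05 + 0.4 * np.arange(n_freq) / n_freq     # illustrative frequency range
    cols = [gabor_kernel(size, f, t, sigma=size / 4.0).ravel()
            for t in thetas for f in freqs]
    return np.array(cols).T                             # (size*size, n_theta*n_freq)

A0 = init_gabor_basis()
print(A0.shape)   # (64, 64)
```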
The RBPNN neural network model described in step 6) is a four-layer neural network model comprising an input layer, two hidden layers and an output layer. The first hidden layer is mainly composed of the centre vectors of the pattern classes in the sample space (also called hidden centre vectors) and is structurally equivalent to the first layer of an RBFNN; the second hidden layer is equivalent to the hidden layer of a PNN and its nodes perform a summation operation; the third layer is equivalent to the second layer of an RBFNN, with output nodes that are linear as in an RBFNN; the fourth layer is the output layer. The RBPNN model thus combines the advantages of the RBFNN and PNN models while avoiding the shortcomings of both, and therefore has higher classification performance.
The advantage of the invention is that it proposes a new image feature extraction method based on dispersion-constrained non-negative sparse coding (DCB-NNSC). The method can simulate the receptive-field characteristics of neurons in the V1 area of the human primary visual system and effectively extract local image features; compared with the standard non-negative sparse coding algorithm, the extracted image features have clearer directionality and local edge structure; at the same time, by minimizing the ratio between the within-class and between-class dispersions, the within-class data are clustered more tightly while the between-class distance is increased as much as possible; when used for image recognition, the invention improves recognition performance.
Description of drawings:
Fig. 1 is the overall experimental framework (flow chart of the image feature extraction method based on dispersion-constrained non-negative sparse coding (DCB-NNSC)).
Fig. 2 shows the palmprint ROI extraction process: after palmprint localization, preprocessing and segmentation, a palmprint ROI image of 128 × 128 pixels is obtained; the palmprint image comes from the palmprint database of The Hong Kong Polytechnic University.
Fig. 3 shows the BEMD layered images: the first five IMF components of a 128 × 128 pixel palmprint image after a six-layer BEMD decomposition, with the residual component ignored; (a) the first IMF component; (b) the second IMF component; (c) the third IMF component; (d) the fourth IMF component; (e) the fifth IMF component.
Fig. 4 shows the 2D-PCA basis images: the first 40 principal component basis images obtained after 2D-PCA dimensionality reduction.
Fig. 5 shows the 64 amplitude maps obtained by decomposing an image with 2D-Gabor wavelets of 8 orientations and 8 frequency scales.
Fig. 6 shows the image reconstructed from the 64 amplitude maps.
Fig. 7 shows NIG sparse distribution plots of the logarithmic NIG density for β = μ = 0, δ = 1 and different values of α.
Fig. 8 shows NIG sparse distribution plots of the logarithmic NIG density for α = 7, μ = 0, δ = 1 and different values of β.
Fig. 9 shows the feature basis trained on the palmprint image training set with DCB-NNSC, where (a) is the feature basis image of the ON channel, (b) is the feature basis image of the OFF channel, and (c) is the feature basis difference image obtained by subtracting the OFF channel from the ON channel.
Fig. 10 shows the distribution of matching scores for genuine matching and imposter matching.
Embodiment
The palmprint image database provided by The Hong Kong Polytechnic University (PolyU) (http://www.comp.polyu.edu.hk/~biometrics) is a widely used database for palmprint recognition research. The database was collected from people of different sexes and age groups (under 30, between 30 and 50, and over 50 years old). It contains 7,752 images from 386 individuals, each image being 384 × 284 pixels (75 dpi). We use 600 palmprint images of 100 individuals from this database (6 palmprint images per person) as the experimental images; each person's first three palmprint images are selected as training images and the last three as test images. The training and test images were acquired under different illumination conditions with different acquisition devices, with a sampling interval of about two months between them.
Fig. 1 shows the overall experimental framework. The present invention can be divided into the following steps:
Step 1. Construct the image data training set and test set.
The 600 images of the first 100 individuals in the PolyU palmprint database are selected; each person's first three images form the training set (300 images) and the last three images form the test set (300 images). For each palmprint image, the ROI is extracted with the palmprint localization and segmentation method proposed by Zhang et al., giving a 128 × 128 sub-image, as shown in Fig. 2. Each image is then decomposed with the BEMD method into 5 layers from high frequency to low frequency; the residual of each decomposition level is not considered, and the image formed by the first five IMF components of the BEMD decomposition is used as the sub-images of each image, as shown in Fig. 3.
Then, from every 128 × 128 image, 200 window blocks of 8 × 8 pixels are cut at random; each sub-image block is stored column-wise, giving a 64-dimensional training set of size 64 × (200 × 15 × 100) = 64 × 300,000.
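A sketch of this random window sampling is given below (NumPy; the image content is random here for illustration, and the full training matrix of the embodiment would concatenate the outputs over every IMF layer of every training image):

```python
import numpy as np

def sample_patches(image, n_patches=200, patch=8, rng=None):
    """Randomly crop n_patches patch x patch windows and stack them as columns."""
    rng = np.random.default_rng(rng)
    h, w = image.shape
    cols = []
    for _ in range(n_patches):
        r = rng.integers(0, h - patch + 1)
        c = rng.integers(0, w - patch + 1)
        cols.append(image[r:r + patch, c:c + patch].ravel(order='F'))  # store column-wise
    return np.stack(cols, axis=1)                       # (patch*patch, n_patches)

img = np.random.rand(128, 128)
print(sample_patches(img).shape)   # (64, 200)
```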
Further, to increase the computing speed, the 2D-PCA dimension reduction method is used to reduce the training set to 40 dimensions, i.e. the first 40 principal components are used as the training data set of the DCB-NNSC algorithm, denoted X_0. The 40 principal component basis images of one palmprint image are shown in Fig. 4. The dimension-reduced data set X_0 is then made non-negative, giving the non-negative training set X_train of size 80 × 300,000.
The palmprint test images are processed in the same way, giving a non-negative test set X_test of size 80 × 300,000.
To speed up the search for the optimal basis, features are extracted from the data set X_0 with 2D-Gabor wavelets. Since an optimal 2D-Gabor wavelet feature basis is not required, a 2D-Gabor wavelet basis with 8 orientations and 8 frequency scales per orientation is selected to initialize the feature basis matrix A_0 of the DCB-NNSC algorithm; its size is 40 × 64. A_0 is made non-negative, giving a non-negative feature matrix of size 80 × 64, which serves as the initial feature basis matrix of the DCB-NNSC algorithm. The 64 amplitude maps obtained by decomposing one palmprint image with 2D-Gabor wavelets of 8 orientations and 8 frequency scales are shown in Fig. 5, and the image reconstructed from these 64 amplitude maps is shown in Fig. 6.
Step 2. Determine the sparse penalty function.
For each feature coefficient vector s_i (i = 1, 2, ..., 64), the normal inverse Gaussian (NIG) density model adapted to the image characteristics is used to estimate its prior sparse density p(s_i); taking the negative logarithm of p(s_i) gives the sparse penalty of the feature coefficient, i.e. f(s_i) = −log[p(s_i)], where p(s_i) is computed as in formula (1):
p(s_i) = \frac{\delta\sqrt{\alpha}}{\sqrt{2\pi}} \exp\left(\delta\sqrt{\alpha^2 - \beta^2}\right) \exp\left[\beta(s_i - \mu) - \alpha\sqrt{(s_i - \mu)^2 + \delta^2}\right] \left[(s_i - \mu)^2 + \delta^2\right]^{-3/4}
The four parameters [α, β, μ, δ]^T in the above formula are computed from the first four cumulants C^{(1)}, C^{(2)}, C^{(3)} and C^{(4)}; first the auxiliary parameter r_3 and the normalized kurtosis k_4 are computed as
r_3 = \frac{C^{(3)}}{[C^{(2)}]^{3/2}}, \quad k_4 = \frac{C^{(4)}}{[C^{(2)}]^{2}}, \quad C^{(2)} = m_y^{(2)} = \sigma_y^2, \quad C^{(3)} = m_y^{(3)} = E\{y^3\}, \quad C^{(4)} = m_y^{(4)} - 3(m_y^{(2)})^2    (9)
Then
\zeta = 3\left[k_4 - \frac{4}{3} r_3^2\right]^{-1}, \quad \rho = \frac{r_3}{3}\sqrt{\zeta}    (10)
and the four parameters [α, β, μ, δ]^T are computed as
\delta = \sqrt{C^{(2)}\,\zeta\,(1 - \rho^2)}, \quad \alpha = \frac{\zeta}{\delta\sqrt{1 - \rho^2}}, \quad \beta = \alpha\rho, \quad \mu = C^{(1)} - \rho\sqrt{C^{(2)}\,\zeta}    (11)
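A sketch of this moment-based estimation (9)–(11) is given below, assuming the cumulants are estimated directly from a data vector y (NumPy; the synthetic test data are illustrative only):

```python
import numpy as np

def estimate_nig_params(y):
    """Estimate [alpha, beta, mu, delta] from sample cumulants, following (9)-(11)."""
    C1 = np.mean(y)
    yc = y - C1
    C2 = np.mean(yc ** 2)                              # variance
    C3 = np.mean(yc ** 3)                              # third central moment
    C4 = np.mean(yc ** 4) - 3 * C2 ** 2                # fourth cumulant
    r3 = C3 / C2 ** 1.5                                # skewness
    k4 = C4 / C2 ** 2                                  # normalized (excess) kurtosis
    zeta = 3.0 / (k4 - (4.0 / 3.0) * r3 ** 2)          # formula (10); assumes this is > 0
    rho = (r3 / 3.0) * np.sqrt(zeta)
    delta = np.sqrt(C2 * zeta * (1 - rho ** 2))        # formula (11)
    alpha = zeta / (delta * np.sqrt(1 - rho ** 2))
    beta = alpha * rho
    mu = C1 - rho * np.sqrt(C2 * zeta)
    return alpha, beta, mu, delta

# Check on synthetic heavy-tailed data (Laplacian-like, illustrative only)
y = np.random.default_rng(0).laplace(size=100000)
print(estimate_nig_params(y))
```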
Using the above model, the sparse density of one randomly generated vector of the initial feature coefficient matrix is computed; the sparse distribution plots obtained with different parameter choices are shown in Fig. 7 and Fig. 8.
Step 3. Determine the within-class and between-class dispersion log-ratio constraint of the sparse feature coefficients.
The logarithm of the ratio of the within-class scatter matrix S_W to the between-class scatter matrix S_B is computed from
D = \ln(S_W / S_B) = \ln\left[\frac{\sum_{k=1}^{C} \sum_{s \in S_k} (s - m_k)(s - m_k)^T}{\sum_{k=1}^{C} n_k (m_k - m)(m_k - m)^T}\right]
where C is the number of palmprint classes; in the palmprint image database we use, C = 100. n_k is the number of data points contained in the k-th palmprint class S_k (k = 1, 2, 3, ..., C), here n_k = 6 × 128²; S_k denotes the k-th palmprint class, with k = 1, 2, 3, ..., 100; m_k = \frac{1}{n_k}\sum_{s \in S_k} s is the mean vector of the k-th class, and s is a data point of the k-th class S_k; m = \frac{1}{n}\sum_{i=1}^{n} s_i is the mean vector of all samples of all classes, and n is the total number of palmprint data points of all classes, here n = 600 × 128².
Step 4. Modelling of the dispersion-constrained non-negative sparse coding (DCB-NNSC) objective function.
From the sparse penalty determined in step 2 and the dispersion log-ratio constraint determined in step 3, combined with the 2-norm of the image reconstruction error, the minimization objective of the DCB-NNSC algorithm is established:
J(s) = \frac{1}{2}\|X - AS\|_2^2 + \lambda \sum_i f\left(\frac{s_i}{\sigma_i}\right) + \ln\left(\frac{S_W}{S_B}\right)
where the non-negative training data matrix X = X_train, of size 80 × 75,000, is obtained in step 1 of the embodiment; the non-negative feature basis matrix A, of size 80 × 64, is obtained in step 1 of the embodiment; the non-negative feature coefficient matrix S, of size 64 × 75,000, is generated randomly; and the positive parameter λ = 0.5.
Step 5. Update the feature coefficients S and the feature basis A to obtain the feature basis images.
From the gradient of the objective function with respect to the feature basis, ∇J(a_i) (see formula (7)), the update formula of the feature basis matrix is obtained:
a_i^{t+1} = a_i^{t} + \left[X(x, y) - \sum_{i=1}^{n} a_i^{t}(x, y) s_i\right] s_i^T - \gamma a_i^{t}    (12)
where t is the iteration number. Similarly, from the gradient of the objective function with respect to the feature coefficients, ∇J(s_i) (see formula (8)), the update formula of the feature coefficients is obtained:
s_i^{t+1} = s_i^{t} - a_i^T\left[X(x, y) - \sum_{i=1}^{n} a_i(x, y) s_i^{t}\right] + \frac{\lambda}{\sigma_i} f'\left(\frac{s_i^{t}}{\sigma_i}\right) + 2\left[\frac{s_i^{t} - m_k}{S_W} - \frac{m_k - m}{S_B}\right]    (13)
The initial non-negative feature basis A is determined by the 2D-Gabor wavelet basis, and the feature coefficients are a randomly generated non-negative matrix. The feature coefficients S and the feature basis A are learned with the alternating update scheme: first a_i is held fixed and the feature coefficient s_i is updated with formula (13); then s_i is held fixed and the feature basis vector a_i is updated with formula (12), with σ_i = √(E{s_i²}) and the sparse penalty f(s_i/σ_i) = −log[p(s_i/σ_i)]. The minimum image reconstruction error is set to 2%; the iteration stops once this condition is satisfied. The palmprint training set (100 classes, the first three images of each class as the training set; its construction is described in step 1) is trained with the DCB-NNSC algorithm using the above learning rules, and the resulting feature basis is shown in Fig. 9, where (a) is the feature basis image of the ON channel, whose feature basis matrix is denoted A_on; (b) is the feature basis image of the OFF channel, whose feature basis matrix is denoted A_off; and (c) is the feature basis difference image of the ON and OFF channels, with feature basis matrix A = A_on − A_off. This feature basis difference image is the feature basis image obtained by training on the palmprint image database with the DCB-NNSC algorithm; bright regions represent positive pixel values, dark regions represent negative pixel values, and grey regions represent zero pixel values. It can be seen that the feature basis has clear directionality and locality. These feature basis images can be used for image reconstruction, denoising, recognition and other processing.
Step 6. Recognition.
After the feature basis matrix A is obtained by the training of step 5, its inverse (or pseudo-inverse) matrix is computed, W = A^{-1}. Since the 2D-PCA method is used for preprocessing, the DCB-NNSC algorithm is not run directly in the 128 × 128 image pixel space but on the first d principal component coefficients of the palmprint images; the principal component matrix obtained from the 2D-PCA transform is U_d, of size 64 × 40. The feature coefficients of the palmprint training set X_train are then
B_{train} = W R_{train}^T = W (X_{train} U_d)^T    (14)
and for the test images X_test the feature coefficients are
B_{test} = W R_{test}^T = W (X_{test} U_d)^T    (15)
Applying a classifier to the above training and test coefficient matrices yields the recognition result. For example, when a Euclidean-distance classifier is used with the feature basis extracted by the DCB-NNSC algorithm, the palmprint recognition accuracy is 97.18%.
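A sketch of formulas (14)–(15) followed by a simple Euclidean-distance (nearest-neighbour) classification is given below; the matrix shapes in the docstring and the random example are assumptions for illustration:

```python
import numpy as np

def recognition_features(A, X_train, X_test, U_d):
    """Coefficients B = W (X U_d)^T with W the pseudo-inverse of the learned basis A.

    A : (q, m) learned basis, X_* : (n_samples, p) preprocessed data, U_d : (p, q).
    """
    W = np.linalg.pinv(A)                              # W = A^+ (pseudo-inverse)
    B_train = W @ (X_train @ U_d).T                    # formula (14)
    B_test = W @ (X_test @ U_d).T                      # formula (15)
    return B_train, B_test

def nearest_neighbour(B_train, train_labels, B_test):
    """Euclidean-distance classifier on the coefficient columns."""
    preds = []
    for j in range(B_test.shape[1]):
        dists = np.linalg.norm(B_train - B_test[:, [j]], axis=0)
        preds.append(train_labels[np.argmin(dists)])
    return np.array(preds)

# Illustrative shapes only
rng = np.random.default_rng(0)
A = rng.random((40, 64)); U_d = rng.random((64, 40))
X_tr = rng.random((6, 64)); X_te = rng.random((2, 64))
B_tr, B_te = recognition_features(A, X_tr, X_te, U_d)
print(nearest_neighbour(B_tr, np.array([0, 0, 0, 1, 1, 1]), B_te))
```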
To demonstrate the effectiveness of the DCB-NNSC algorithm for image feature extraction, recognition experiments were also carried out with palmprint feature extraction methods based on PCA, FastICA and Hoyer's NNSC, using the same classifier, palmprint training set and test set. Table 1 lists the palmprint recognition performance obtained with the different feature extraction methods under the minimum-distance classifier, the BP classifier and the RBPNN classifier. As can be seen from the table, when the same classifier is used, the feature extraction method based on DCB-NNSC proposed by the present invention gives the best recognition performance, followed by NNSC and FastICA (using the feature coefficients as independent patterns), while PCA performs worst; when the palmprint features obtained by the different feature extraction methods are verified, the RBPNN classifier performs best among the three classifiers, BP is second, and the minimum-distance classifier performs worst.
Table 1. Comparison of palmprint recognition performance of different feature extraction methods under different classifiers (number of principal components D = 40). (The table values are given only as an image in the original publication.)
At the same time, to further illustrate the effectiveness of the invention for image feature extraction, two statistical performance indices commonly used in recognition systems, the false rejection rate (FRR) and the false acceptance rate (FAR), were also used in the palmprint recognition experiments to verify the efficiency of the DCB-NNSC feature recognition method; the rate at which the two are equal is called the equal error rate (EER). If a palmprint test image and a training image come from the same palm, the match between them is called a genuine match (Genuine Matching); if they come from different palms, the match is called an imposter match (Imposter Matching). Matching produces a matching score in the range [0, 1]; if the score exceeds a given threshold, the verification is accepted, otherwise it is rejected. Fig. 10 shows the distribution of matching scores for genuine matching and imposter matching.
The computation of FAR and FRR is very simple. Let IMS (Imposter Matching Score) denote the imposter matching count, NIA (Number of Imposter Accesses) the total number of imposter accesses, GMS (Genuine Matching Score) the genuine matching count, and NGA (Number of Genuine Accesses) the total number of genuine accesses. Then FAR and FRR are computed as
FAR = \frac{IMS}{NIA} \times 100\%    (16)
FRR = \frac{GMS}{NGA} \times 100\%    (17)
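A sketch of how FAR and FRR can be evaluated at a decision threshold from the two score populations is given below; interpreting (16)–(17) as the fraction of imposter scores accepted and the fraction of genuine scores rejected is the standard reading and an assumption of this sketch:

```python
import numpy as np

def far_frr(genuine_scores, imposter_scores, threshold):
    """FAR and FRR (in percent) at a given decision threshold (cf. (16)-(17)).

    A match is accepted when its score exceeds the threshold, so FAR is the
    fraction of imposter scores accepted and FRR the fraction of genuine
    scores rejected.
    """
    far = np.mean(np.asarray(imposter_scores) > threshold) * 100.0
    frr = np.mean(np.asarray(genuine_scores) <= threshold) * 100.0
    return far, frr

# Illustrative matching scores in [0, 1]
genuine = np.array([0.81, 0.75, 0.66, 0.58, 0.90])
imposter = np.array([0.30, 0.44, 0.61, 0.25, 0.12])
print(far_frr(genuine, imposter, threshold=0.62))
```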
The FAR and FRR values of the method of the invention at different thresholds are listed in Table 2. It can be seen that when FAR is 4.5 × 10⁻⁵ %, FRR is about 1.67%, and when the threshold is 0.620, the EER is about 0.18%.
Table 3 compares the palmprint feature recognition results of the method of the invention with those of other feature recognition methods. When FAR is 4.5 × 10⁻⁵ %, the FRR of PCA is 18.26% with an EER of 0.982%; the FRR of FastICA (with independent feature coefficients) is 14.34% with an EER of 0.876%; the FRR of standard sparse coding (Standard SC) is 5.864% with an EER of 0.632%; the FRR of Hoyer's NNSC algorithm is 3.562% with an EER of 0.587%; and the FRR based on DCB-NNSC is 1.673% with an EER of about 0.17%. Clearly, the DCB-NNSC-based method of the present invention outperforms all of the above-mentioned methods.
Table 2. FAR and FRR values of the feature recognition method of the invention

Threshold   FAR (%)     FRR (%)
0.445       9.472       0
0.577       8.826       0.028
0.578       8.127       0.037
0.590       3.232       0.071
0.600       1.462       0.094
0.605       0.815       0.106
0.612       0.312       0.147
0.615       0.256       0.162
0.620       0.183       0.179
0.635       0.015       0.396
0.640       3.5×10⁻³    0.767
0.650       4×10⁻⁴      0.985
0.660       4.5×10⁻⁵    1.673
0.670       0           1.974
Table 3. FAR and FRR values of different algorithms under the same threshold. (The table values are given only as an image in the original publication.)

Claims (6)

1. An image feature extraction method based on dispersion-constrained non-negative sparse coding, characterized by comprising the following processing steps:
1) construct the image data training set, including image block partitioning, image data dimensionality reduction, non-negative processing of the data, and initialization of the feature basis matrix A of the dispersion-constraint-based non-negative sparse coding DCB-NNSC algorithm;
2) determine the sparse penalty function: the normal inverse Gaussian NIG density model adapted to the image characteristics is adopted as the prior sparse density distribution, and its negative logarithm is taken as the sparse penalty, i.e. f(·) = −log[p(·)]; the NIG model of a random variable y is
p(y) = \frac{\delta\sqrt{\alpha}}{\sqrt{2\pi}} \exp\left(\delta\sqrt{\alpha^2 - \beta^2}\right) \exp\left[\beta(y - \mu) - \alpha\sqrt{(y - \mu)^2 + \delta^2}\right] \left[(y - \mu)^2 + \delta^2\right]^{-3/4}    (1)
where the parameter α controls the steepness of the density distribution; the parameter β controls the skewness of the density: for β < 0 the density is skewed to the left, for β > 0 it is skewed to the right, and β = 0 means the density is symmetric about the location parameter μ; the parameter δ is a positive scale parameter that rescales α and β (α → αδ, β → βδ) so that the shape of the density about the location parameter μ is invariant to the local scale; the four parameters [α, β, μ, δ]^T are computed from the first four cumulants estimated from the sample data;
3) determine the within-class and between-class dispersion log-ratio constraint of the sparse feature coefficients:
let the feature coefficient data set be S = [S_1, S_2, S_3, ..., S_k, ..., S_C], where S_k is the feature coefficient sample set of the k-th class, k = 1, 2, 3, ..., C, and C is the number of classes; each S_k is a matrix and contains n_k data points; the within-class scatter matrix S_W and the between-class scatter matrix S_B of the feature coefficients are then obtained as
S_W = \sum_{k=1}^{C} \sum_{s \in S_k} (s - m_k)(s - m_k)^T    (2)
where m_k = \frac{1}{n_k} \sum_{s \in S_k} s is the mean vector of the k-th class and s runs over the n_k data points of the k-th class feature coefficient sample set;
S_B = \sum_{k=1}^{C} n_k (m_k - m)(m_k - m)^T    (3)
where m = \frac{1}{n} \sum_{k=1}^{C} \sum_{s \in S_k} s is the mean vector of all samples of all classes and n is the total number of samples of all classes;
the ratio between the within-class and between-class dispersions is
S_W / S_B = \frac{\sum_{k=1}^{C} \sum_{s \in S_k} (s - m_k)(s - m_k)^T}{\sum_{k=1}^{C} n_k (m_k - m)(m_k - m)^T}    (4)
the within-class dispersion S_W and the between-class dispersion S_B reflect the second-order statistics of the data set, and both are global descriptions of it; a smaller S_W means the data within each class are more tightly clustered, and a larger S_B means the classes are more widely separated, so the smaller the ratio between the within-class and between-class dispersions, the better the within-class aggregation;
when minimizing the objective function of the DCB-NNSC algorithm, to simplify the derivative computation the logarithm of the ratio of S_W to S_B is used as the class-information constraint term:
D = \ln(S_W / S_B) = \ln\left[\frac{\sum_{k=1}^{C} \sum_{s \in S_k} (s - m_k)(s - m_k)^T}{\sum_{k=1}^{C} n_k (m_k - m)(m_k - m)^T}\right]    (5)
4) modelling of the dispersion-constrained non-negative sparse coding DCB-NNSC objective function:
three factors are combined into the objective function of dispersion-constrained non-negative sparse coding: minimum error of the image reconstructed from the extracted features, an adaptive sparse distribution of the feature coefficients, and a minimum log-ratio of the within-class dispersion S_W to the between-class dispersion S_B:
J(s) = \frac{1}{2}\|X - AS\|^2 + \lambda \sum_i f\left(\frac{s_i}{\sigma_i}\right) + \ln\left(\frac{S_W}{S_B}\right)    (6)
subject to the constraints X(x, y) ≥ 0, a_i ≥ 0, s_i ≥ 0 and ||s_i|| = 1, where σ_i = √(E{s_i²}); the parameter λ is a positive constant; X = (X_1, X_2, ..., X_{2n})^T denotes the preprocessed natural image data, of size 2n × 5LN, where N is the number of natural images, L is the number of sub-image blocks into which each natural image is divided, and 2n is the dimension of a sub-image block; A = [a_1, a_2, ..., a_i, ..., a_m] is the 2n × m feature basis matrix, where a_i is the i-th column vector of A; S = [s_1, s_2, ..., s_i, ..., s_m]^T is the m × 5LN feature coefficient matrix, where s_i is the i-th row vector of S; the sparse penalty is f(·) = −log[p(·)], with p(·) computed from formula (1), and the dispersion log-ratio term ln(S_W/S_B) is computed as in formula (5);
5) update rules for the feature coefficients S and the feature basis A:
a gradient algorithm is used to update the feature coefficients S and the feature basis A alternately: first A is held fixed and S is updated by the gradient algorithm, then S is held fixed and A is updated; the gradient with respect to A is
\nabla J(a_i) = \frac{\partial J(A, S)}{\partial a_i} = -\left[X(x, y) - \sum_{i=1}^{n} a_i(x, y) s_i\right] s_i^T + \gamma a_i    (7)
and the update of the feature coefficients S uses
\nabla J(s_i) = \frac{\partial J(A, S)}{\partial s_i} = -a_i^T\left[X(x, y) - \sum_{i=1}^{n} a_i(x, y) s_i\right] + \frac{\lambda}{\sigma_i} f'\left(\frac{s_i}{\sigma_i}\right) + 2\lambda_2\left[\frac{s_i - m_k}{S_W} - \frac{m_k - m}{S_B}\right]    (8)
where σ_i = √(E{s_i²}) and the sparse penalty f(s_i/σ_i) = −log[p(s_i/σ_i)] is computed from formula (1);
6) recognition:
after image preprocessing, the training and test images are each optimized with the DCB-NNSC algorithm to obtain the training feature matrix, the training sparse coefficient matrix, the test feature matrix and the test sparse coefficient matrix; classification is then realized with the radial basis probabilistic neural network RBPNN model, yielding the recognition result.
2. The image feature extraction method based on dispersion-constrained non-negative sparse coding of claim 1, characterized in that the image block partitioning in step 1) is as follows: first, N natural images with the same image attributes are chosen; each image is decomposed with the BEMD bidimensional empirical mode decomposition method into six layers according to frequency, the residual component is discarded, and only the five IMF components IMF1–IMF5 of the BEMD decomposition are used for each image; each IMF image is then partitioned into L sub-image blocks of size p × p, giving an input image matrix of size p² × 5LN.
3. The image feature extraction method based on dispersion-constrained non-negative sparse coding of claim 1, characterized in that the image dimensionality reduction in step 1) uses 2D-PCA and is divided into the following steps:
1. compute the covariance matrix of the sub-image data set: the training sub-image data are first mean-centred, and the covariance matrix G is then computed as G = \frac{1}{M} \sum_{j=1}^{M} (x_j - \bar{x})^T (x_j - \bar{x}), j = 1, 2, ..., M, where x_j is a training sub-image block, M is the number of training sub-image blocks, and \bar{x} = \frac{1}{M} \sum_{j=1}^{M} x_j is the mean of all sub-image blocks;
2. compute the number of principal components: let U be the eigenvector matrix and D the diagonal matrix of eigenvalues, so that G × U = U × D; the eigenvectors corresponding to the d largest eigenvalues form the matrix U_d = [u_1, u_2, ..., u_d], and the principal components of a training sample are computed as Y_j = x_j U_d.
4. The image feature extraction method based on dispersion-constrained non-negative sparse coding of claim 1, characterized in that the non-negative processing of the data in step 1) is as follows: in the dimension-reduced image data set, all positive elements form the matrix X_on, while all negative elements and zeros, after taking absolute values, form the matrix X_off; X_on and X_off are combined into a non-negative data matrix X = (X_on; X_off), whose size is 2n × 5LN.
5. The image feature extraction method based on dispersion-constrained non-negative sparse coding of claim 1, characterized in that the initialization of the feature basis matrix A of the DCB-NNSC algorithm in step 1) is as follows: a 2D-Gabor wavelet basis with 8 orientations and 8 frequency scales per orientation is used to initialize the feature basis matrix A of the DCB-NNSC algorithm.
6. The image feature extraction method based on dispersion-constrained non-negative sparse coding of claim 1, characterized in that the RBPNN neural network model in step 6) is a four-layer neural network model comprising an input layer, two hidden layers and an output layer, wherein the first hidden layer is mainly composed of the centre vectors of the pattern classes in the sample space and is structurally equivalent to the first layer of an RBFNN; the second hidden layer is equivalent to the hidden layer of a PNN and its nodes perform a summation operation; the third layer is equivalent to the second layer of an RBFNN, with output nodes that are linear as in an RBFNN; and the fourth layer is the output layer.
CN 201010017290 2010-01-08 2010-01-08 Method for extracting characteristic of natural image based on dispersion-constrained non-negative sparse coding Expired - Fee Related CN101866421B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010017290 CN101866421B (en) 2010-01-08 2010-01-08 Method for extracting characteristic of natural image based on dispersion-constrained non-negative sparse coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201010017290 CN101866421B (en) 2010-01-08 2010-01-08 Method for extracting characteristic of natural image based on dispersion-constrained non-negative sparse coding

Publications (2)

Publication Number Publication Date
CN101866421A CN101866421A (en) 2010-10-20
CN101866421B true CN101866421B (en) 2013-05-01

Family

ID=42958142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010017290 Expired - Fee Related CN101866421B (en) 2010-01-08 2010-01-08 Method for extracting characteristic of natural image based on dispersion-constrained non-negative sparse coding

Country Status (1)

Country Link
CN (1) CN101866421B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968635B (en) * 2012-11-23 2015-05-20 清华大学 Image visual characteristic extraction method based on sparse coding
CN103489009B (en) * 2013-09-17 2016-08-17 北方信息控制集团有限公司 Mode identification method based on adaptive correction neutral net
CN103544683B (en) * 2013-10-12 2016-04-20 南京理工大学 A kind of night vision image of view-based access control model cortex highlights contour extraction method
CN103617637B (en) * 2013-12-16 2014-12-10 中国人民解放军国防科学技术大学 Dictionary learning-based low-illumination motion detection method
CN103679662B (en) * 2013-12-25 2016-05-25 苏州市职业大学 Based on the right super-resolution image restoration method of classification priori non-negative sparse coding dictionary
CN103942526B (en) * 2014-01-17 2017-02-08 山东省科学院情报研究所 Linear feature extraction method for discrete data point set
US9058517B1 (en) * 2014-04-16 2015-06-16 I.R.I.S. Pattern recognition system and method using Gabor functions
CN106156775B (en) * 2015-03-31 2020-04-03 日本电气株式会社 Video-based human body feature extraction method, human body identification method and device
US10091506B2 (en) * 2015-06-11 2018-10-02 Sony Corporation Data-charge phase data compression architecture
CN105069741B (en) * 2015-09-07 2018-01-30 值得看云技术有限公司 The non-negative hidden feature deriving means of one kind damage image and method
CN105224944B (en) * 2015-09-08 2018-10-30 西安交通大学 Image characteristic extracting method based on the sparse non-negative sparse coding of code book block
CN105335732B (en) * 2015-11-17 2018-08-21 西安电子科技大学 Based on piecemeal and differentiate that Non-negative Matrix Factorization blocks face identification method
CN105718883B (en) * 2016-01-19 2019-01-15 中国人民解放军国防科技大学 Image visual attribute mining method based on sparse factor analysis
CN106897740A (en) * 2017-02-17 2017-06-27 重庆邮电大学 EEMD DFA feature extracting methods under Human bodys' response system based on inertial sensor
CN107316065B (en) * 2017-06-26 2021-03-02 刘艳 Sparse feature extraction and classification method based on fractional subspace model
CN110348428B (en) * 2017-11-01 2023-03-24 腾讯科技(深圳)有限公司 Fundus image classification method and device and computer-readable storage medium
CN107782748B (en) * 2017-11-20 2023-12-19 福建技术师范学院 Microwave thermal imaging nondestructive detection system and detection method based on matrix decomposition
CN108830806B (en) * 2018-05-29 2020-12-18 河南科技大学 Sensitivity of receptive field model and dynamic regulation and control method of model parameters
CN108962229B (en) * 2018-07-26 2020-11-13 汕头大学 Single-channel and unsupervised target speaker voice extraction method
CN111274855B (en) * 2018-12-05 2024-03-26 北京猎户星空科技有限公司 Image processing method, image processing device, machine learning model training method and machine learning model training device
CN109406148B (en) * 2018-12-11 2020-06-05 中原工学院 Rolling bearing fault feature extraction method based on improved quantum evolution algorithm
CN111461323B (en) * 2020-03-13 2022-07-29 中国科学技术大学 Image identification method and device
CN113239741A (en) * 2021-04-23 2021-08-10 中国计量大学 Face recognition method based on memory bank non-negative matrix factorization

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101594314A (en) * 2008-05-30 2009-12-02 电子科技大学 A kind of spam image-recognizing method and device based on high-order autocorrelation characteristic
CN101510254A (en) * 2009-03-25 2009-08-19 北京中星微电子有限公司 Method for updating gender classifier in image analysis and the gender classifier
CN101515285A (en) * 2009-04-03 2009-08-26 东南大学 Image retrieval and filter apparatus based on image wavelet feature and method thereof

Also Published As

Publication number Publication date
CN101866421A (en) 2010-10-20

Similar Documents

Publication Publication Date Title
CN101866421B (en) Method for extracting characteristic of natural image based on dispersion-constrained non-negative sparse coding
CN101419671B (en) Face gender identification method based on fuzzy support vector machine
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN106934359A (en) Various visual angles gait recognition method and system based on high order tensor sub-space learning
CN104915676A (en) Deep-level feature learning and watershed-based synthetic aperture radar (SAR) image classification method
CN103927531A (en) Human face recognition method based on local binary value and PSO BP neural network
CN101847210A (en) Multi-group image classification method based on two-dimensional empirical modal decomposition and wavelet denoising
CN104732244A (en) Wavelet transform, multi-strategy PSO (particle swarm optimization) and SVM (support vector machine) integrated based remote sensing image classification method
CN101916369B (en) Face recognition method based on kernel nearest subspace
CN103164689A (en) Face recognition method and face recognition system
Sheetlani et al. Fingerprint based automatic human gender identification
CN103336942A (en) Traditional Chinese painting identification method based on Radon BEMD (bidimensional empirical mode decomposition) transformation
Shekar et al. Grid structured morphological pattern spectrum for off-line signature verification
CN106650766A (en) Inherent feature analysis based three-dimensional body waveform classification method
Khalifa et al. Wavelet, gabor filters and co-occurrence matrix for palmprint verification
CN112132104B (en) ISAR ship target image domain enhancement identification method based on loop generation countermeasure network
CN103942545A (en) Method and device for identifying faces based on bidirectional compressed data space dimension reduction
CN105930788A (en) Non-downsampling contour wave and PCA (principal component analysis) combining human face recognition method
Tallapragada et al. Iris recognition based on combined feature of GLCM and wavelet transform
CN103345739B (en) A kind of high-resolution remote sensing image building area index calculation method based on texture
CN102521603B (en) Method for classifying hyperspectral images based on conditional random field
Jun-bin et al. Eyebrows identity authentication based on wavelet transform and support vector machines
Thamizharasi Performance analysis of face recognition by combining multiscale techniques and homomorphic filter using fuzzy K nearest neighbour classifier
CN103093184A (en) Face identification method of two-dimensional principal component analysis based on column vector
Pokhriyal et al. MERIT: Minutiae Extraction using Rotation Invariant Thinning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Free format text: FORMER OWNER: SHANG LI

Effective date: 20140106

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20140106

Address after: 215104 International Education Park, 106 Da Neng Road, Jiangsu, Suzhou

Patentee after: Suzhou vocational University

Address before: 215104 International Education Park, 106 Da Neng Road, Jiangsu, Suzhou

Patentee before: Suzhou vocational University

Patentee before: Shang Li

C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130501

Termination date: 20140108