CN100421127C - Pattern characteristic extraction method and device for the same

Publication number: CN100421127C (application CNB038090325A)
Authority: CN (China)
Legal status: Expired - Lifetime
Other versions: CN1802666A (Chinese)
Inventor: 龟井俊男 (Toshio Kamei)
Assignee: NEC Corp
Publication of application: CN1802666A
Publication of grant: CN100421127C


Abstract

An input pattern feature quantity is decomposed into element feature vectors. For each of the feature vectors, a discriminant matrix obtained by discriminant analysis is prepared in advance. Each feature vector is projected into the discriminant space defined by its discriminant matrix, and its dimensionality is compressed. The resulting feature vectors are combined and projected again by a discriminant matrix to compute the output feature vector, thereby suppressing the loss of feature quantities effective for discrimination and performing effective feature extraction.

Description

Pattern feature extraction method and apparatus for carrying out the method
Background art
In the field of pattern recognition at present, the similarity between patterns such as characters or human faces can be measured by extracting a feature vector from an input pattern, extracting from it the features effective for discrimination, and comparing the feature vectors obtained from the respective patterns.
For example, in the case of face verification, a face image is normalized using the positions of the eyes and the like, the pixel values are transformed by raster scanning into a one-dimensional feature vector, and this feature vector is used as the input for principal component analysis (PCA) (non-patent reference 1: Moghaddam et al., "Probabilistic Visual Learning for Object Representation", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 696-710, 1997), or linear discriminant analysis is applied to the principal components of the feature vector (non-patent reference 2: W. Zhao et al., "Discriminant Analysis of Principal Components for Face Recognition", Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, pp. 336-341, 1998). The dimensionality is thereby reduced, and the resulting feature vector is used for face authentication and the like.
In these methods, the within-class covariance matrix and the between-class covariance matrix are computed from prepared training samples, and basis vectors are obtained as the solution of an eigenvalue problem on these covariance matrices. The features of the input feature vector are then transformed using these basis vectors.
Linear discriminant analysis is described in detail below.
Linear discriminant analysis is a method of obtaining a transformation matrix W that maximizes the ratio of the between-class covariance matrix S_B to the within-class covariance matrix S_W of the M-dimensional vector y (= W^T x) obtained by transforming an N-dimensional feature vector x with the transformation matrix W. The criterion, equation (1), is defined as follows:

J(W) = |S_B| / |S_W| = |W^T Σ_B W| / |W^T Σ_W W|    (1)
In this equation, the within-class covariance matrix Σ_W and the between-class covariance matrix Σ_B are defined over the set of feature vectors x in the training samples, where Σ_i is the covariance matrix of class ω_i (i = 1, 2, ..., C; with n_i data samples each):

Σ_W = Σ_{i=1}^{C} P(ω_i) Σ_i = Σ_{i=1}^{C} P(ω_i) (1/n_i) Σ_{x∈X_i} (x - m_i)(x - m_i)^T    (2)

Σ_B = Σ_{i=1}^{C} P(ω_i) (m_i - m)(m_i - m)^T    (3)

where m_i is the mean vector of class ω_i (equation (4)) and m is the mean vector of all x (equation (5)):

m_i = (1/n_i) Σ_{x∈X_i} x    (4)

m = Σ_{i=1}^{C} P(ω_i) m_i    (5)
If the prior probability P(ω_i) of each class ω_i is to reflect the sample count n_i, it suffices to set P(ω_i) = n_i / n. If all the priors are assumed to be equal, it suffices to set P(ω_i) = 1/C.
The transformation matrix W that maximizes equation (1) is obtained as the set of normalized eigenvectors, taken as column vectors w_i, that correspond to the M largest eigenvalues of the eigenvalue problem in equation (6). The transformation matrix W obtained in this way is called a discriminant matrix.

Σ_B w_i = λ_i Σ_W w_i    (6)
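The computation of a discriminant matrix from equations (2), (3), and (6) can be sketched as follows (a minimal illustration using NumPy/SciPy, not the patented implementation; the small ridge added to Σ_W is an assumption to keep the generalized eigenproblem solvable when training data is scarce):

```python
import numpy as np
from scipy.linalg import eigh

def discriminant_matrix(X, labels, M):
    """Compute Sigma_W (eq. 2) and Sigma_B (eq. 3) with priors
    P(w_i) = n_i / n, then solve the generalized eigenvalue problem
    Sigma_B w = lambda Sigma_W w (eq. 6) and keep the eigenvectors
    for the M largest eigenvalues as the columns of W."""
    n, N = X.shape
    m = X.mean(axis=0)
    Sw = np.zeros((N, N))
    Sb = np.zeros((N, N))
    for c in np.unique(labels):
        Xi = X[labels == c]
        P = len(Xi) / n                                  # P(w_i) = n_i / n
        mi = Xi.mean(axis=0)
        Sw += P * np.cov(Xi, rowvar=False, bias=True)    # eq. (2)
        Sb += P * np.outer(mi - m, mi - m)               # eq. (3)
    # small ridge keeps Sigma_W positive definite (assumption for this sketch)
    vals, vecs = eigh(Sb, Sw + 1e-8 * np.eye(N))
    order = np.argsort(vals)[::-1][:M]
    return vecs[:, order]

# toy data: two classes separated only along the third coordinate
rng = np.random.default_rng(0)
Xa = rng.normal(0, 1, (50, 3)) + np.array([0.0, 0.0, -3.0])
Xb = rng.normal(0, 1, (50, 3)) + np.array([0.0, 0.0, 3.0])
X = np.vstack([Xa, Xb])
labels = np.array([0] * 50 + [1] * 50)
W = discriminant_matrix(X, labels, M=1)
```

As expected, the top discriminant axis is dominated by the third coordinate, along which the class means differ.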
Note that existing linear discriminant analysis methods are described, for example, in non-patent reference 5: Richard O. Duda et al., "Pattern Classification" (supervised translation by Morio Onoue, Shingijutsu Communications, 2001, pp. 113-122).
Suppose now that the dimensionality of the input feature vector x is very large. In this case, if only a small amount of training data is used, Σ_W becomes singular. As a result, the eigenvalue problem of equation (6) cannot be solved by ordinary methods.
As described in patent reference 1 (Japanese Patent Laid-Open No. 7-296169), the higher-order components of a covariance matrix, which have small eigenvalues, are known to contain large parameter estimation errors, which is detrimental to recognition accuracy.
According to the above-mentioned article by W. Zhao et al., principal component analysis is applied to the input feature vector, and discriminant analysis is applied to the principal components with large eigenvalues. More specifically, as shown in Fig. 2, the input feature vector is projected using the basis matrix obtained by principal component analysis to extract the principal components, and the features effective for discrimination are then extracted by projecting the principal components using, as a basis matrix, the discriminant matrix obtained by discriminant analysis.
According to the calculation scheme for the feature transformation matrix described in patent reference 1 (Japanese Patent Laid-Open No. 7-296169), the dimensionality is reduced by deleting the higher-order eigenvalues and corresponding eigenvectors of the total covariance matrix Σ_T, and discriminant analysis is applied in the reduced feature space. Deleting the higher-order eigenvalues and eigenvectors of the total covariance matrix is equivalent to performing discriminant analysis only in the principal component space with large eigenvalues obtained by principal component analysis. In this sense, this technique, like the method of W. Zhao et al., provides stable parameter estimation by removing the higher-order features.
However, principal component analysis using the total covariance matrix Σ_T does no more than successively select, as the orthogonal axes of the feature space, the directions in which large variance appears. For this reason, feature axes that are effective for pattern recognition are lost.
Suppose that the feature vector x consists of three elements (x = (x_1, x_2, x_3)^T), where x_1 and x_2 are features that have large variance but are irrelevant to pattern recognition, and x_3 is a feature that is effective for pattern recognition but has small variance (its Fisher ratio, i.e. between-class variance divided by within-class variance, is large, but its variance itself is small compared with those of x_1 and x_2). In this case, if principal component analysis is performed and only two dimensions are selected, the feature space related to x_1 and x_2 is selected, and the contribution of x_3, which is effective for discrimination, is ignored.
This problem is explained below with reference to the accompanying drawings. Suppose that Fig. 3A shows the data distribution viewed from a direction almost parallel to the plane defined by x_1 and x_2, where the black and white circles represent data points belonging to different classes. In the view of Fig. 3A, which includes the x_3 axis, the black and white circles can be distinguished. However, as shown in Fig. 3B, when the data are viewed along the x_3 feature axis perpendicular to that plane, the black and white circles cannot be separated from each other. If the axes with large variance are selected, the plane defined by x_1 and x_2 is selected as the feature space, which is equivalent to the view of Fig. 3B. Discrimination therefore becomes difficult.
In short, the prior art suffers from the unavoidable problem that principal component analysis deletes the subspaces of the (total) covariance matrix that have small eigenvalues.
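The three-element example above can be reproduced numerically (a minimal sketch with synthetic data; the specific variances and class offsets are illustrative assumptions): x_1 and x_2 have large variance but identical distributions in both classes, while x_3 has small variance but separates the classes. The two-dimensional subspace selected by PCA is essentially the x_1-x_2 plane, so the discriminative axis is discarded.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
labels = rng.integers(0, 2, n)
# x1, x2: large variance, identically distributed in both classes
x12 = rng.normal(0.0, 10.0, (n, 2))
# x3: small variance, but the class means differ -> high Fisher ratio
x3 = rng.normal(0.0, 0.5, n) + np.where(labels == 1, 1.5, -1.5)
X = np.column_stack([x12, x3])

# PCA: the two eigenvectors of the total covariance matrix
# with the largest eigenvalues
vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
top2 = vecs[:, np.argsort(vals)[::-1][:2]]

# the retained subspace is essentially the x1-x2 plane: the x3
# component of both principal axes is close to zero, so the feature
# that separates the classes has been thrown away
x3_weight = np.abs(top2[2, :]).max()
```

Inspecting `x3_weight` shows it is near zero, even though the Fisher ratio of x_3 is far larger than that of x_1 or x_2.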
Summary of the invention
The present invention has been made in consideration of the above problems in the prior art, and its object is to provide a feature vector transformation technique which, when extracting features effective for discrimination from an input pattern feature vector while compressing the feature dimensionality, suppresses the loss of feature quantities effective for discrimination and performs effective feature extraction.
A pattern feature extraction method according to the present invention is characterized by comprising: expressing a pattern feature, for example a feature of an image, by a plurality of feature vectors x_i; obtaining, for each of the feature vectors x_i, a discriminant matrix W_i by linear discriminant analysis; obtaining vectors y_i by linearly transforming the vectors x_i using the discriminant matrices W_i, arranging the vectors y_i into a feature vector y, and obtaining in advance a discriminant matrix W_T by applying linear discriminant analysis to the feature vector y; and performing the linear transformation specified by the discriminant matrices W_i and the discriminant matrix W_T.
This pattern feature extraction method is characterized in that the step of performing the linear transformation includes compressing the feature dimensionality by transforming the feature vector of the pattern.
In addition, the method is characterized in that the expressing step includes dividing the pattern feature into a plurality of feature vectors x_i; the step of obtaining the discriminant matrix W_T includes calculating the feature vectors y_i by the linear transformation y_i = W_i^T x_i using the discriminant matrices W_i; and the step of performing the linear transformation includes obtaining a feature vector z by the linear transformation z = W_T^T y applied, using the discriminant matrix W_T, to the vector y obtained by combining the calculated feature vectors y_i, thereby compressing the dimensionality of the pattern feature by computing the feature vector z.
In addition, the method is characterized by further comprising calculating in advance the matrix W specified by the discriminant matrices W_i and W_T, wherein the step of performing the linear transformation includes calculating the feature vector z by the linear transformation z = W^T x applied, using the matrix W, to the feature vector x obtained by combining the input feature vectors x_i, thereby compressing the dimensionality of the pattern feature.
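The equivalence between the stepwise transformation (first by each W_i, then by W_T) and the single merged matrix W can be checked numerically (a minimal sketch; the matrices here are random stand-ins for trained discriminant matrices, and the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N1, N2, M1, M2, L = 8, 6, 4, 3, 5   # toy dimensions (hypothetical)
W1 = rng.normal(size=(N1, M1))      # first-stage matrices (random stand-ins)
W2 = rng.normal(size=(N2, M2))
W3 = rng.normal(size=(M1 + M2, L))  # second-stage discriminant matrix

x1, x2 = rng.normal(size=N1), rng.normal(size=N2)

# stepwise computation: y_i = W_i^T x_i, then z = W_T^T y
y = np.concatenate([W1.T @ x1, W2.T @ x2])
z_step = W3.T @ y

# merged computation: W = block_diag(W1, W2) @ W3, then z = W^T x
blk = np.block([[W1, np.zeros((N1, M2))],
                [np.zeros((N2, M1)), W2]])
W = blk @ W3
z_merged = W.T @ np.concatenate([x1, x2])
```

Both routes give the same z, which is why the matrix W can be precomputed once and applied in a single projection.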
The above pattern feature extraction method is characterized in that the expressing step includes extracting feature vectors x_i formed from the pixel values obtained at a plurality of sample points in preset sample point sets S_i of an image; and the step of performing the linear transformation includes extracting a feature quantity from the image by transforming the feature vector of each sample point set.
This pattern feature extraction method is characterized in that the step of obtaining the discriminant matrix W_T in advance includes calculating the feature vectors y_i by the linear transformation y_i = W_i^T x_i applied, using the discriminant matrices W_i, to the feature vectors x_i formed from the sample points; and the step of performing the linear transformation includes obtaining a feature vector z by the linear transformation z = W_T^T y applied, using the discriminant matrix W_T, to the vector y obtained by combining the calculated feature vectors y_i, thereby extracting a feature quantity from the image by computing the feature vector z.
In addition, the method is characterized by further comprising calculating in advance the matrix W specified by the discriminant matrices W_i and W_T, wherein the step of performing the linear transformation includes obtaining a feature vector z by the linear transformation z = W^T x applied, using the matrix W, to the feature vector x obtained by combining the feature vectors x_i, thereby extracting a feature quantity from the image by computing the feature vector z.
The above pattern feature extraction method is characterized in that the expressing step includes dividing the image into a plurality of preset local regions and expressing, for each of the local regions, the feature quantity as an extracted feature vector x_i; and the step of performing the linear transformation includes extracting a feature quantity from the image by transforming the feature vectors of the local regions.
This pattern feature extraction method is characterized in that the step of obtaining the discriminant matrix W_T in advance includes calculating the feature vectors y_i by the linear transformation y_i = W_i^T x_i applied to the feature vectors x_i using the discriminant matrices W_i; and the step of performing the linear transformation includes obtaining a feature vector z by the linear transformation z = W_T^T y applied, using the discriminant matrix W_T, to the vector y obtained by combining the calculated feature vectors y_i, thereby extracting a feature quantity from the image by computing the feature vector z.
In addition, the method is characterized by further comprising calculating in advance the matrix W specified by the discriminant matrices W_i and W_T, wherein the step of performing the linear transformation includes obtaining a feature vector z by the linear transformation z = W^T x applied, using the matrix W, to the feature vector x obtained by combining the input feature vectors x_i, thereby extracting a feature quantity from the image by computing the feature vector z.
The above pattern feature extraction method is characterized by further comprising performing a two-dimensional Fourier transform of the image, wherein the expressing step includes extracting the real and imaginary parts of the two-dimensional Fourier transform as a feature vector x_1, and calculating the power spectrum of the two-dimensional Fourier transform and extracting the power spectrum as a feature vector x_2; and in the step of performing the linear transformation, a feature quantity is extracted from the image by transforming the feature vectors.
This pattern feature extraction method is characterized in that, in the step of performing the linear transformation, the feature vector x_1 corresponding to the real and imaginary parts of the Fourier components and the feature vector x_2 corresponding to the power spectrum of the Fourier components are transformed, in a manner that achieves dimensionality reduction, by the linear transformation specified by the discriminant matrices W_i of the principal components of the feature vectors x_i and the discriminant matrix W_T, thereby extracting a feature quantity from the image.
This pattern feature extraction method is characterized by further comprising: calculating the discriminant features of the principal components of the feature vector x_1, formed from the real and imaginary parts of the Fourier transform, by the linear transformation y_1 = Φ_1^T x_1 using the basis matrix Φ_1 (= (W_1^T Ψ_1^T)^T) represented by the transformation matrix Ψ_1 for the principal components of x_1 and the discriminant matrix W_1 corresponding to those principal components, and normalizing the size of the obtained feature vector y_1 to a predetermined dimension; calculating the discriminant features of the principal components of the feature vector x_2, formed from the power spectrum of the Fourier transform, using the basis matrix Φ_2 (= (W_2^T Ψ_2^T)^T) represented by the transformation matrix Ψ_2 for the principal components of x_2 and the discriminant matrix W_2 corresponding to those principal components, and normalizing the size of the obtained feature vector y_2 to a predetermined dimension; and obtaining a feature vector z by the linear transformation z = W_T^T y applied, using the discriminant matrix W_T, to the feature vector y obtained by combining the two feature vectors y_1 and y_2, thereby extracting a feature quantity from the image by computing the feature vector z.
This pattern feature extraction method is characterized in that the expressing step further includes dividing the image into a plurality of regions and, in the step of extracting the feature vector x_2, calculating the two-dimensional Fourier power spectrum in each of the divided regions.
In addition, the method is characterized in that, in the dividing step, the image is divided in multiple ways into regions of different sizes.
In addition, the method is characterized by further comprising performing feature extraction to extract effective feature quantities, and reducing the feature dimensionality, by applying kernel discriminant analysis to the obtained two-dimensional Fourier power spectra.
The method is characterized by further comprising reducing the feature dimensionality by a linear transformation using a discriminant matrix obtained in advance by applying linear discriminant analysis to the obtained two-dimensional Fourier power spectra.
The method is characterized in that the step of obtaining the discriminant matrices W_i in advance includes obtaining the discriminant matrices W_i by applying linear discriminant analysis to the principal components of the feature vectors x_i (i = 1, 2); and, in the step of performing the linear transformation, a feature quantity is extracted from the image by transforming the feature vector x_1 corresponding to the real and imaginary parts of the Fourier components and the feature vector x_2 corresponding to the power spectrum of the Fourier components, the dimensionality being reduced by the linear transformation specified by the discriminant matrices W_i of the principal components of the feature vectors x_i and the discriminant matrix W_T.
This pattern feature extraction method is characterized in that the expressing step further includes calculating the power spectrum of the two-dimensional Fourier transform, dividing the image into a plurality of regions and calculating the power spectrum of the two-dimensional Fourier transform in each region, and extracting the vector obtained by combining the power spectra as the feature vector x_2.
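The assembly of the two Fourier-domain feature vectors described above can be sketched as follows (a minimal illustration, not the patented implementation; the 2x2 region grid and the image size are arbitrary assumptions): x_1 collects the real and imaginary parts of the two-dimensional Fourier transform of the whole image, and x_2 concatenates the power spectrum of the whole image with the power spectra of the divided regions.

```python
import numpy as np

def fourier_feature_vectors(img, grid=(2, 2)):
    """x1: real and imaginary parts of the 2-D Fourier transform of the
    whole image; x2: power spectrum of the whole image concatenated with
    the power spectra of the sub-regions (grid layout is illustrative)."""
    F = np.fft.fft2(img)
    x1 = np.concatenate([F.real.ravel(), F.imag.ravel()])
    spectra = [np.abs(F) ** 2]                      # |F(u, v)|^2
    h, w = img.shape
    bh, bw = h // grid[0], w // grid[1]
    for r in range(grid[0]):
        for c in range(grid[1]):
            region = img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            spectra.append(np.abs(np.fft.fft2(region)) ** 2)
    x2 = np.concatenate([s.ravel() for s in spectra])
    return x1, x2

img = np.arange(64, dtype=float).reshape(8, 8)
x1, x2 = fourier_feature_vectors(img)
```

In a full pipeline, x_1 and x_2 would then each be projected by their discriminant matrices and combined as described in the text.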
A pattern feature extraction apparatus according to the present invention is an apparatus for compressing the feature dimensionality of a pattern feature by linear transformation, characterized by comprising: basis matrix storage means for storing a basis matrix specified by the discriminant matrices W_i of the feature vectors and the discriminant matrix W_T, the discriminant matrices W_i being obtained by applying linear discriminant analysis to a plurality of feature vectors x_i expressing the pattern feature, and the discriminant matrix W_T being obtained in advance by applying linear discriminant analysis to the feature vector y formed by combining the vectors y_i obtained by linearly transforming the vectors x_i with the discriminant matrices; and linear transformation means for compressing the feature dimensionality by transforming the feature vector of the pattern using the basis matrix stored in said basis matrix storage means.
A computer-readable recording medium according to the present invention records a program for causing a computer to perform pattern feature extraction that compresses the feature dimensionality of a pattern feature by linear transformation, the program being characterized by performing the following functions: expressing the pattern feature by a plurality of feature vectors x_i; obtaining in advance, for each feature vector x_i, a discriminant matrix W_i by linear discriminant analysis, and obtaining in advance a discriminant matrix W_T by applying linear discriminant analysis to the feature vector y obtained by combining the vectors y_i obtained by linearly transforming the vectors x_i; and compressing the feature dimensionality by transforming the feature vector of the pattern by the linear transformation specified by the discriminant matrices W_i and the discriminant matrix W_T.
An image feature extraction method according to the present invention is characterized by comprising: calculating the Fourier spectrum of an input normalized image using a predetermined mathematical expression to obtain a Fourier spectrum vector; extracting multi-block Fourier intensity vectors from the Fourier intensities of partial images of the normalized image; projecting the Fourier spectrum vector and the multi-block intensity vectors onto feature vectors using a first basis matrix to obtain normalized vectors of the Fourier spectrum vector and the multi-block intensity vectors; combining the normalized vectors to form a combined Fourier vector, and obtaining a projection vector of coupled values using a second basis matrix; and extracting a Fourier feature by quantizing the projection vector.
Description of drawings
Fig. 1 is a block diagram showing the structure of a pattern feature extraction apparatus according to an embodiment of the present invention;
Fig. 2 is a view for explaining the prior art;
Fig. 3 is a view for explaining pattern feature distributions;
Fig. 4 is a block diagram showing the structure of a pattern feature extraction apparatus according to the second embodiment of the present invention;
Figs. 5 and 6 are views for explaining the embodiments of the present invention;
Fig. 7 is a block diagram showing the structure of a face image matching system according to the third embodiment of the present invention;
Figs. 8 to 14 are views for explaining the embodiments of the present invention;
Fig. 15 shows an example of face description according to the fifth embodiment of the present invention;
Fig. 16 shows an example of a rule used when binary representation is employed in the fifth embodiment of the present invention;
Fig. 17 is a view for explaining how the Fourier feature (FourierFeature) is extracted in the fifth embodiment of the present invention;
Fig. 18 shows an example of the Fourier spectrum scanning method in the fifth embodiment of the present invention;
Fig. 19 is a table showing an example of the Fourier spectrum scanning rules in the fifth embodiment of the present invention;
Fig. 20 is a table showing an example of the scanning regions of the Fourier space used for the CentralFourierFeature element in the fifth embodiment of the present invention;
Fig. 21 shows an example of a block diagram of the fifth embodiment of the present invention.
Embodiment
(first embodiment)
Embodiments of the present invention will now be described in detail with reference to the accompanying drawings. Fig. 1 is a block diagram showing a pattern feature extraction apparatus according to the present invention.
This pattern feature extraction apparatus is described in detail below.
As shown in Fig. 1, the pattern feature extraction apparatus according to the present invention comprises a first linear transformation unit 11 for linearly transforming an input feature vector x_1; a second linear transformation unit 12 for linearly transforming an input feature vector x_2; and a third linear transformation unit 13 for receiving the feature vectors transformed and dimension-reduced by the linear transformation units 11 and 12 and linearly transforming them. Each linear transformation unit performs a basis transformation based on discriminant analysis, using a discriminant matrix obtained in advance by training and stored in the discriminant matrix storage units 14, 15, and 16.
The input feature vectors x_1 and x_2 are feature quantities extracted for applications such as character recognition or face verification, and include, for example, directional features computed from the gradient features of an image and density features, i.e. the pixel values of the image. Each vector comprises a plurality of elements. In this case, for example, N_1 directional features are input as the feature vector x_1, and N_2 density values are input as the feature vector x_2.
The discriminant matrix storage units 14 and 15 store the discriminant matrices W_1 and W_2 obtained by applying linear discriminant analysis to the feature vectors x_1 and x_2.
As described above, the discriminant matrices can be obtained by computing the within-class covariance matrix Σ_W (equation (2)) and the between-class covariance matrix Σ_B (equation (3)) with reference to the feature vectors in training samples sorted according to their classes. The prior probability P(ω_i) of each class ω_i can be written as P(ω_i) = n_i / n, corresponding one-to-one to the sample counts n_i.
With reference to these covariance matrices, a discriminant matrix is obtained by selecting the eigenvectors w_i corresponding to the large eigenvalues of the eigenvalue problem expressed in equation (6).
When bases of M_1 and M_2 dimensions, smaller than the input feature dimensions N_1 and N_2, are selected for the feature vectors x_1 and x_2, feature vectors y_1 and y_2 of M_1 and M_2 dimensions are obtained by projection onto the discriminant bases:

y_1 = W_1^T x_1
y_2 = W_2^T x_2    (7)

In this case, the sizes of the matrices W_1 and W_2 are N_1 × M_1 and N_2 × M_2, respectively, so that the transposes used in equation (7) are M_1 × N_1 and M_2 × N_2.
By significantly reducing the dimensions M_1 and M_2 of the feature space to be projected onto, the number of feature dimensions can be reduced effectively. This reduces the amount of data and improves the processing speed. However, if the number of feature dimensions is reduced too far, the discrimination performance deteriorates, because the feature quantities effective for discrimination are lost as the number of feature dimensions decreases.
For this reason, the dimensions M_1 and M_2 of the feature vectors are affected by the number of training samples, and are preferably determined by experiment.
The third linear transformation unit 13 projects the vector y, formed from the vectors y_1 and y_2 calculated by the first and second linear transformation units, into a discriminant space as an input feature vector. As in the calculation of the first and second discriminant matrices, the discriminant matrix W_3 stored in the discriminant matrix storage unit 16 is obtained from training samples. The elements of the input feature vector y are arranged as shown in equation (8):

y = [y_1; y_2]    (8)

where [a; b] denotes the vertical concatenation of two vectors. As in equation (7), the (M_1 + M_2)-dimensional feature vector y is projected according to equation (9) using the basis matrix W_3 (of size (M_1 + M_2) × L), and the L-dimensional feature vector z to be output is obtained:

z = W_3^T y    (9)
In this way, each feature vector is decomposed, and linear discriminant analysis is applied to training samples of feature vectors of smaller dimensionality; this suppresses the estimation errors that commonly occur with high-dimensional feature vectors, and yields features effective for discrimination.
In the case described above, the three linear transformation units provided operate simultaneously and stepwise. However, since a linear transformation unit is essentially realized by a multiply-accumulate unit, a single linear transformation unit can be shared by switching the discriminant matrix to be read according to the input feature vector and the linear transformation to be performed.
In this way, using a single linear transformation unit reduces the size of the required computing unit.
As can be seen from equations (7), (8), and (9), the calculation of the output feature vector z can be expressed as:

z = W_3^T [y_1; y_2]
  = W_3^T [W_1^T x_1; W_2^T x_2]
  = W_3^T [W_1^T, 0; 0, W_2^T] [x_1; x_2]
  = W^T [x_1; x_2]    (10)

where [A, 0; 0, B] denotes the block-diagonal matrix with blocks A and B.
That is, the linear transformations using the individual discriminant matrices can be merged into a linear transformation using a single matrix. In the stepwise computation, the number of multiply-accumulate operations is L × (M_1 + M_2) + M_1 N_1 + M_2 N_2. When the matrices are merged into a single matrix, the number of multiply-accumulate operations is L × (N_1 + N_2). If, for example, N_1 = N_2 = 500, M_1 = M_2 = 200, and L = 100, the stepwise computation requires 240,000 multiply-accumulate operations, while the merged computation requires 100,000. Since the amount of computation in the latter, batch computation is smaller than in the former case, faster computation can be achieved. As is clear from the mathematical expressions, when the final dimension L is small, the batch computation method reduces the amount of computation and is therefore effective.
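The operation counts quoted above can be checked directly (a trivial verification of the arithmetic in the text, using the dimensions it gives):

```python
# multiply-accumulate counts for the stepwise vs. merged transforms,
# using the dimensions quoted in the text
N1 = N2 = 500   # input feature dimensions
M1 = M2 = 200   # intermediate discriminant dimensions
L = 100         # final output dimension

stepwise = L * (M1 + M2) + M1 * N1 + M2 * N2  # equations (7)-(9) in turn
merged = L * (N1 + N2)                        # single matrix, equation (10)
```

The stepwise route costs 240,000 operations and the merged route 100,000, confirming that merging pays off when L is small.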
(second embodiment)
In the case described above, different kinds of features, such as directional features and density features, are combined, and discriminant analysis is applied again to a feature vector each part of which has already been subjected to discriminant analysis. However, the elements corresponding to a single kind of feature may also be divided into a plurality of vectors, discriminant analysis may be applied to each element set as an input feature, and the resulting projection vectors may be subjected to further discriminant analysis.
The second embodiment describes a face image feature extraction apparatus.
As shown in Fig. 4, the face image feature extraction apparatus according to the second embodiment comprises an image feature decomposition unit 41 for decomposing the density features of an input face image; a linear transformation unit 42 for projecting each feature vector using the discriminant matrix corresponding to that feature vector; and a discriminant matrix group storage unit 43 for storing the discriminant matrices.
Techniques for extracting features from a facial image include the method of positioning the facial image at reference points such as the eye positions and taking the density values as a feature vector, as disclosed in the article by W. Zhao et al. cited above.
In the second invention as well, the pixel density values of the image are used as the input features; that is, raw features are processed. However, the image feature has a large size, for example 42 × 54 pixels = 2352 dimensions, with the centers of the left and right eyes specified at coordinates (14, 23) and (29, 23). For such a large feature dimensionality, it is difficult to perform high-precision feature extraction by applying linear discriminant analysis directly with a limited number of training samples. Therefore, by decomposing the image feature elements, performing discriminant analysis on the decomposed features, and obtaining discriminant matrices, the feature deterioration caused when principal component analysis (PCA) or the like is performed can be suppressed.
One method of decomposing the image feature is to split the image. For example, as shown in Fig. 5, the image is divided into nine parts, each of size 14 × 18 pixels (= 252 dimensions); the local images at the different positions are set as feature vectors x_i (i = 1, 2, 3, ..., 9), and discriminant analysis is performed on each partial image using training samples, thereby obtaining in advance the discriminant matrix W_i corresponding to each feature vector.
Note that when the image is divided, the regions may be allowed to overlap; this makes it possible for the feature quantities in the feature vectors to capture correlations between pixels in the boundary regions. Each region may therefore be sampled with overlap.
Since the feature dimensionality drops significantly to 252 compared with the original image, the basis matrices based on discriminant analysis can be computed with high precision by sampling a few images each from several hundred people, i.e., several thousand facial images in total. If the feature dimensionality were as large as the initial features (2352 dimensions), several thousand facial images per person would have to be sampled to obtain a comparable effect from discriminant-analysis-based features. In practice, however, it is difficult to collect such a large amount of image data, so that approach is not feasible.
Suppose that the first-stage discriminant features compress the features in each local region to 20 dimensions. In this case, the resulting output feature vector becomes a 9 regions × 20 dimensions = 180-dimensional feature vector. By performing a further discriminant analysis on this feature vector, the dimensionality can be effectively reduced to about 50. This second-stage discriminant matrix is also stored in the discriminant matrix group storage device 43, and the linear transformation device 42 performs discriminant analysis again upon receiving the 180-dimensional vector of first-stage discriminant features. Note that the first-stage and second-stage discriminant matrices can be computed in advance and expressed as in equation (10). However, when 252 dimensions × 9 regions are compressed to 20 dimensions × 9 regions and the 180 dimensions are then transformed to 50 dimensions, the two-stage computation reduces the required storage space and cuts the amount of computation to 1/2 or less, and is therefore effective.
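The "1/2 or less" claim above can be checked with a quick multiply-accumulate count (this arithmetic sketch is not from the patent text; it only tallies matrix-vector operation counts for the stated dimensions):

```python
# Nine 252-dimensional local features are each compressed to 20 dimensions,
# and the concatenated 180 dimensions are then reduced to 50.
regions, local_dim, stage1_dim, final_dim = 9, 252, 20, 50

# Two-stage: nine 20x252 projections, then one 50x180 projection.
two_stage = regions * stage1_dim * local_dim + final_dim * (regions * stage1_dim)

# One-stage: a single merged 50 x 2268 matrix applied to the full feature.
one_stage = final_dim * (regions * local_dim)

print(two_stage, one_stage)       # 54360 113400
print(two_stage / one_stage)      # ~0.479, i.e. "1/2 or less"
```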
By applying discriminant analysis locally and in stages, facial features with high recognition performance can be extracted. Suppose, for example, that in character recognition the Chinese character "大" ("big") is mistaken for the Chinese character "犬" ("dog"). If principal component analysis is applied to each complete character image to extract the components with large eigenvalues, the features useful for distinguishing "大" from "犬" will be lost. (For this reason, similar-character recognition is sometimes performed using specific higher-order features rather than the large-eigenvalue portion obtained by principal component analysis.) The effectiveness of dividing the image into local regions and extracting discriminant features is analogous to this similar-character recognition problem. It can be considered that, compared with applying discriminant analysis to the principal components of the whole, spatially restricting the features that are easy to discriminate ensures higher precision per unit dimension.
In addition, the image feature decomposition device 41 may sample from the entire image and divide the sampled images rather than the entire image, forming a feature vector for each local region. For example, when the feature is divided by 9 into 252-dimensional vectors, sampling is performed over 3 × 3 regions, as shown in Fig. 6. That is, the sampled images become reduced images at slightly different positions. These reduced images are transformed into nine feature vectors by raster scanning. These feature vectors are used as base vectors to compute discriminant components, and the discriminant components can then be combined for another discriminant analysis.
(Third Embodiment)
Another embodiment of the present invention will now be described in detail with reference to the accompanying drawings. Fig. 7 is a block diagram showing a facial image matching system using the face metadata generation apparatus according to the present invention.

The facial image matching system is described in detail below.

As shown in Fig. 7, the facial image matching system according to the present invention comprises a facial image input unit 71 for inputting facial images; a face metadata generation unit 72 for generating face metadata; a face metadata storage unit 73 for storing the extracted face metadata; a face similarity calculation unit 74 for calculating face similarity from the face metadata; a facial image database 75 for storing facial images; a control unit 76 for controlling image input, metadata generation, metadata storage, and face similarity calculation in response to image storage and image retrieval requests; and a display unit 77 for displaying facial images and other information.
The face metadata generation unit 72 comprises a region clipping device 721 for clipping the face region from the input facial image, and a facial image feature extraction device 722 for extracting the facial features of the clipped region. The face metadata generation unit 72 generates metadata about the facial image by extracting face feature vectors.
When a facial image is to be stored, a face photograph or the like is input through the facial image input unit 71, such as a scanner or a video camera, and the size and position of the face are adjusted. Alternatively, a face may be input directly from a video camera or the like. In this case, it is preferable to detect the face position in the input image using face detection techniques such as those disclosed in the above-mentioned document by Moghaddam, and to normalize the size and so on of the facial image automatically.

The input facial image may be stored in the facial image database 75 if necessary. When a facial image is stored, the face metadata generation unit 72 generates face metadata and stores it in the face metadata storage unit 73.

At retrieval time, the facial image input unit 71 inputs the face data and, as in the storage case, the face metadata generation unit 72 generates face metadata. The generated face metadata is either stored in the face metadata storage unit 73 or sent directly to the face similarity calculation unit 74.
In a search operation, when checking whether data identical to the input facial image exists in the database (face identification), the similarity between the input facial image and each item stored in the face metadata storage unit 73 is calculated. The control unit 76 selects the facial image with the highest similarity from the facial image database 75 and displays it on the display unit 77 or the like. An operator then checks whether the face in the retrieved image matches the face in the stored image.

When checking in advance whether a facial image specified by an ID number or the like matches the retrieved facial image (face verification), the face similarity calculation unit 74 calculates whether the facial image specified by the ID matches the retrieved image. If the calculated similarity is lower than a predetermined similarity, the two images are determined not to match, and the result is displayed on the display unit 77. Suppose the system is used for access control. In this case, access control can be performed by having the control unit 76 send an open/close control signal to an automatic door, controlling the door instead of displaying the facial image.
The facial image matching system operates in the manner described above. This operation can also be implemented on a computer system. For example, facial image matching can be realized by storing in memory a metadata generation program that performs the metadata generation described in detail below and a similarity calculation program, and executing these programs on a program-controlled processor.

In addition, these programs may be recorded on a computer-readable recording medium.

The operation of the facial image matching system, and more specifically the operation of the face metadata generation unit 72 and the face similarity calculation unit 74, is described in detail below.
(1) Generation of face metadata
Using an image I(x, y) normalized in position and size, the face metadata generation unit 72 extracts the face feature quantities. When normalizing position and size, it is preferable to set the eye positions at (16, 24) and (31, 24) and the size at 46 × 56 pixels. In the following, the image is assumed to have been normalized to this size.
Next, the region clipping device 721 clips the facial image into a plurality of predetermined local regions. For the image described above, for example, one region is the whole normalized facial image f(x, y), and another region is the central region g(x, y) of 32 × 32 pixels centered on the face. This region can be clipped so that the positions of the two eyes become (9, 12) and (24, 12).
The reason for clipping the central region of the face in this way is that stable features can be extracted from a range unaffected by changes in hairstyle and the like (for example, when face verification is used in a home robot, verification remains possible even if the hairstyle changes before and after a bath). If the hairstyle does not change (for example, identification within a video-clip scene), recognition performance can be expected to improve by performing authentication with an image that includes the hairstyle; therefore, both a large facial image including the hairstyle and a small facial image of the central part of the face are clipped.
Next, the facial image feature extraction device 722 applies a two-dimensional discrete Fourier transform to the two clipped regions f(x, y) and g(x, y) to extract the facial image features.
Fig. 8 shows the structure of the facial image feature extraction device 722 in more detail. The facial image feature extraction device comprises a Fourier transform device 81 for applying the discrete Fourier transform to the normalized clipped image; a Fourier power calculation device 82 for calculating the power spectrum of the Fourier frequency components produced by the Fourier transform; a linear transformation device 83 for treating as a one-dimensional feature vector the vector obtained by raster-scanning the real and imaginary parts of the Fourier frequency components calculated by the Fourier transform device 81, and for extracting discriminant features from the principal components of that feature vector; a basis matrix storage device 84 for storing the basis matrix used for the transformation; a linear transformation device 85 for extracting, in the same manner, discriminant features of the principal components from the power spectrum; and a basis matrix storage device 86 for storing the basis matrix used for that transformation. The facial image feature extraction device 722 further comprises a normalization device 87 for normalizing each of the discriminant features of the real and imaginary parts of the Fourier features and the discriminant feature of the power spectrum to a vector of size 1; a linear transformation device 88 for calculating the discriminant feature of the vector obtained by combining the two normalized feature vectors; and a discriminant matrix storage device 89 for storing the discriminant matrix for that discriminant feature.
After the Fourier frequency features are extracted with this structure, principal-component discriminant features are calculated for the feature vector whose elements are the real and imaginary parts of the Fourier frequency components and for the feature vector whose elements are the power spectrum; the discriminant feature is then calculated again for the vector obtained by combining these vectors, thereby computing the face feature quantity.

Each operation is described in more detail below.
The Fourier transform device 81 applies the two-dimensional Fourier transform to the input image f(x, y) (x = 0, 1, 2, ..., M−1; y = 0, 1, 2, ..., N−1) and calculates the Fourier feature F(u, v) according to equation (11). This method is widely known and is described, for example, in Rosenfeld et al., "Digital Image Processing" (Kindai Kagaku Sha, pp. 20–26), so its description is omitted here.
F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y) \exp\left( -2\pi i \left( \frac{xu}{M} + \frac{yv}{N} \right) \right)    (11)
According to equation (12), the Fourier power calculation device calculates the Fourier power spectrum |F(u, v)| by obtaining the magnitude of the Fourier feature F(u, v).
|F(u, v)| = \sqrt{ \mathrm{Re}(F(u, v))^2 + \mathrm{Im}(F(u, v))^2 }    (12)
The two-dimensional Fourier spectra F(u, v) and |F(u, v)| obtained in this way result from transforming a purely real two-dimensional signal, so the Fourier frequency components obtained are symmetric. Therefore, although these spectra F(u, v) and |F(u, v)| have M × N components (u = 0, 1, ..., M−1; v = 0, 1, ..., N−1), half of the components, i.e., the M × N/2 components (u = 0, 1, ..., M−1; v = 0, 1, ..., N/2−1), are essentially equivalent to the remaining half. Subsequent processing can therefore be carried out using half of the components as the feature vector. Obviously, the computation can be simplified by omitting the calculation of those components that are not used as elements of the feature vector in the Fourier transform device 81 and the Fourier power calculation device 82.
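The symmetry referred to above is the conjugate symmetry of the DFT of a real signal; a small numeric sketch (array sizes are assumed for illustration; `numpy.fft.fft2` uses the same sign convention as equation (11)):

```python
import numpy as np

# For a real image, F(u, v) and F((M - u) % M, (N - v) % N) are complex
# conjugates, so roughly half the components carry no extra information.
rng = np.random.default_rng(1)
M, N = 8, 8
f = rng.random((M, N))          # real-valued stand-in "image"
F = np.fft.fft2(f)              # forward DFT, as in equation (11)

for u in range(M):
    for v in range(N):
        assert np.isclose(F[u, v], np.conj(F[(M - u) % M, (N - v) % N]))

# Consequently the power spectrum |F(u, v)| (equation (12)) is symmetric too,
# and subsequent processing can keep only half of the components.
P = np.abs(F)
assert np.allclose(P, P[(-np.arange(M)) % M][:, (-np.arange(N)) % N])
```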
Next, the linear transformation device 83 processes the feature quantities extracted as frequency features as vectors. The subspace to be used is defined by basis vectors obtained in advance by preparing a training set of facial images, computing the principal components of the frequency feature vectors of the corresponding clipped region, and performing discriminant analysis on them. Since methods for obtaining such basis vectors are widely described in various documents, including the work of W. Zhao, their description is omitted here. The reason discriminant analysis is not applied directly is that the dimensionality of the feature vector obtained by the Fourier transform is too large for direct discriminant analysis. Although the problems inherent in discriminant analysis of principal components have not been fully resolved, this technique is one option for extracting the first-stage feature vector. Alternatively, a basis matrix obtained by a method that iterates discriminant analysis may be used.
That is, by performing discriminant analysis on the principal components of the one-dimensional feature vector x_1 obtained by raster-scanning the real and imaginary parts of the frequency features, the principal-component discriminant matrix Φ_1 stored in the basis matrix storage device 84 can be obtained in advance from training samples. In this case, the Fourier features need not always be handled as complex numbers; they can be handled as real numbers by treating the imaginary part as another feature element.

Letting Ψ_1 be the basis matrix for the principal components and W_1 the discriminant matrix obtained by performing discriminant analysis on the principal-component vectors, the principal-component discriminant matrix Φ_1 can be expressed as:
\Phi_1^T = W_1^T \Psi_1^T    (13)
It suffices to reduce the dimensionality by principal component analysis to about 1/10 of the original Fourier features (about 200 dimensions). The discriminant matrix then brings the dimensionality down to about 70. This basis matrix can be computed from training samples in advance and stored as information in the basis matrix storage device 84.
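A minimal numeric check of the composition in equation (13) (the matrices are random stand-ins; the dimensions 2352 → 200 → 70 follow the text): projecting through the PCA basis and then the discriminant matrix equals projecting once through the combined matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, l = 2352, 200, 70          # input dim, PCA dim, discriminant dim

Psi1 = rng.standard_normal((n, p))   # columns: principal-component basis (stand-in)
W1 = rng.standard_normal((p, l))     # columns: discriminant basis (stand-in)
x = rng.standard_normal(n)

Phi1 = Psi1 @ W1                     # so that Phi1.T == W1.T @ Psi1.T
assert np.allclose(Phi1.T @ x, W1.T @ (Psi1.T @ x))
```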
Similarly for the Fourier spectrum |F(u, v)|, the spectrum is expressed as a one-dimensional feature vector by raster scanning, and the basis matrix \Phi_2^T = W_2^T \Psi_2^T can be obtained in advance from training samples by performing discriminant analysis on the principal components of the feature vector.
By calculating principal-component discriminant features for each component of the Fourier features in this way, the principal-component discriminant feature y_1 of the feature vector x_1 of the real and imaginary parts of the Fourier components, and the principal-component discriminant feature y_2 of the feature vector x_2 of the power spectrum, are obtained.
The normalization device 87 normalizes each obtained feature vector to a unit vector of size 1. Since the vector length changes with the position of the origin from which the vector is measured, the reference point must be determined in advance. It suffices to set the reference point using the mean vector m_i obtained from the training samples of the projected feature vectors y_i. By setting the mean vector as the reference point, the feature vectors are distributed around the reference point. In the case of a Gaussian distribution, in particular, the feature vectors are distributed isotropically, which makes it easy to delimit the distribution regions when the feature vectors are eventually quantized.
That is, the vector y_i^0 obtained by normalizing the feature vector y_i to a unit vector using the mean vector m_i can be expressed as:
y_i^0 = \frac{y_i - m_i}{|y_i - m_i|}    (14)
In this way, the normalization device provided normalizes the feature vector y_1 associated with the real and imaginary parts of the Fourier features and the feature vector y_2 associated with the power spectrum to unit vectors. This makes it possible to normalize the magnitudes of the two different kinds of feature quantities and stabilizes the distribution characteristics of the feature vectors.

In addition, since the magnitudes of these vectors are normalized in the feature space used for discrimination after dimensionality reduction, robustness against noise can be achieved compared with normalization carried out in a feature space still containing much noise that has not been removed. This normalization can remove the influence of variable factors, such as the component proportional to overall illumination intensity, which are difficult to remove by a simple linear transformation.
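A minimal sketch of the normalization of equation (14), with a hypothetical mean vector `m` standing in for the training-sample mean:

```python
import numpy as np

def normalize_to_unit(y, m):
    """Return (y - m) / |y - m|, as in equation (14)."""
    d = y - m
    return d / np.linalg.norm(d)

y = np.array([3.0, 4.0, 12.0])
m = np.array([1.0, 1.0, 1.0])    # hypothetical training-sample mean vector
y0 = normalize_to_unit(y, m)
print(np.linalg.norm(y0))        # 1.0 (up to floating-point rounding)
```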
The feature vectors y_1^0 and y_2^0 normalized in this way are combined into one feature vector y in the same manner as equation (8), and the combined feature vector y is projected into the discriminant space using the discriminant matrix W_3 obtained by linear discriminant analysis, thereby obtaining the output feature vector z. The discriminant matrix W_3 used for this purpose is stored in the discriminant matrix storage device 89, and the linear transformation device 88 performs the projection calculation, producing, for example, a 24-dimensional feature vector z.
When the output feature vector z is quantized at 5 bits per element, the magnitude of each element is normalized in advance, for example according to the variance of each element.

That is, the standard deviation σ_i of each element z_i of the feature vector z is obtained in advance from training samples, and normalization is performed so as to satisfy z_i^0 = 16 z_i / (3 σ_i). Given the 5-bit size, it then suffices for the quantized values to fall in the range −16 to 15.
In this case, the normalization calculation simply multiplies each element by the reciprocal of its standard deviation. Considering a matrix Σ having these scale factors as its diagonal elements, the normalized vector z^0 becomes z^0 = Σz. That is, since this is a simple linear transformation, Σ can be folded into the discriminant matrix W_3 in advance, as shown in equation (15).
W_3^{0\,T} = \Sigma W_3^T    (15)
Performing normalization in this way provides the range correction necessary for quantization. In addition, since the normalization uses standard deviations, at collation time a calculation based on the Mahalanobis distance between patterns can be carried out simply by computing the plain L2 norm, thereby reducing the amount of computation at collation.
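The Mahalanobis-via-L2 point above can be sketched numerically (hypothetical data; the diagonal-covariance case, which is what per-element standard deviations give):

```python
import numpy as np

# If each element is pre-divided by its standard deviation, the plain squared
# L2 distance between two normalized patterns equals the diagonal
# Mahalanobis-style weighted distance between the raw patterns.
rng = np.random.default_rng(3)
sigma = np.array([2.0, 0.5, 1.5])          # per-element standard deviations
z1, z2 = rng.standard_normal(3), rng.standard_normal(3)

mahalanobis_sq = np.sum(((z1 - z2) / sigma) ** 2)   # weighted distance on raw z
l2_sq = np.sum((z1 / sigma - z2 / sigma) ** 2)      # normalize once, plain L2
assert np.isclose(mahalanobis_sq, l2_sq)
```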
As described above, the facial image feature extraction device 722 extracts the feature vector z_f from the normalized image f(x, y); in the same manner, it extracts a feature vector z_g from the image g obtained by clipping the central part of the face. The face metadata generation unit uses the two extracted feature vectors z_f and z_g as the face feature quantity z.

Note that a computer can execute the above face metadata generation sequence by means of a computer program. In addition, this program can be recorded on a computer-readable recording medium.
(2) Face similarity calculation

The operation of the face similarity calculation unit 74 is described below.

Using the K-dimensional feature vectors z_1 and z_2 obtained from two items of face metadata, the face similarity calculation unit 74 calculates the similarity d(z_1, z_2).
For example, the similarity is calculated by the squared distance of equation (16).
d(z_1, z_2) = \sum_{i=1}^{K} \alpha_i |z_{1,i} - z_{2,i}|^2    (16)
Here α_i is a weight factor. For example, if the reciprocal of the standard deviation of each feature dimension z_i is adopted, a calculation based on the Mahalanobis distance can be performed. If the feature vectors have been normalized in advance according to equation (15) and the like, then since the basis matrix has already been normalized with the variance values, this amounts to the Mahalanobis distance. Alternatively, the similarity may be obtained by calculating the cosine of the feature vectors to be compared, as shown in equation (17).
d(z_1, z_2) = \frac{z_1 \cdot z_2}{|z_1| |z_2|}    (17)
Note that when the distance is used, a larger value indicates lower similarity (the faces are less alike), whereas when the cosine is used, a larger value indicates higher similarity (the faces are more alike).
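The two measures can be sketched as follows (toy vectors; the weights α_i here are simply ones, where the text suggests reciprocal standard deviations):

```python
import numpy as np

def weighted_sq_distance(z1, z2, alpha):
    """Weighted squared distance of equation (16); lower = more similar."""
    return np.sum(alpha * np.abs(z1 - z2) ** 2)

def cosine_similarity(z1, z2):
    """Cosine of equation (17); higher = more similar."""
    return np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))

z1 = np.array([1.0, 2.0, 2.0])
z2 = np.array([2.0, 2.0, 1.0])
alpha = np.ones(3)               # placeholder weights

print(weighted_sq_distance(z1, z2, alpha))  # 2.0
print(cosine_similarity(z1, z2))            # 8/9 ≈ 0.889
```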
The description above covers the case where one facial image is stored and retrieval is performed with one facial image. However, when a plurality of images of a given face have been stored and retrieval is performed using one facial image, the similarity can be calculated against each of the plurality of face metadata items on the storage side.

Similarly, when a plurality of images of a face have been stored and a plurality of images are used for retrieval, the similarity of the face data can be calculated by taking the mean or the minimum of the similarities over each combination. This shows that, by viewing an image sequence as a plurality of images, the matching system of the present invention can also be applied to face recognition in image sequences.
Embodiments of the present invention have been described above with reference to the accompanying drawings. Obviously, however, the present invention can also be implemented as a computer-executable program.

In addition, the program can be recorded on a computer-readable recording medium.
(Fourth Embodiment)
Another embodiment of the present invention is described in detail with reference to the drawings. This embodiment improves the face metadata generation unit 72 of the third invention. In the third invention, discriminant features are calculated for the principal components of the feature vector whose elements are the real and imaginary parts of the Fourier frequency components obtained by Fourier-transforming the input facial image, and of the feature vector of the power spectrum; the discriminant feature of the vector combining them is then calculated again to obtain the face feature quantity. In this case, because the Fourier power spectrum reflects the feature quantities of the entire input image, components from input pixels containing much noise (for example, pixels near the face whose relative positions change) are reflected in the power spectrum just like the other pixels. As a result, even if effective feature quantities are selected by discriminant analysis, sufficient performance cannot be obtained. To address this, the input image is divided into several regions and the Fourier transform is applied to each local region. Discriminant analysis is then performed using the power spectrum of each local region as a feature quantity. This allows discriminant analysis to reduce the influence of feature quantities from regions that locally exhibit poor discrimination performance (large within-class variance).
Fig. 9 explains this embodiment and shows the flow of the feature extraction process. In this embodiment, for example, a 32 × 32 pixel region is divided into 4 regions of 16 × 16 pixels, 16 regions of 8 × 8 pixels, 64 regions of 4 × 4 pixels, 256 regions of 2 × 2 pixels, and 1024 regions of 1 × 1 pixel (the last being essentially identical to the input image, so the input image can be used without division) (S1001). The Fourier transform is applied to each divided region (S1002), and the power spectrum is calculated (S1003). This calculation is performed for all divided regions (S1004). The region size is then changed (S1005), and the process is repeated for all region sizes (S1006). Fig. 10 summarizes the processing flow. In this way, a feature quantity of 1024 × 5 dimensions = 5120 dimensions is extracted from all the power spectra of the regions obtained.
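The loop structure of steps S1001–S1006 can be sketched as follows (the function name and array layout are assumptions for illustration, not from the patent):

```python
import numpy as np

def multiblock_power_features(img):
    """Concatenate the power spectra of all blocks at all five scales."""
    features = []
    for block in (16, 8, 4, 2, 1):                   # S1005/S1006: each size
        for i in range(0, img.shape[0], block):      # S1001: divide the image
            for j in range(0, img.shape[1], block):
                patch = img[i:i + block, j:j + block]
                F = np.fft.fft2(patch)               # S1002: Fourier transform
                features.append(np.abs(F).ravel())   # S1003: power spectrum
    return np.concatenate(features)                  # S1004: all regions

img = np.random.default_rng(4).random((32, 32))
feat = multiblock_power_features(img)
print(feat.shape)  # (5120,) -- 1024 values per scale x 5 scales
```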
Since the dimensionality is usually too large when the amount of training data is small, principal component analysis is performed beforehand to obtain in advance a principal-component basis that can reduce the dimensionality. A suitable dimensionality is about 300, for example. Discriminant analysis is further applied to the feature vector of this dimensionality to obtain a basis corresponding to feature axes that reduce the dimensionality and exhibit good discrimination performance. The basis corresponding to the combined principal component analysis and discriminant analysis is calculated in advance (this basis is referred to as the PCLDA projection basis Ψ).

By projecting the 5120-dimensional features through a linear calculation using this PCLDA projection basis Ψ, the discriminant feature z can be obtained. By further quantizing this feature and so on, the face feature quantity can be obtained.
It should be noted that the dimensionality of the 5120-dimensional feature quantity can be reduced by exploiting the symmetry of the Fourier power spectrum and by discarding the high-frequency components. This enables high-speed training, reduces the amount of data required, and enables high-speed feature extraction. It is therefore preferable to reduce the dimensionality as needed.
By dividing the region into parts and multiplexing the Fourier spectra in this way, multiple representations are obtained in succession, ranging from feature quantities with translation invariance to local feature quantities (in the case of the 1024 small blocks, feature quantities equivalent to the image features themselves). By using discriminant analysis to select the feature quantities effective for recognition from these multiple, redundant feature representations, a compact feature quantity providing good recognition performance is obtained.
Since the Fourier power spectrum is obtained by a nonlinear computation on the image, effective feature quantities can be calculated that could not be obtained by applying discriminant analysis, which is based on linear computation, to the image alone.
Although the application of linear discriminant analysis to the principal components has been described above, the second-stage feature extraction may instead be performed using kernel discriminant analysis (discriminant analysis using kernel techniques, known as Kernel Fisher Discriminant Analysis (KFDA), Kernel Discriminant Analysis (KDA), or Generalized Discriminant Analysis (GDA)).
For details of kernel discriminant analysis, see the paper by Q. Liu et al. (non-patent reference 3: "Kernel-based Optimized Feature Vectors Selection and Discriminant Analysis for Face Recognition", Proceedings of the IAPR International Conference on Pattern Recognition (ICPR), Vol. II, pp. 362–365, 2002) or the paper by G. Baudat (non-patent reference 4: "Generalized Discriminant Analysis Using a Kernel Approach", Neural Computation, Vol. 12, pp. 2385–2404, 2000).
By extracting features with kernel discriminant analysis, the effect of nonlinear feature extraction can be enhanced, facilitating the extraction of effective features.
In this case, however, because large feature vectors of 5120 dimensions must be handled, a large amount of memory and a large amount of training data are required even for principal component analysis. To avoid this problem, as shown in Fig. 11, principal component analysis/discriminant analysis is performed on each block separately, after which a second-stage discriminant analysis (linear discriminant analysis: LDA) is performed. This reduces the amount of computation.
In this case, come principal component analysis (PCA) and discriminant analysis are carried out in each zone by using 1024 dimensional feature amounts (if dimension drops to half, consider symmetry, then use 512 dimensional feature amounts), with prior acquisition basis matrix Ψ 1(i=0,1,2 ..., 5).Then by using its mean value to come each eigenvector is carried out normalization, and carry out subordinate phase LDA projection.
By by this way each piece being handled, can reduce training desired data number and computer resource.This can reduce the training needed time of optimization.
Note,, can realize supercomputing by omitting the basis matrix that vector normalized and calculated in advance are used for the basis matrix of PCLDA projection and are used for the LDA projection.
Figure 12 illustrates another embodiment and shows the flow of feature extraction processing. In this embodiment, considering the translation invariance of the Fourier power spectrum of local regions and the reliability of local regions, region division is performed in multiple stages (two stages in Figure 12), so that power spectra are extracted at multiple resolutions and used as feature amounts for discriminant analysis. Feature extraction is then performed using the optimal discriminant space obtained by discriminant analysis.
Suppose the input image f(x, y) has 32 × 32 pixels. In this case, as shown in Figure 10, the following are used as feature vectors: the power spectrum |F(u, v)| of the entire image; the power spectra |F_1^1(u, v)|, |F_2^1(u, v)|, |F_3^1(u, v)|, and |F_4^1(u, v)| of the four 16 × 16 pixel regions obtained by dividing the entire image into 4 regions; and the power spectra |F_1^2(u, v)|, |F_2^2(u, v)|, ..., |F_16^2(u, v)| of the sixteen 8 × 8 pixel regions obtained by dividing the entire image into 16 regions.
Considering the symmetry of the Fourier power spectrum of a real image, extracting 1/2 of it is sufficient. Optionally, to avoid increasing the size of the feature vector used for discriminant analysis, the high-frequency components need not be sampled when forming the feature vector used for discrimination. For example, if the feature vector is formed by sampling the 1/4 of the spectrum corresponding to the low-frequency components, the required number of training samples can be reduced, or the processing time required for training and recognition can be shortened. If the amount of training data is very small, discriminant analysis is performed after first reducing the feature dimension by principal component analysis.
By performing discriminant analysis on the feature vector x_2^f extracted in this way with a prepared training set, the basis matrix Ψ_2^f is obtained in advance. Figure 9 shows an example of the projection used to extract discriminant features from principal components (principal-component linear discriminant analysis: PCLDA). The feature vector x_2^f is projected using the basis matrix Ψ_2^f, and the mean and magnitude of the projected feature vector are normalized to compute the feature vector y_2^f.
Similarly, the feature vector x_1^f obtained by combining the real and imaginary parts of the Fourier frequencies is projected by a linear operation using the basis matrix Ψ_1^f to obtain a dimension-reduced feature vector, whose mean and magnitude are normalized to compute the feature vector y_1^f. Using the discriminant basis Ψ_3^f, the feature vector obtained by combining these vectors is projected once more to obtain the feature vector z^f. This vector is quantized to, for example, 5 bits to extract the facial feature amount.
Suppose the input is a facial image normalized to a size of 44 × 56 pixels. In this case, the above processing is applied to the central 32 × 32 pixel region to extract the facial feature amount. Facial feature amounts can also be extracted from multiple divided regions of the whole 44 × 56 pixel face, including the whole 44 × 56 pixel region, four 22 × 28 pixel regions, and sixteen 11 × 14 pixel regions.
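The multiresolution division described above can be sketched as follows (helper name and row-major block ordering are assumptions for illustration): a 32 × 32 image yields one full-image power spectrum, four 16 × 16 block spectra, and sixteen 8 × 8 block spectra.

```python
import numpy as np

def block_power_spectra(img, splits):
    """Split img into splits x splits equal blocks and return |FFT|^2 of each."""
    h, w = img.shape
    bh, bw = h // splits, w // splits
    spectra = []
    for t in range(splits):          # block row index
        for s in range(splits):      # block column index
            block = img[t * bh:(t + 1) * bh, s * bw:(s + 1) * bw]
            spectra.append(np.abs(np.fft.fft2(block)) ** 2)
    return spectra

f = np.random.default_rng(1).random((32, 32))
whole = block_power_spectra(f, 1)       # 1 spectrum of 32 x 32
quarters = block_power_spectra(f, 2)    # 4 spectra of 16 x 16
sixteenths = block_power_spectra(f, 4)  # 16 spectra of 8 x 8

assert len(whole) == 1 and whole[0].shape == (32, 32)
assert len(quarters) == 4 and quarters[0].shape == (16, 16)
assert len(sixteenths) == 16 and sixteenths[0].shape == (8, 8)
```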
Figure 13 shows another embodiment, in which PCLDA projection is applied to the combination of the real part, imaginary part, and power spectrum of each local region; alternatively, as shown in Figure 14, PCLDA projection is applied separately to the feature obtained by combining the real and imaginary parts and to the power spectrum, with an LDA projection performed last.
(Fifth Embodiment)
Another embodiment of the present invention will be described in detail with reference to the accompanying drawings.
This embodiment applies the facial feature description method and facial feature descriptor of the present invention. Figure 15 shows a description of facial feature amounts as an example of facial feature description, using the DDL representation syntax (Description Definition Language representation) of ISO/IEC FDIS 15938-3: "Information technology — Multimedia content description interface — Part 3: Visual".
In this case, to describe the facial feature named "AdvancedFaceRecognition", elements named "FourierFeature" and "CentralFourierFeature" are provided. Each component of "FourierFeature" and "CentralFourierFeature" is a 5-bit unsigned integer, and each element can have from 24 to 63 components.
Figure 16 shows the rules in the case of using binary syntax as the data representation. According to these rules, the sizes of the array components FourierFeature and CentralFourierFeature are stored in 6-bit unsigned integer fields named numOfFourierFeature and numOfCentralFourierFeature, and each component of FourierFeature and CentralFourierFeature is stored as a 5-bit unsigned integer.
The description of the facial features applying the present invention is explained in more detail below.
V. numOfFourierFeature
This field specifies the number of components of FourierFeature. The allowed range is from 24 to 63.
VI. numOfCentralFourierFeature
This field specifies the number of components of CentralFourierFeature. The allowed range is from 24 to 63.
VII. FourierFeature
This element represents a facial feature based on cascaded LDA of Fourier characteristics of a normalized facial image. The normalized facial image is obtained by scaling the original image to 56 lines, each line having 46 intensity values. In the normalized facial image the centers of the two eyes should be located on the 24th row, with the left eye and the right eye at the 16th and 31st columns, respectively.
The FourierFeature element is extracted from two feature vectors: one is the Fourier spectrum vector x_1^f, the other is the multi-block Fourier magnitude vector x_2^f. Figure 17 illustrates the extraction process of the FourierFeature. Given a normalized facial image, five steps are needed to extract the element:
(1) extract the Fourier spectrum vector x_1^f;
(2) extract the multi-block Fourier magnitude vector x_2^f;
(3) project the feature vectors using the PCLDA basis matrices Ψ_1^f and Ψ_2^f, and normalize them to unit vectors y_1^f and y_2^f;
(4) project the joint Fourier vector y_3^f, formed from the unit vectors, using the LDA basis matrix Ψ_3^f;
(5) quantize the projected vector z^f.
Step 1) Extraction of the Fourier spectrum vector
Given a normalized facial image f(x, y), its Fourier spectrum F(u, v) is computed by the following equation:
F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) exp(−2πi(xu/M + yv/N)),  (u = 0, ..., M−1; v = 0, ..., N−1)   ...(18)
where M = 46 and N = 56. The Fourier spectrum vector x_1^f is defined as the set of components obtained by scanning the Fourier spectrum. Figure 18 shows the scanning method for the Fourier spectrum. In the Fourier domain, only two rectangular regions, region A and region B, are scanned. The scanning rules are summarized in Figure 19, where S_R(u, v) denotes the top-left coordinate of region R and E_R(u, v) denotes the bottom-right coordinate of region R. The Fourier spectrum vector x_1^f is therefore expressed as:
[Equation (19), defining x_1^f as the vector of Fourier spectrum components collected by scanning regions A and B, appears only as an image in the source.]
The dimension of the Fourier spectrum vector x_1^f is 644.
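The spectrum computation and scan can be sketched as below. The actual regions A and B are defined in Figure 19 and are not reproduced here; the 5 × 5 low-frequency corner used in this sketch is a stand-in, as is the row/column indexing convention.

```python
import numpy as np

M, N = 46, 56                                  # image width and height from equation (18)
f = np.random.default_rng(2).random((N, M))    # rows indexed by y, columns by x

# Equation (18) is the standard 2-D DFT; F[v, u] corresponds to F(u, v)
F = np.fft.fft2(f)

# Scan a low-frequency rectangle and stack real and imaginary parts.
# The true scan regions A and B come from Figure 19; this corner is illustrative only.
region = F[:5, :5]
x1 = np.concatenate([region.real.ravel(), region.imag.ravel()])

assert x1.shape == (50,)                       # 2 * 5 * 5 scanned components
```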
Step 2) Extraction of the multi-block Fourier magnitude vector x_2^f
The multi-block Fourier magnitude vector is extracted from the Fourier magnitudes of partial images of the normalized facial image. Three types of images are used as partial images: (a) a whole image, (b) quarter images, and (c) one-sixteenth images.
(a) Whole image
The whole image f_1^0(x, y) is obtained by clipping the normalized image f(x, y) to a size of 44 × 56, removing the boundary columns on both sides:
f_1^0(x, y) = f(x+1, y)
(x = 0, 1, ..., 43; y = 0, 1, 2, ..., 55)   ...(20)
(b) Quarter images
The quarter images are obtained by dividing the whole image f_1^0(x, y) into four blocks f_k^1(x, y) (k = 1, 2, 3, 4):
f_k^1(x, y) = f_1^0(x + 22·s_k^1, y + 28·t_k^1)
(x = 0, 1, ..., 21; y = 0, 1, ..., 27)   ...(21)
where s_k^1 = (k−1) % 2 and t_k^1 = ⌊(k−1)/2⌋.
(c) One-sixteenth images
The one-sixteenth images are obtained by dividing the whole image f_1^0(x, y) into sixteen blocks f_k^2(x, y) (k = 1, 2, 3, ..., 16):
f_k^2(x, y) = f_1^0(x + 11·s_k^2, y + 14·t_k^2)
(x = 0, 1, ..., 10; y = 0, 1, ..., 13)   ...(22)
where s_k^2 = (k−1) % 4 and t_k^2 = ⌊(k−1)/4⌋.
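The offset arithmetic in equations (21) and (22) places block k (numbered from 1) in row-major order. A quick check of the index arithmetic (illustrative only):

```python
# Quarter-image offsets, equation (21): s = (k-1) % 2, t = (k-1) // 2
quarter_offsets = [((k - 1) % 2 * 22, (k - 1) // 2 * 28) for k in range(1, 5)]
assert quarter_offsets == [(0, 0), (22, 0), (0, 28), (22, 28)]

# One-sixteenth-image offsets, equation (22): s = (k-1) % 4, t = (k-1) // 4
sixteenth_offsets = [((k - 1) % 4 * 11, (k - 1) // 4 * 14) for k in range(1, 17)]
assert sixteenth_offsets[0] == (0, 0)      # k = 1: top-left block
assert sixteenth_offsets[3] == (33, 0)     # k = 4: rightmost block of the top row
assert sixteenth_offsets[15] == (33, 42)   # k = 16: bottom-right block
```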
The Fourier magnitudes |F_k^j(u, v)| are computed from these images as follows:
F_k^j(u, v) = Σ_{x=0}^{M_j−1} Σ_{y=0}^{N_j−1} f_k^j(x, y) exp(−2πi(xu/M_j + yv/N_j))   ...(23)
|F_k^j(u, v)| = sqrt( Re[F_k^j(u, v)]² + Im[F_k^j(u, v)]² )
where M_j is the width of each partial image, that is, M_0 = 44, M_1 = 22, M_2 = 11, and N_j is the height of each partial image, that is, N_0 = 56, N_1 = 28, N_2 = 14.
The multi-block Fourier magnitude vector is obtained by scanning the low-frequency region of each magnitude |F_k^j(u, v)| of 1) the whole image (k = 1), 2) the quarter images (k = 1, 2, 3, 4), and 3) the one-sixteenth images (k = 1, 2, ..., 16). The scanning regions are defined as shown in Figure 19.
The multi-block Fourier magnitude vector x_2^f is therefore expressed as:
[Equation (24), defining x_2^f as the concatenation of the scanned magnitude components, appears only as an image in the source.]
The dimension of x_2^f is 856.
Step 3) PCLDA projection and vector normalization
The Fourier spectrum vector x_1^f and the multi-block Fourier magnitude vector x_2^f are projected using the PCLDA basis matrices Ψ_1^f and Ψ_2^f, respectively, and normalized to the unit vectors y_1^f and y_2^f. The normalized vector y_k^f (k = 1, 2) is expressed as:
y_k^f = (Ψ_k^fᵀ x_k^f − m_k^f) / |Ψ_k^fᵀ x_k^f − m_k^f|   ...(25)
where the PCLDA basis matrix Ψ_k^f is the basis matrix obtained by linear discriminant analysis of the principal components of x_k^f, and the mean vector m_k^f is the mean of the projected vectors. Their values can be looked up in precomputed tables. The dimensions of y_1^f and y_2^f are 70 and 80, respectively.
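Equation (25) can be sketched as a projection followed by mean subtraction and unit-length scaling. The basis matrix and mean vector below are random stand-ins for the trained PCLDA tables, and the 644 → 70 shapes follow the dimensions stated in the text:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.random(644)           # e.g. the Fourier spectrum vector x_1^f
Psi = rng.random((644, 70))   # stand-in for the trained PCLDA basis matrix
m = rng.random(70)            # stand-in for the mean vector m_1^f

# Equation (25): project, subtract the mean, scale to unit length
p = Psi.T @ x - m
y = p / np.linalg.norm(p)

assert y.shape == (70,)
assert np.isclose(np.linalg.norm(y), 1.0)
```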
Step 4) LDA projection of the joint Fourier vector
The 150-dimensional joint Fourier vector y_3^f is formed by combining the normalized vectors y_1^f and y_2^f, and is projected using the LDA basis matrix Ψ_3^f. The projected vector z^f is expressed as:
z^f = Ψ_3^fᵀ y_3^f = Ψ_3^fᵀ [y_1^f; y_2^f]   ...(26)
Step 5) Quantization
Each element of z^f is clipped into the range of a 5-bit unsigned integer using the following formula:
[Equation (27), the clipping/quantization formula, appears only as an image in the source.]
The quantized elements are stored as FourierFeature. FourierFeature[0] represents the first quantized element w_0^f, and FourierFeature[numOfFourierFeature−1] corresponds to the numOfFourierFeature-th element w_{numOfFourierFeature−1}^f.
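Since the exact quantization formula survives only as an image in the source, the sketch below assumes a generic affine scaling followed by clipping to the 5-bit unsigned range [0, 31]; the scale and offset values are assumptions, only the 5-bit clipping behaviour is stated in the text:

```python
import numpy as np

def quantize_5bit(z, scale=16.0, offset=16.0):
    """Clip scale*z + offset into the 5-bit unsigned range 0..31.

    scale and offset are ASSUMED values standing in for equation (27),
    which is not recoverable from the source; only the clipping to a
    5-bit unsigned integer is.
    """
    w = np.floor(scale * np.asarray(z) + offset)
    return np.clip(w, 0, 31).astype(np.uint8)

w = quantize_5bit([-10.0, -0.5, 0.0, 0.5, 10.0])
assert w.min() >= 0 and w.max() <= 31
assert list(w) == [0, 8, 16, 24, 31]
```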
VIII. CentralFourierFeature
This element represents a facial feature based on cascaded LDA of Fourier characteristics of the central part of the normalized facial image. CentralFourierFeature is extracted in a manner similar to FourierFeature.
The central part g(x, y) is obtained by clipping the image f(x, y) to a 32 × 32 image starting from (7, 12):
g(x, y) = f(x+7, y+12)    (x = 0, 1, ..., 31; y = 0, 1, ..., 31)   ...(28)
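The 32 × 32 central crop of equation (28) in array form (rows indexed by y, columns by x; the indexing convention is an assumption of this sketch):

```python
import numpy as np

f = np.random.default_rng(4).random((56, 46))   # normalized image, rows = y, cols = x

# Equation (28): g(x, y) = f(x + 7, y + 12), a fixed 32 x 32 central crop
g = f[12:12 + 32, 7:7 + 32]

assert g.shape == (32, 32)
assert g[0, 0] == f[12, 7]   # g(0, 0) corresponds to f(7, 12)
```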
Step 1) Extraction of the central Fourier spectrum vector
The Fourier spectrum G(u, v) of g(x, y) is computed as:
G(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} g(x, y) exp(−2πi(xu/M + yv/N)),  (u = 0, ..., M−1; v = 0, ..., N−1)   ...(29)
where M = 32 and N = 32. Scanning the Fourier spectrum G(u, v) as defined in Figure 20 produces the 256-dimensional central Fourier spectrum vector x_1^g.
Step 2) Extraction of the multi-block central Fourier magnitude vector
The central multi-block Fourier magnitude vector x_2^g is extracted from the Fourier magnitudes of (a) the central part g_1^0(x, y), (b) the quarter images g_k^1(x, y) (k = 1, 2, 3, 4), and (c) the one-sixteenth images g_k^2(x, y) (k = 1, 2, 3, ..., 16).
(a) Central part
g_1^0(x, y) = g(x, y)    (x = 0, 1, ..., 31; y = 0, 1, ..., 31)   ...(30)
(b) Quarter images
g_k^1(x, y) = g(x + 16·s_k^1, y + 16·t_k^1)
(x = 0, 1, ..., 15; y = 0, 1, ..., 15)   ...(31)
where s_k^1 = (k−1) % 2 and t_k^1 = ⌊(k−1)/2⌋.
(c) One-sixteenth images
g_k^2(x, y) = g_1^0(x + 8·s_k^2, y + 8·t_k^2)
(x = 0, 1, ..., 7; y = 0, 1, ..., 7)   ...(32)
where s_k^2 = (k−1) % 4 and t_k^2 = ⌊(k−1)/4⌋.
The Fourier magnitude |G_k^j(u, v)| of each image is computed as:
G_k^j(u, v) = Σ_{x=0}^{M_j−1} Σ_{y=0}^{N_j−1} g_k^j(x, y) exp(−2πi(xu/M_j + yv/N_j))   ...(33)
|G_k^j(u, v)| = sqrt( Re[G_k^j(u, v)]² + Im[G_k^j(u, v)]² )
where M_0 = 32, M_1 = 16, M_2 = 8, N_0 = 32, N_1 = 16, and N_2 = 8. The multi-block central Fourier magnitude vector x_2^g is obtained by scanning each magnitude |G_k^j(u, v)| as defined in Figure 20.
The processing in steps 3 to 5 is the same as for FourierFeature; for example, the joint central Fourier vector y_3^g comprises the normalized vectors y_1^g and y_2^g. The basis matrices Ψ_1^g, Ψ_2^g, and Ψ_3^g and the mean vectors m_1^g and m_2^g for CentralFourierFeature are computed in advance and prepared in the form of lookup tables.
The size of CentralFourierFeature is expressed by numOfCentralFourierFeature.
The facial feature description data obtained in this way is very compact in description length yet exhibits very high recognition performance, and is therefore a representation effective for data storage and transmission.
Note that the present invention can be implemented by a computer-executable program. In the case of the fifth embodiment, the present invention can be implemented by describing the functions represented by steps 1 to 5 in Figure 17 as a computer-readable program and realizing those program functions on a computer.
In addition, this program can be recorded on a computer-readable recording medium.
When the example shown in Figure 17 is implemented as an apparatus, all or some of the functions described in the block diagram of Figure 21 can be realized. More specifically, all or some of the normalized facial image output unit 211, the Fourier spectrum vector extraction unit 212, the multi-block Fourier magnitude vector extraction unit 213, and the PCLDA projection/vector normalization unit 214 can be realized.
According to each of the embodiments described above, for each element vector, a feature vector effective for discrimination is extracted from the input pattern feature vector by discriminant analysis, and feature extraction is applied again by discriminant analysis to the obtained feature vector using the discriminant matrix. This suppresses the loss of feature amounts effective for discrimination while reducing the feature dimension, and transforms the vector in a way that is useful for effective feature extraction.
Even when the number of training samples required for discriminant analysis is limited and there is a large number of pattern feature amounts, each of the embodiments described above remains effective. That is, the feature dimension can be reduced without using principal component analysis while suppressing the loss of features effective for recognition.
As described above, as a feature vector transformation technique that compresses the feature dimension by extracting feature vectors effective for recognition from input feature vectors, the pattern feature extraction method, pattern feature extraction apparatus, and recording medium storing the corresponding program according to the present invention are all well suited to the field of pattern recognition.

Claims (28)

1. A pattern feature extraction method comprising:
extracting a plurality of input vectors from an input pattern;
projecting the input vectors using basis matrices respectively corresponding to the input vectors, to obtain projection vectors; and
projecting a joint vector obtained by combining the plurality of projection vectors, using a discriminant matrix corresponding to the joint vector, thereby extracting a feature of the input pattern.
2. The pattern feature extraction method as claimed in claim 1, characterized in that:
a transformation matrix equal to the combination of said basis matrices and said discriminant matrix is prepared in advance, and
the joint input vector obtained by combining the plurality of projection vectors is projected using said transformation matrix to extract the feature of said input pattern.
3. The pattern feature extraction method according to claim 1 or 2, characterized in that the basis matrix corresponding to an input vector serves as a discriminant matrix for that input vector.
4. The pattern feature extraction method according to claim 1 or 2, characterized in that the basis matrix corresponding to an input vector is a basis matrix specified by a transformation matrix and a discriminant matrix, the transformation matrix being used to extract the principal component vectors of the input vector, and the discriminant matrix corresponding to the principal components.
5. The pattern feature extraction method according to claim 1 or 2, characterized in that the step of extracting input vectors comprises: extracting vectors whose elements are a plurality of pixel values obtained from the sample points of each of a plurality of predetermined sample-point groups in an image serving as the input pattern.
6. The pattern feature extraction method according to claim 5, characterized in that the sample-point groups include groups whose sample points are pixels in partial images obtained as local regions of the image, thereby extracting features of the image.
7. The pattern feature extraction method according to claim 5, characterized in that the sample-point groups include groups whose sample points are pixels in reduced images obtained from the image, thereby extracting features of the image.
8. The pattern feature extraction method according to claim 1 or 2, characterized in that the step of extracting input vectors comprises: for each of a plurality of local regions of an image serving as the input pattern, extracting a feature amount computed from that local region as an input vector.
9. The pattern feature extraction method according to claim 1 or 2, characterized in that the step of extracting input vectors comprises: performing a Fourier transform on an image serving as the input pattern, extracting a Fourier spectrum vector from the Fourier spectrum of the image as an input vector, and extracting a Fourier magnitude vector from the Fourier magnitude spectrum of the image as an input vector, thereby extracting features of the image.
10. The pattern feature extraction method according to claim 9, characterized in that a plurality of partial images or a plurality of reduced images are extracted from the image, and Fourier spectrum vectors or Fourier magnitude vectors of the partial images or reduced images are extracted to extract features of the image.
11. The pattern feature extraction method as claimed in claim 1, characterized in that the step of extracting a plurality of input vectors further comprises the steps of:
dividing an input image using different division numbers to obtain a plurality of block images; and
extracting the Fourier magnitudes of the block images,
thereby extracting a plurality of input vectors from the input image.
12. The pattern feature extraction method according to claim 11, characterized by comprising the steps of:
scanning the Fourier magnitudes to extract a multi-block Fourier magnitude vector, and
projecting the multi-block Fourier magnitude vector using a basis matrix to obtain a projection vector.
13. The pattern feature extraction method according to claim 12, characterized by further comprising the step of:
normalizing the projection vector to obtain a normalized vector.
14. The pattern feature extraction method according to claim 12, characterized in that:
the basis matrix comprises a basis matrix specified by a transformation matrix and a discriminant matrix, the transformation matrix being used to extract the principal component vectors of the multi-block Fourier magnitude vector, and the discriminant matrix corresponding to the principal component vectors.
15. The pattern feature extraction method according to any one of claims 11 to 14, characterized in that the step of obtaining a plurality of block images obtains at least one of: an entire image having the whole input image as one block image; four block images obtained by dividing the whole input image into four blocks; and sixteen block images obtained by dividing the input image into sixteen blocks.
16. A pattern feature extraction method comprising:
extracting a plurality of input vectors from an input pattern;
projecting the input vectors using basis matrices respectively corresponding to the input vectors, to obtain projection vectors;
normalizing the projection vectors to obtain normalized vectors; and
projecting a joint vector obtained by combining the plurality of normalized vectors, using a discriminant matrix corresponding to the joint vector, thereby extracting a feature of the input pattern.
17. The pattern feature extraction method according to claim 16, characterized in that the basis matrix corresponding to an input vector serves as a discriminant matrix for that input vector.
18. The pattern feature extraction method according to claim 16, characterized in that the basis matrix corresponding to an input vector is a basis matrix specified by a transformation matrix and a discriminant matrix, the transformation matrix being used to extract the principal component vectors of the input vector, and the discriminant matrix corresponding to the principal components.
19. The pattern feature extraction method according to claim 16, characterized in that the step of extracting input vectors comprises: extracting vectors whose elements are a plurality of pixel values obtained from the sample points of each of a plurality of predetermined sample-point groups in an image serving as the input pattern.
20. The pattern feature extraction method according to claim 19, characterized in that the sample-point groups include groups whose sample points are pixels in partial images obtained as local regions of the image, thereby extracting features of the image.
21. The pattern feature extraction method according to claim 19, characterized in that the sample-point groups include groups whose sample points are pixels in reduced images obtained from the image, thereby extracting features of the image.
22. The pattern feature extraction method according to claim 16, characterized in that the step of extracting input vectors comprises: for each of a plurality of local regions of an image serving as the input pattern, extracting a feature amount computed from that local region as an input vector.
23. The pattern feature extraction method according to claim 16, characterized in that the step of extracting input vectors comprises: performing a Fourier transform on an image serving as the input pattern, extracting a Fourier spectrum vector from the Fourier spectrum of the image as an input vector, and extracting a Fourier magnitude vector from the Fourier magnitude spectrum of the image as an input vector, thereby extracting features of the image.
24. The pattern feature extraction method according to claim 23, characterized in that a plurality of partial images or a plurality of reduced images are extracted from the image, and Fourier spectrum vectors or Fourier magnitude vectors of the partial images or reduced images are extracted to extract features of the image.
25. A pattern feature extraction device comprising:
vector extraction means for extracting a plurality of input vectors from an input pattern,
basis matrix storage means for storing basis matrices respectively corresponding to the input vectors,
linear transformation means for projecting the input vectors using the basis matrices stored in said basis matrix storage means, to obtain projection vectors,
discriminant matrix storage means for storing a discriminant matrix corresponding to a joint vector obtained by combining the plurality of projection vectors obtained by said linear transformation means, and
second linear transformation means for projecting the joint vector obtained by combining the plurality of projection vectors, using the discriminant matrix stored in said discriminant matrix storage means, thereby extracting a feature of the input pattern.
26. The pattern feature extraction device as claimed in claim 25, characterized in that:
said basis matrix storage means and said discriminant matrix storage means are combined into transformation matrix storage means for storing a transformation matrix equal to the combination of said basis matrices and said discriminant matrix, and
said linear transformation means and said second linear transformation means are combined into transformation means for projecting, using said transformation matrix, the joint input vector obtained by combining the plurality of projection vectors, thereby extracting the feature of said input pattern from said input vectors using said transformation matrix.
27. A pattern feature extraction device comprising:
vector extraction means for extracting a plurality of input vectors from an input pattern,
basis matrix storage means for storing basis matrices respectively corresponding to the input vectors,
linear transformation means for projecting the input vectors using the basis matrices stored in said basis matrix storage means, to obtain projection vectors,
normalization means for normalizing the projection vectors to obtain normalized vectors,
discriminant matrix storage means for storing a discriminant matrix corresponding to a joint vector obtained by combining the plurality of normalized vectors obtained by said normalization means, and
second linear transformation means for projecting the joint vector obtained by combining the plurality of normalized vectors, using the discriminant matrix stored in said discriminant matrix storage means, thereby extracting a feature of the input pattern.
28. A pattern feature extraction method characterized by comprising the steps of:
obtaining a Fourier spectrum vector by computing the Fourier spectrum of an input normalized image using a predetermined computational expression,
extracting a multi-block Fourier magnitude vector from the Fourier magnitudes of partial images of the normalized image,
projecting the Fourier spectrum vector and the multi-block magnitude vector as feature vectors using basis matrices, to obtain respective normalized vectors,
combining the normalized vectors to obtain a joint Fourier vector and converting it into a projection vector using a second basis matrix, and
extracting a Fourier feature by quantizing the projection vector.
CNB038090325A 2002-07-16 2003-07-04 Pattern characteristic extraction method and device for the same Expired - Lifetime CN100421127C (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2002207022 2002-07-16
JP207022/2002 2002-07-16
JP300594/2002 2002-10-15
JP68916/2003 2003-03-13

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN2007101121408A Division CN101082955B (en) 2002-07-16 2003-07-04 Pattern characteristic extraction method and device for the same

Publications (2)

Publication Number Publication Date
CN1802666A CN1802666A (en) 2006-07-12
CN100421127C true CN100421127C (en) 2008-09-24

Family

ID=36811845

Family Applications (2)

Application Number Title Priority Date Filing Date
CN2007101121408A Expired - Lifetime CN101082955B (en) 2002-07-16 2003-07-04 Pattern characteristic extraction method and device for the same
CNB038090325A Expired - Lifetime CN100421127C (en) 2002-07-16 2003-07-04 Pattern characteristic extraction method and device for the same

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN2007101121408A Expired - Lifetime CN101082955B (en) 2002-07-16 2003-07-04 Pattern characteristic extraction method and device for the same

Country Status (1)

Country Link
CN (2) CN101082955B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105869235B (en) * 2015-01-20 2019-08-30 阿里巴巴集团控股有限公司 A kind of safety door inhibition method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07296169A (en) * 1994-04-20 1995-11-10 N T T Data Tsushin Kk Arithmetic system for feature transformed matrix for dimensional compression of feature vector for pattern recognition
JPH10171988A (en) * 1996-12-05 1998-06-26 Matsushita Electric Ind Co Ltd Pattern recognizing/collating device
CN1247615A (en) * 1997-02-14 2000-03-15 惠普公司 Method and appts. for recognizing patterns


Also Published As

Publication number Publication date
CN101082955B (en) 2012-11-28
CN1802666A (en) 2006-07-12
CN101082955A (en) 2007-12-05

Similar Documents

Publication Publication Date Title
EP1522962B1 (en) Pattern characteristic extraction method and device for the same
Vranic et al. 3D model retrieval.
US7630526B2 (en) Method and apparatus for face description and recognition
US7869657B2 (en) System and method for comparing images using an edit distance
Cao Singular value decomposition applied to digital image processing
KR100731937B1 (en) Face meta-data creation
Chang et al. Extracting multidimensional signal features for content-based visual query
US7164781B2 (en) Method and apparatus of recognizing face using 2nd-order independent component analysis (ICA)/principal component analysis (PCA)
US20040086185A1 (en) Method and system for multiple cue integration
JP4770932B2 (en) Pattern feature extraction method and apparatus
Alexandrov et al. Adaptive filtering and indexing for image databases
Kaur et al. Comparative study of facial expression recognition techniques
Suresh et al. Optimization and Deep Learning–Based Content Retrieval, Indexing, and Metric Learning Approach for Medical Images
Sufyanu et al. Feature extraction methods for face recognition
CN100421127C (en) Pattern characteristic extraction method and device for the same
Veinidis et al. On the retrieval of 3D mesh sequences of human actions
Wang et al. Discriminative feature projection for camera model identification of recompressed images
Eickeler Face database retrieval using pseudo 2D hidden Markov models
Mirajkar et al. Content based Image Retrieval using the Domain Knowledge Acquisition
JP2004038937A (en) Method and device for face description and recognition using high-order eigen-component
Fang et al. Optimization strategy of computer programming for mathematical algorithm of facial recognition model
Strobel et al. MMAP: Modified Maximum a Posteriori algorithm for image segmentation in large image/video databases
KR20040101221A (en) Method and apparatus for face description and recognition using high-order eigencomponents
Seales et al. Efficient content extraction in compressed images
CN113220916A (en) Image retrieval method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1090157

Country of ref document: HK

C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1090157

Country of ref document: HK

CX01 Expiry of patent term

Granted publication date: 20080924

CX01 Expiry of patent term