CN103714340A - Self-adaptation feature extracting method based on image partitioning - Google Patents

Info

Publication number: CN103714340A
Authority: CN (China)
Legal status: Granted
Application number: CN201410010605.9A
Other languages: Chinese (zh)
Other versions: CN103714340B (en)
Inventor
刘靳
靳洋
姬红兵
张文博
王海鹰
刘艳丽
葛倩倩
孙宽宏
Current Assignee: Xidian University
Original Assignee: Xidian University
Application filed by Xidian University
Priority to CN201410010605.9A (patent CN103714340B)
Publication of CN103714340A; application granted; publication of CN103714340B
Legal status: Active


Abstract

The invention provides an adaptive feature extraction method based on image partitioning. It mainly addresses the problem that existing PCA-based feature extraction methods require image vectorization, which leads to unsatisfactory recognition results after feature extraction. The method comprises the steps of: (1) inputting an image set and randomly dividing it into a training image set and a testing image set; (2) partitioning the images of the training image set into blocks to form training sub-block image sets; (3) computing the pixel gray-value variance sums of the training image set and of each training sub-block image set; (4) comparing these variance sums and obtaining an optimal feature projection matrix for each sub-block set; (5) extracting the image features of the training and testing image sets; and (6) recognizing the images of the testing image set to verify the feature extraction effect. Compared with the prior art, the method has the advantages of a high recognition rate and wide applicability, extracts image features effectively, and can be used for target identification.

Description

Adaptive feature extraction method based on image blocking
Technical field
The invention belongs to the field of computer technology, and further relates to the field of computer image information processing; specifically, to an adaptive feature extraction method based on image blocking. The present invention obtains sub-block images by partitioning each image into blocks, computes the pixel gray-value variance sum of the image set and of each sub-block image set, and compares their magnitudes; according to the comparison result, it adaptively applies two-dimensional principal component analysis (Two-Dimensional Principal Component Analysis, 2DPCA) or wavelet principal component analysis (Wavelet Principal Component Analysis, Wavelet PCA) for feature extraction. It effectively realizes feature extraction from images and provides reliable information for subsequent target identification.
Background technology
Image feature extraction, as the basis of image target recognition, is a key technique in automatic target recognition. In recent years, research on image feature extraction has made significant progress; in particular, algorithms based on principal component analysis (Principal Component Analysis, PCA) are widely used in the field of image feature extraction. However, such algorithms must vectorize the image pixel gray values, and vectorization usually discards the structural information of the image, so recognition results on complex images are unsatisfactory; moreover, the image vectors after vectorization can be of very high dimension, which increases the computational complexity of subsequent calculations. How to complete image feature extraction and recognition automatically, effectively and rapidly has always been a focus and difficulty of research at home and abroad.
The concept of PCA was proposed by K. Fukunaga; after Kirby and Sirovich generalized PCA to the image field, the method has been widely used in image feature extraction and recognition, image quality evaluation, image watermarking and other fields. The PCA-based feature extraction procedure is: first, vectorize the image; second, obtain the covariance matrix of the vectorized images; third, perform eigenvalue decomposition on the covariance matrix; finally, the unit eigenvectors corresponding to the eigenvalues form the projection matrix, and the result of projecting an image onto this matrix serves as the image's feature. The shortcomings of PCA-based feature extraction are: first, after vectorization the dimension of the image vector is generally very high, so the analysis runs into the small-sample-size problem and often consumes a great deal of time on high-dimensional vectors; second, PCA usually extracts global features of the image while ignoring its structural and local information, so the recognition effect is unsatisfactory.
The paper "Modular PCA and its application in face recognition" by Chen Fubing (Computer Engineering and Design, article number 1000-7024(2007)08-1889-04) first partitions the image into blocks and then applies PCA to the resulting sub-block images for feature extraction and discriminant analysis. The merit of this method is that it effectively exploits the structural information of the image and has obvious advantages on images with large variations in expression and illumination. Its deficiency is that, after blocking, it ignores the differences in variation tendency between the sub-block images and uniformly applies PCA, which is unfavorable for achieving a better recognition effect; and for images with low variation, the method does not have good applicability.
The patent application "Face identification method and system" of Shanghai Yi Yuan Telecom Technology Co., Ltd. (application number 201110424252.3, publication number CN103164689) first preprocesses the training and test images by wavelet transform, then extracts training-sample and test-sample features from the preprocessed images by PCA, and finally classifies the extracted features with a support vector machine (Support Vector Machine, SVM) algorithm to obtain the recognition result. Its shortcoming is that, although preprocessing with the wavelet transform raises the recognition rate, the wavelet transform works well only on images with a small degree of variation and preprocesses highly varying images unsatisfactorily; moreover, the method still vectorizes the images, destroying their structural information, which is unfavorable for achieving a better recognition effect.
Summary of the invention
The object of the invention is to address the problems of the existing PCA-based feature extraction methods by proposing an adaptive feature extraction method based on image blocking: partition each image into blocks, compare the pixel gray-value variance sum of the image set with that of each sub-block image set, and according to the comparison result adaptively extract the features of each sub-block image set with 2DPCA or Wavelet PCA, thereby achieving accurate extraction of image features and improving the recognition effect.
The key technique of the invention is as follows. First, the input image set is randomly divided into a training image set and a testing image set, and every image of the training image set is divided into N sub-block images; the i-th sub-block of every image forms the i-th training sub-block image set, i = 1, 2, ..., N. The pixel gray-value variance sum of the training image set and those of the N training sub-block image sets are computed and compared in turn. If the variance sum of a training sub-block image set is larger, or the two are equal, 2DPCA is used to obtain the optimal feature projection matrix of that sub-block set; if it is smaller, Wavelet PCA is used. The N sub-blocks of each of the M training images are projected onto the optimal feature projection matrices of the corresponding training sub-block image sets, yielding the features of all sub-blocks of all training images and completing the feature extraction of the training image set. For the testing image set, every image is likewise divided into N sub-blocks, and all sub-blocks are projected onto the optimal feature projection matrices of the corresponding training sub-block image sets, completing the feature extraction of the testing image set. The Euclidean distances between the features of the sub-blocks of each testing image and the features of the corresponding sub-blocks of every training image are then computed and normalized, and the N normalized distances belonging to the same training image are summed to form the similarity measure of the features. Finally, the similarity measures are judged with the nearest-neighbor method to identify the test images, completing the verification of the feature extraction method.
The concrete steps of the present invention are as follows:
(1) Input an image set and randomly divide it into a training image set and a testing image set
Suppose the input image set has K images, K = Class × Pic, where Class is the number of image classes in the input image set and Pic is the number of images of each class. The pixel gray values of the input images are read in matrix form. By the random division method, M images of the input image set are taken as the training image set, denoted subset_all, where M = Class × Pictrain and Pictrain is the number of images of each class used as training images; the remaining K − M images of the input image set form the testing image set.
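The random per-class split of step (1) can be sketched in Python; the function and variable names (`split_image_set`, `pic_train`) are illustrative, not from the patent, and a NumPy permutation stands in for ranking Pic uniformly distributed random numbers:

```python
import numpy as np

def split_image_set(labels, pic_train, rng=None):
    """Randomly pick pic_train images per class for training (step (1)).

    labels: class label of every image of the input image set, in order.
    Returns (training indices, testing indices).
    """
    rng = np.random.default_rng(rng)
    train_idx, test_idx = [], []
    for cls in sorted(set(labels)):
        idx = [k for k, lab in enumerate(labels) if lab == cls]
        # A random permutation stands in for ranking Pic uniform(0,1) numbers.
        order = rng.permutation(len(idx))
        chosen = {idx[o] for o in order[:pic_train]}
        train_idx.extend(sorted(chosen))
        test_idx.extend(k for k in idx if k not in chosen)
    return train_idx, test_idx

# Toy case: Class = 2, Pic = 4, Pictrain = 2, so M = 4 training images.
labs = [0, 0, 0, 0, 1, 1, 1, 1]
tr, te = split_image_set(labs, pic_train=2, rng=0)
```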
(2) Partition the training image set into blocks to form the training sub-block image sets
The images of the training image set are of size m × n, where m is the number of rows and n the number of columns of the image matrix. Every image of the training image set is divided into N rectangular sub-block images, N = R × Q, where R is the number of row blocks and Q the number of column blocks. The training sub-block image set formed by the i-th sub-blocks of all images is denoted subset_i, i = 1, 2, ..., N; each training sub-block image set contains M images, each of size (m/R) × (n/Q), that is, with (m × n)/N pixels. Below, the N training sub-block image sets are referred to collectively as the training sub-block image sets.
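The blocking of step (2) can be sketched as follows, assuming, as the patent's setup does, that R divides m and Q divides n; the name `block_image` is illustrative:

```python
import numpy as np

def block_image(img, R, Q):
    """Split an m-by-n image into N = R*Q rectangular sub-blocks (step (2)).

    Returns the sub-blocks in row-major order, each of size (m/R) x (n/Q).
    """
    m, n = img.shape
    bm, bn = m // R, n // Q
    return [img[r * bm:(r + 1) * bm, c * bn:(c + 1) * bn]
            for r in range(R) for c in range(Q)]

# An 8x8 toy image split 4x4, mirroring the 64x64 ORL images split into 16.
img = np.arange(64, dtype=float).reshape(8, 8)
blocks = block_image(img, R=4, Q=4)
```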
(3) Compute the pixel gray-value variance sums of the training image set and of the training sub-block image sets
For subset_all, compute the variance of the gray values of the pixels at the same position across all images, obtaining the variances of the m × n pixel positions, and sum them to give the pixel gray-value variance sum σ_all of the training image set. For the i-th training sub-block image set subset_i, compute the variance of the gray values of the pixels at the same position across all images, obtaining the variances of the (m × n)/N pixel positions, and sum them to give the pixel gray-value variance sum σ_i of the training sub-block image set.
(4) Compare the pixel gray-value variance sum σ_all of the training image set with the pixel gray-value variance sum σ_i of each training sub-block image set, and obtain the optimal feature projection matrix of each training sub-block image set
If σ_i < σ_all, apply the Wavelet PCA transform to the images of the i-th training sub-block image set to obtain the optimal feature projection matrix W_i; if σ_i ≥ σ_all, apply the 2DPCA transform to the images of the i-th training sub-block image set to obtain the optimal feature projection matrix W_i.
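Steps (3) and (4) can be sketched together in Python; `variance_sum` follows the sum-of-squared-deviations formulas given in the detailed description, and all function and variable names are illustrative:

```python
import numpy as np

def variance_sum(stack):
    """Pixel gray-value variance sum of step (3): for every pixel position,
    sum the squared deviations from its mean across the M images."""
    stack = np.asarray(stack, dtype=float)
    return float(((stack - stack.mean(axis=0)) ** 2).sum())

def choose_methods(train_images, R, Q):
    """Step (4)'s adaptive rule: a sub-block set whose variance sum is below
    that of the whole image set gets Wavelet PCA, the others get 2DPCA."""
    stack = np.asarray(train_images, dtype=float)   # shape (M, m, n)
    sigma_all = variance_sum(stack)
    _, m, n = stack.shape
    bm, bn = m // R, n // Q
    choices = []
    for r in range(R):
        for c in range(Q):
            sub = stack[:, r * bm:(r + 1) * bm, c * bn:(c + 1) * bn]
            choices.append("wavelet_pca" if variance_sum(sub) < sigma_all
                           else "2dpca")
    return choices

# Two 4x4 images that differ only in the top-left 2x2 block, so all the
# variation concentrates in the first sub-block set.
a = np.zeros((4, 4))
b = np.zeros((4, 4)); b[:2, :2] = 8.0
methods = choose_methods([a, b], R=2, Q=2)
```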
(5) Extract the image features
(5.1) Project the pixel gray-value matrix of the i-th sub-block of the j-th training image onto the optimal feature projection matrix W_i of subset_i, and take the projected matrix as the feature of that sub-block, j = 1, 2, ..., M;
(5.2) Divide each image of the testing image set into N rectangular sub-blocks by the method of step (2), project the pixel gray-value matrix of the i-th sub-block of each image onto the optimal feature projection matrix W_i of the i-th training sub-block image set determined in step (4), and take the projected matrix as the feature of the i-th sub-block of that image.
(6) Verify the feature extraction effect by recognition
(6.1) Compute the Euclidean distance between the feature of the i-th sub-block of each testing image and the feature of the i-th sub-block of the j-th training image, and normalize the results, denoted s_ij, where j = 1, 2, ..., M and i = 1, 2, ..., N;
(6.2) For each testing image, sum the N normalized Euclidean distances between its N sub-block features and the N sub-block features of the j-th training image as the similarity measure S_j = Σ_{i=1}^{N} s_ij;
(6.3) Recognize with the nearest-neighbor method: sort the M similarity measures S_j, j = 1, 2, ..., M, between each testing image and the M training images; the current testing image belongs to the same class as the j-th training image for which S_j is minimal;
(6.4) Complete the recognition of every image of the testing image set according to step (6.3); the ratio of the number of correctly recognized testing images to the total number of testing images is the recognition result, which serves as the criterion of the feature extraction effect and is output.
Compared with the prior art, the present invention has the following advantages:
First, the invention applies 2DPCA and Wavelet PCA adaptively after image blocking, overcoming the computational complexity caused by the excessive matrix dimension of conventional PCA after image vectorization, and exploits the structural information of the image, improving the feature extraction effect.
Second, after blocking the image, the invention compares the pixel gray-value variance sum of the whole image set with the local variance sums and adaptively selects the feature extraction method for each sub-block image set according to the comparison, improving the feature extraction and hence the recognition effect of the image.
Third, the invention applies 2DPCA and Wavelet PCA to the sub-block images according to the different situations, compensating for the limitation of any single method that handles only one kind of image well, and widening the applicability of the algorithm.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 shows the face images of one person in the ORL face database under the first division, where Fig. 2(a) shows all images of this person used for the training image set in the first of the 10 runs, and Fig. 2(b) shows all images of this person used for the testing image set in that run;
Fig. 3 shows the simulation results of the method of the invention and of the PCA-based feature extraction method on the ORL face database under the first division;
Fig. 4 shows the face images of one person in the ORL face database under the second division, where Fig. 4(a) shows all images of this person used for the training image set in the first of the 10 runs, and Fig. 4(b) shows all images of this person used for the testing image set in that run;
Fig. 5 shows the simulation results of the method of the invention and of the PCA-based feature extraction method on the ORL face database under the second division;
Fig. 6 shows the images of one object in the COIL-20 Columbia image database under the first division, where Fig. 6(a) shows all images of this object used for the training image set in the first of the 10 runs, and Fig. 6(b) shows all images of this object used for the testing image set in that run;
Fig. 7 shows the simulation results of the method of the invention and of the PCA-based feature extraction method on the COIL-20 database under the first division;
Fig. 8 shows the images of one object in the COIL-20 Columbia image database under the second division, where Fig. 8(a) shows all images of this object used for the training image set in the first of the 10 runs, and Fig. 8(b) shows all images of this object used for the testing image set in that run;
Fig. 9 shows the simulation results of the method of the invention and of the PCA-based feature extraction method on the COIL-20 database under the second division;
Fig. 10 shows the images of one object in the infrared image database under the first division, where Fig. 10(a) shows all images of this object used for the training image set in the first of the 10 runs, and Fig. 10(b) shows all images of this object used for the testing image set in that run;
Fig. 11 shows the simulation results of the method of the invention and of the PCA-based feature extraction method on the infrared image database under the first division;
Fig. 12 shows the images of one object in the infrared image database under the second division, where Fig. 12(a) shows all images of this object used for the training image set in the first of the 10 runs, and Fig. 12(b) shows all images of this object used for the testing image set in that run;
Fig. 13 shows the simulation results of the method of the invention and of the PCA-based feature extraction method on the infrared image database under the second division.
Embodiment
The steps of the specific embodiment of the invention are described in further detail below in conjunction with Fig. 1.
Step 1. The input image set has K images, K = Class × Pic, where Class is the number of image classes in the input image set and Pic is the number of images of each class. In the embodiment of the invention, the images of the input image set are loaded under a Windows XP system, and the pixel gray values of each input image are read in matrix form. For each class of the input image set, by the random division method, first generate Pic random numbers uniformly distributed on (0, 1) and label them with sequence numbers, then sort the random numbers in ascending order; after sorting, the original sequence numbers of the random numbers form Pic non-repeating integers. The images of each class corresponding to the first Pictrain of these integers are taken as training images, giving M training images in total, denoted subset_all, where M = Class × Pictrain; the images corresponding to the remaining integers, K − M in total, form the testing image set.
Step 2. Divide every image of the training image set into N rectangular sub-block images, N = R × Q, where R is the number of row blocks and Q the number of column blocks; the training sub-block image set formed by the i-th sub-blocks of all images is denoted subset_i, i = 1, 2, ..., N. The images of the training image set have m × n pixels, m being the number of rows and n the number of columns of the image matrix; each training sub-block image is of size (m/R) × (n/Q) and has (m × n)/N pixels, and each training sub-block image set contains M images. Below, the N training sub-block image sets are referred to collectively as the training sub-block image sets.
Step 3. Compute the pixel gray-value variance sums of subset_all and subset_i, denoted σ_all and σ_i respectively:

σ_all = Σ_{c_all=1}^{m×n} Σ_{j=1}^{M} (p_{c_all}^j − p̄_{c_all})²

where p_{c_all}^j is the gray value of the c_all-th pixel of the j-th image, 1 ≤ c_all ≤ m × n, and p̄_{c_all} is the mean gray value of the c_all-th pixel, that is:

p̄_{c_all} = (1/M) Σ_{j=1}^{M} p_{c_all}^j

σ_i = Σ_{c_i=1}^{(m×n)/N} Σ_{j=1}^{M} (p_{c_i}^j − p̄_{c_i})²

where p_{c_i}^j is the gray value of the c_i-th pixel of the j-th image of the i-th training sub-block image set, 1 ≤ c_i ≤ (m × n)/N, and p̄_{c_i} is the mean gray value of the c_i-th pixel, that is:

p̄_{c_i} = (1/M) Σ_{j=1}^{M} p_{c_i}^j
Step 4. Compare the pixel gray-value variance sum σ_all of the training image set with the pixel gray-value variance sum σ_i of each training sub-block image set, and obtain the optimal feature projection matrix of each training sub-block image set:
(4.1) If σ_i < σ_all, use Wavelet PCA on the images of subset_i to obtain the optimal feature projection matrix W_i;
(4.1.1) Apply the two-dimensional discrete wavelet transform to the training sub-block images.
All images of the i-th training sub-block image set are processed with the two-dimensional discrete wavelet transform. An image is a two-dimensional matrix; each level of the two-dimensional discrete wavelet decomposition splits the image into four sub-bands. Since the low-frequency band after decomposition carries most of the image's information, only the low-frequency band is computed here, by the formula:

f(x, y) = Σ_{k,l} c_{k,l} φ_{k,l}(x, y)

where f(x, y) is the low-frequency image of a sub-block image of the i-th training sub-block image set after the two-dimensional discrete wavelet transform, φ_{k,l}(x, y) is the scaling function of the two-dimensional discrete wavelet transform, k and l are the horizontal and vertical translation indices of the scaling function, and c_{k,l} are the low-frequency coefficients.
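A minimal sketch of the low-band computation of step (4.1.1), assuming the Haar wavelet (the patent does not name a particular wavelet) and even sub-block dimensions:

```python
import numpy as np

def haar_low_band(img):
    """One level of the 2-D discrete wavelet transform, keeping only the
    low-frequency (approximation) band, as in step (4.1.1).

    The Haar wavelet is an illustrative assumption; even dimensions assumed.
    """
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]  # the four polyphase components of 2x2 neighbourhoods
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a + b + c + d) / 2.0  # orthonormal Haar approximation coefficients

low = haar_low_band(np.ones((4, 6)))  # a constant image stays constant
```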
(4.1.2) Apply the PCA transform to the images after the two-dimensional discrete wavelet transform, and obtain the optimal feature projection matrix.
(4.1.2.1) Vectorize the pixel gray-value matrices of all images of the i-th training sub-block image set after the two-dimensional discrete wavelet transform. Let O_ij denote the wavelet-transformed image of the i-th sub-block of the j-th image of the training image set; concatenate the columns of the pixel gray-value matrix of O_ij into a vector ε_ij, and take ε_ij as the j-th column of P_i, the vectorized matrix of the i-th training sub-block image set.
(4.1.2.2) Compute the covariance matrix C_i of the vectorized matrix P_i of the i-th training sub-block image set. Since the covariance matrix C_i is symmetric and positive semi-definite, it admits an eigenvalue decomposition:

C_i η_i = λ_i η_i

where λ_i and η_i denote an eigenvalue of C_i and the corresponding unit eigenvector. The unit eigenvectors form the columns of the optimal feature projection matrix of the i-th training sub-block image set, W_i = (η_i1, η_i2, ..., η_it, ..., η_iopt), where η_it is the unit eigenvector corresponding to the t-th eigenvalue after sorting the eigenvalues in descending order, 1 ≤ t ≤ opt, opt being the number of columns of the optimal feature projection matrix, opt ≤ M.
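Step (4.1.2) can be sketched as follows; `pca_projection` takes the already-vectorized matrix P_i and returns W_i via eigendecomposition of the sample covariance matrix (function names are illustrative):

```python
import numpy as np

def pca_projection(P, opt):
    """Optimal feature projection matrix of step (4.1.2).

    P: d x M matrix whose columns are the vectorized, wavelet-processed
    sub-block images (the matrix P_i of the text). Returns the d x opt
    matrix W_i whose columns are the unit eigenvectors of the sample
    covariance matrix for the opt largest eigenvalues.
    """
    P = np.asarray(P, dtype=float)
    centered = P - P.mean(axis=1, keepdims=True)
    C = centered @ centered.T / P.shape[1]   # covariance matrix C_i
    lam, eta = np.linalg.eigh(C)             # symmetric: real eigenpairs
    order = np.argsort(lam)[::-1]            # descending eigenvalues
    return eta[:, order[:opt]]

rng = np.random.default_rng(0)
P = rng.standard_normal((6, 10))             # 10 vectorized 6-pixel sub-blocks
W = pca_projection(P, opt=3)
```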
(4.2) If σ_i ≥ σ_all, use 2DPCA on the images of the training sub-block image set subset_i to obtain the optimal feature projection matrix W_i.
(4.2.1) 2DPCA is a feature extraction method based on two-dimensional matrices. The covariance matrix of the i-th training sub-block image set is:

C_i = (1/M) Σ_{j=1}^{M} (A_ij − Ā_i)^T (A_ij − Ā_i)

where A_ij is the pixel gray-value matrix of the j-th image of the i-th training sub-block image set, and Ā_i is the mean pixel gray-value matrix of all images of the i-th training sub-block image set:

Ā_i = (1/M) Σ_{j=1}^{M} A_ij

(4.2.2) The optimal projection criterion of 2DPCA is:

J(X) = (X^T C_i X)_max

where X is a column vector of the optimal feature projection matrix of the i-th training sub-block image set; the columns of the optimal feature projection matrix are the vectors X that satisfy the optimal projection criterion J(X).
Under the optimal projection criterion, the columns of the optimal feature projection matrix are the unit eigenvectors corresponding to the eigenvalues of the covariance matrix C_i. Perform the eigenvalue decomposition:

C_i η_i = λ_i η_i

where λ_i and η_i denote an eigenvalue of C_i and the corresponding unit eigenvector; the optimal feature projection matrix of the training sub-block image set is W_i = (η_i1, η_i2, ..., η_iopt).
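A sketch of step (4.2), computing the 2DPCA projection matrix from a stack of sub-block gray-value matrices; names are illustrative:

```python
import numpy as np

def twodpca_projection(images, opt):
    """2DPCA optimal feature projection matrix of step (4.2).

    images: stack of shape (M, h, w) of sub-block gray-value matrices A_ij.
    C_i = (1/M) * sum_j (A_ij - Abar)^T (A_ij - Abar) is w x w; W_i holds
    the unit eigenvectors of its opt largest eigenvalues.
    """
    A = np.asarray(images, dtype=float)
    Abar = A.mean(axis=0)                    # mean gray-value matrix
    C = sum((a - Abar).T @ (a - Abar) for a in A) / len(A)
    lam, eta = np.linalg.eigh(C)
    order = np.argsort(lam)[::-1]
    W = eta[:, order[:opt]]
    feats = A @ W                            # one h x opt feature matrix per image
    return W, feats

rng = np.random.default_rng(1)
imgs = rng.standard_normal((5, 4, 6))        # M = 5 sub-blocks of size 4x6
W, feats = twodpca_projection(imgs, opt=2)
```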
Step 5. Extract the image features.
(5.1) For the training image set, take 10% to 100% of the number of images of the training sub-block image set, in steps of 10%, as the number of columns of the optimal feature projection matrix W_i of subset_i obtained in Step 4. Project the pixel gray-value matrix of the i-th sub-block of the j-th training image onto W_i and take the projected matrix as the feature of that sub-block, denoted I_ij after extraction. When the optimal feature projection matrix of the i-th training sub-block image set was obtained with Wavelet PCA, the i-th sub-block matrix of the training image is vectorized and then multiplied with W_i to extract the feature; when it was obtained with 2DPCA, the i-th sub-block matrix of the training image is multiplied with W_i directly to extract the feature.
(5.2) For the testing image set, divide each image into N rectangular sub-blocks by the method of Step 2, and likewise take 10% to 100% of the number of images of the training sub-block image set, in steps of 10%, as the number of columns of W_i. Project the pixel gray-value matrix of the i-th sub-block of each testing image onto the optimal feature projection matrix W_i of the i-th training sub-block image set determined in Step 4, and take the projected matrix as the feature of the i-th sub-block of that image, denoted V_i after extraction. When W_i was obtained with Wavelet PCA, the i-th sub-block matrix of the testing image is vectorized and then multiplied with W_i to extract the feature; when W_i was obtained with 2DPCA, the sub-block matrix is multiplied with W_i directly to extract the feature.
Step 6. Verify the feature extraction effect by recognition.
(6.1) Compute the Euclidean distance between the feature of the i-th sub-block of each testing image and the feature of the i-th sub-block of the j-th training image, and normalize the results, denoted s_ij, where j = 1, 2, ..., M and i = 1, 2, ..., N. The Euclidean distance is computed as:

d_ij = ||I_ij − V_i||

where d_ij is the Euclidean distance between the feature of the i-th sub-block of the testing image and the feature of the i-th sub-block of the j-th training image, I_ij is the feature of the i-th sub-block of the j-th training image, and V_i is the feature of the i-th sub-block of the testing image.
s_ij denotes the normalized d_ij, that is:

s_ij = d_ij / Σ_{j=1}^{M} d_ij
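Step (6.1) can be sketched as follows (illustrative names):

```python
import numpy as np

def normalized_distances(test_feat, train_feats):
    """Step (6.1): Euclidean distances d_ij between one test sub-block
    feature and the corresponding sub-block feature of every training
    image, normalized so that they sum to 1 (the s_ij of the text)."""
    d = np.array([np.linalg.norm(test_feat - tf) for tf in train_feats])
    return d / d.sum()

v = np.array([0.0, 0.0])                      # a toy test sub-block feature
train = [np.array([1.0, 0.0]), np.array([3.0, 0.0])]
s = normalized_distances(v, train)            # distances 1 and 3
```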
(6.2) Sum the N normalized Euclidean distances between the N sub-block features of each testing image and the corresponding N sub-block features of the j-th training image to obtain the similarity measure:

S_j = Σ_{i=1}^{N} s_ij

(6.3) Recognize with the nearest-neighbor method: sort the M similarity measures S_j, j = 1, 2, ..., M, between each testing image and the M training images; the current testing image belongs to the same class as the j-th training image for which S_j is minimal.
(6.4) Complete the recognition of every image of the testing image set according to step (6.3). The ratio of the number of correctly recognized testing images to the total number of testing images is the recognition result, which serves as the criterion of the feature extraction effect and is output.
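Steps (6.3) and (6.4) can be sketched as follows (illustrative names):

```python
import numpy as np

def recognize(similarities, train_labels):
    """Step (6.3): nearest-neighbour decision; the test image takes the
    class of the training image with the smallest similarity measure S_j."""
    return train_labels[int(np.argmin(similarities))]

def recognition_rate(all_S, train_labels, test_labels):
    """Step (6.4): fraction of correctly identified testing images.
    all_S holds one row of S_j values per testing image."""
    preds = [recognize(row, train_labels) for row in all_S]
    return sum(p == t for p, t in zip(preds, test_labels)) / len(test_labels)

# Two testing images against two training images (toy similarity measures).
S = np.array([[0.2, 0.9],
              [0.8, 0.1]])
rate = recognition_rate(S, train_labels=[0, 1], test_labels=[0, 1])
```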
The simulation effects of the present invention are further described below in conjunction with Figs. 2 to 13.
1. Simulation conditions:
The simulation was carried out with Matlab 7.0a under a Windows XP system on a Core(TM)2 1.86 GHz CPU with 1 GB of memory.
2. Simulation content:
The ORL face database, the COIL-20 Columbia image database and an infrared image database were each simulated with the method of the invention and with the existing PCA-based feature extraction method, and the recognition results of the two methods were compared.
3. the simulation experiment result:
3.1 Simulation results on the ORL face database
The method of the present invention and the existing PCA-based feature extraction method were each used to perform feature extraction and recognition on the ORL face database. The ORL face database consists of face images taken at the Olivetti laboratory in Cambridge, UK, between April 1992 and April 1994, covering 40 subjects of different ages, genders and ethnicities. Each subject has 10 images, for a total of 400 grayscale images with black backgrounds; facial expression and details vary across the images. The image size used here is 64 x 64. Each image is divided into 16 sub-blocks, with 4 block rows and 4 block columns, so each sub-block image is 16 x 16. Each experiment was run independently 10 times and the recognition results were averaged.
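The 4 x 4 partition used in these experiments can be sketched as follows. This is an illustrative numpy sketch; `partition` is a hypothetical helper name, not part of the patent.

```python
import numpy as np

def partition(img, R, Q):
    """Split an m x n image into R*Q equal rectangular sub-blocks.

    Returns a list of R*Q arrays, each of shape (m//R, n//Q),
    in row-major block order.
    """
    m, n = img.shape
    bh, bw = m // R, n // Q
    return [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(R) for c in range(Q)]

# a 64x64 image split as in the ORL experiment: 16 sub-blocks of 16x16
blocks = partition(np.arange(64 * 64).reshape(64, 64), 4, 4)
```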
3.1.1 Simulation results for the first division of the ORL face database
The ORL face database is taken as the input image set, with the 10 images of the same face treated as one class. From each class, 5 images are drawn at random as training image set images and the remaining 5 serve as test image set images, so the training image set contains 200 images and the test image set contains 200 images.
Fig. 2 shows the face images of one person in the ORL face database: Fig. 2(a) shows all images of this person used for the training image set in the first of the 10 runs, and Fig. 2(b) shows all images of this person used for the test image set in the same run. As can be seen, the face images in the database vary in expression, face orientation, illumination and so on. Fig. 3 is the recognition comparison diagram of the method of the present invention and the PCA-based feature extraction method; the quality of the recognition performance characterizes the quality of the feature extraction method. The horizontal axis is the percentage of the number of columns of the feature optimal projection matrix used in feature extraction relative to the total number of training images, referred to as the feature percentage for short; the vertical axis is the recognition result, i.e., the ratio of the number of correctly recognized test faces to the total number of test faces. As can be seen from Fig. 3, the recognition result of the method of the present invention is clearly better than that of the PCA-based feature extraction method; the feature extraction method of the present invention is therefore superior to the PCA-based feature extraction method.
3.1.2 Simulation results for the second division of the ORL face database
The ORL face database is taken as the input image set, with the 10 images of the same face treated as one class. From each class, 3 images are drawn at random as training image set images and the remaining 7 serve as test image set images, so the training image set contains 120 images and the test image set contains 280 images.
Fig. 4 shows the face images of one person in the ORL face database: Fig. 4(a) shows all images of this person used for the training image set in the first of the 10 runs, and Fig. 4(b) shows all images of this person used for the test image set in the same run. Fig. 5 is the recognition comparison diagram of the method of the present invention and the PCA-based feature extraction method; the horizontal axis is the feature percentage and the vertical axis is the recognition result. As can be seen from Fig. 5, even with the smaller number of training samples, the recognition result of the method of the present invention is better than that of the PCA-based feature extraction method; hence, the feature extraction effect of the present method also surpasses that of the PCA-based method when the training image set is small.
From the recognition results under the two divisions of the ORL face database in Section 3.1, it can be concluded that the method of the present invention extracts features effectively, is well suited to face recognition on the ORL face database, and still achieves good recognition performance when the training set is small.
3.2 Simulation results on the COIL-20 Columbia image database
The method of the present invention and the existing PCA-based feature extraction method were each used to perform feature extraction and recognition on the COIL-20 database (Columbia University Image Library). It contains 20 objects; each object is rotated through 360 degrees in the horizontal plane with a photograph taken every 5 degrees, so each object has 72 images. Here, 24 images per object, taken at 15-degree intervals, are selected, with size 64 x 64. Each image is divided into 16 sub-blocks, with 4 block rows and 4 block columns, so each sub-block image is 16 x 16. Each experiment was run independently 10 times and the recognition results were averaged.
3.2.1 Simulation results for the first division of the COIL-20 Columbia image database
The COIL-20 Columbia image database is taken as the input image set, with the 24 images of the same object treated as one class. From each class, 12 images are drawn at random as training image set images and the remaining 12 serve as test image set images, so the training image set contains 240 images and the test image set contains 240 images.
Fig. 6 shows the images of one object in the COIL-20 database: Fig. 6(a) shows all images of this object used for the training image set in the first of the 10 runs, and Fig. 6(b) shows all images of this object used for the test image set in the same run. As can be seen, the illumination varies little, but because every object is photographed through a full 360-degree rotation, the apparent shape and size of the object change markedly. Fig. 7 is the recognition comparison diagram of the method of the present invention and the PCA-based feature extraction method; the horizontal axis is the feature percentage and the vertical axis is the recognition result. As can be seen from Fig. 7, for most values of the feature percentage the recognition result of the method of the present invention is better than that of the PCA-based feature extraction method; the feature extraction method of the present invention is therefore superior to the PCA-based feature extraction method.
3.2.2 Simulation results for the second division of the COIL-20 Columbia image database
The COIL-20 Columbia image database is taken as the input image set, with the 24 images of the same object treated as one class. From each class, 3 images are drawn at random as training image set images and the remaining 21 serve as test image set images, so the training image set contains 60 images and the test image set contains 420 images.
Fig. 8 shows the images of one object in the COIL-20 database: Fig. 8(a) shows all images of this object used for the training image set in the first of the 10 runs, and Fig. 8(b) shows all images of this object used for the test image set in the same run. Fig. 9 is the recognition comparison diagram of the method of the present invention and the PCA-based feature extraction method; the horizontal axis is the feature percentage and the vertical axis is the recognition result. As can be seen from Fig. 9, with this small training image set the recognition result of the method of the present invention is clearly higher than that of the PCA-based feature extraction method under the same conditions; hence, for most values of the feature percentage, the feature extraction effect of the present method also surpasses that of the PCA-based method when the number of training samples is small.
From the recognition results under the two divisions of the COIL-20 Columbia image database in Section 3.2, it can be concluded that the method of the present invention effectively extracts features of the COIL-20 objects, and that when the extracted features are used for recognition, good recognition performance is obtained both when the training and test sets are of comparable size and in the small-sample case with few training images.
3.3 Simulation results on the infrared image database
The method of the present invention and the existing PCA-based feature extraction method were each used to perform feature extraction and recognition on an infrared image database. The database contains 9 objects, each rotated through 360 degrees in the horizontal plane; 36 images per object are selected here, with size 100 x 200. Each image is divided into 8 sub-blocks, with 2 block rows and 4 block columns, so each sub-block image is 50 x 50. Each experiment was run independently 10 times and the recognition results were averaged.
3.3.1 Simulation results for the first division of the infrared image database
The infrared image database is taken as the input image set, with the 36 images of the same object treated as one class. From each class, 18 images are drawn at random as training image set images and the remaining 18 serve as test image set images, so the training image set contains 162 images and the test image set contains 162 images.
Fig. 10 shows the images of one object in the infrared image database: Fig. 10(a) shows all images of this object used for the training image set in the first of the 10 runs, and Fig. 10(b) shows all images of this object used for the test image set in the same run. As can be seen, the infrared images have the following characteristics: the contrast between target and background is low, and the apparent shape and size change markedly as the object rotates. Fig. 11 is the recognition comparison diagram of the method of the present invention and the PCA-based feature extraction method; the horizontal axis is the feature percentage and the vertical axis is the recognition result. As can be seen from Fig. 11, the recognition result of the method of the present invention is better than that of the PCA-based feature extraction method; the feature extraction method of the present invention is therefore superior to the PCA-based feature extraction method.
3.3.2 Simulation results for the second division of the infrared image database
The infrared image database is taken as the input image set, with the 36 images of the same object treated as one class. From each class, 12 images are drawn at random as training image set images and the remaining 24 serve as test image set images, so the training image set contains 108 images and the test image set contains 216 images.
Fig. 12 shows the images of one object in the infrared image database: Fig. 12(a) shows all images of this object used for the training image set in the first of the 10 runs, and Fig. 12(b) shows all images of this object used for the test image set in the same run. Fig. 13 is the recognition comparison diagram of the method of the present invention and the PCA-based feature extraction method; the horizontal axis is the feature percentage and the vertical axis is the recognition result. As can be seen from Fig. 13, with the smaller infrared training set the recognition result of the method of the present invention is higher than that of the PCA-based feature extraction method under the same conditions; the feature extraction method of the present invention is therefore superior to the PCA-based feature extraction method.
From the recognition results under the two divisions of the infrared image database in Section 3.3, it can be concluded that the method of the present invention is effectively suited to feature extraction for the infrared targets of the infrared image database, and that when the extracted features are used for recognition, good recognition performance is obtained both when the training and test sets are of comparable size and in the small-sample case with few training images.
From the above simulation results it can be concluded that the method of the present invention performs feature extraction well on face images, object images and infrared images, effectively improves the recognition performance, and has good applicability.

Claims (10)

1. A self-adaptive feature extraction method based on image partitioning, comprising the following concrete steps:
(1) Input an image set and randomly divide it into a training image set and a test image set.
Suppose the input image set contains K images, K = Class x Pic, where Class is the number of image classes in the input image set and Pic is the number of images per class. The pixel gray values of the input images are read in matrix form. According to the random division method, M images of the input image set are taken as the training image set, denoted subset_all, where M = Class x Pictrain and Pictrain is the number of images per class of the input image set used as training image set images; the remaining K - M images form the test image set.
(2) Block the training image set to form the training sub-block image sets.
The size of an image in the training image set is m x n, where m is the number of rows and n the number of columns of the image matrix. Each image of the training image set is divided into N rectangular sub-block images, N = R x Q, where R is the number of row blocks and Q the number of column blocks. The i-th sub-block images of the N sub-block images of every image form the i-th training sub-block image set, denoted subset_i, i = 1, 2, ..., N. Each training sub-block image set contains M images, each of size (m/R) x (n/Q), so each sub-block image has (m x n)/N pixels. Hereinafter the N training sub-block image sets are collectively referred to as the training sub-block image sets.
(3) Compute the pixel gray-value variance sum of the training image set and the pixel gray-value variance sums of the training sub-block image sets, respectively.
For subset_all, compute the variance of the gray values of the pixels at the same position across all images, obtaining the variances of the m x n pixel positions, and sum them; the sum is denoted the pixel gray-value variance sum σ_all of the training image set. For the i-th training sub-block image set subset_i, compute the variance of the gray values of the pixels at the same position across all its images, obtaining (m x n)/N pixel gray-value variances, and sum them; the sum is denoted the pixel gray-value variance sum σ_i of the training sub-block image set.
(4) Compare the pixel gray-value variance sum σ_all of the training image set with the pixel gray-value variance sum σ_i of each training sub-block image set, and obtain the feature optimal projection matrix of each training sub-block image set:
(4.1) If σ_i < σ_all, the feature optimal projection matrix W_i of the images in the i-th training sub-block image set subset_i is obtained with WaveletPCA, as follows:
(4.1.1) apply the two-dimensional discrete wavelet transform to the images in subset_i;
(4.1.2) apply the PCA transform to the wavelet-transformed images to obtain the feature optimal projection matrix W_i;
(4.2) If σ_i >= σ_all, the feature optimal projection matrix W_i is obtained from the images in subset_i with 2DPCA.
(5) Extract the image features.
(5.1) For the training image set, the pixel gray-value matrix of the i-th sub-block image of the j-th image is projected onto the feature optimal projection matrix W_i of subset_i, and the projected matrix is taken as the feature I_ij of that sub-block image, j = 1, 2, ..., M, completing the feature extraction of the training image set.
(5.2) For the test image set, each test image is first blocked into N rectangular sub-block images according to step (2); the pixel gray-value matrix of the i-th sub-block image of each test image is then projected with the feature optimal projection matrix of subset_i obtained in step (4), and the projected matrix is taken as the feature V_i of the i-th sub-block image of that image, completing the feature extraction of the test image set.
(6) Verify the feature extraction effect through recognition.
(6.1) Compute the Euclidean distance between the feature of each sub-block image of every test image set image and the feature of the corresponding sub-block image of every training image set image, and normalize it;
(6.2) for every test image set image, compute the sum over its N sub-block images of the normalized Euclidean distances to the corresponding sub-block images of the training image set as the similarity measure S_j;
(6.3) decide on the similarity measures with the nearest-neighbor method to complete the recognition of the image;
(6.4) recognize every image of the test image set according to step (6.3), take the ratio of the number of correctly recognized test images to the total number of images in the test image set as the recognition result, and output the recognition result.
2. The self-adaptive feature extraction method based on image partitioning according to claim 1, characterized in that the random division of the input image set in step (1) is computed as follows:
First, Pic random numbers uniformly distributed between 0 and 1 are generated and tagged with sequence numbers; the random numbers are then sorted in ascending order, so that their original sequence numbers form Pic non-repeating integer random numbers. The images of each class of the input image set corresponding to the first Pictrain integer random numbers are taken as training image set images, and those corresponding to the remaining Pic - Pictrain integer random numbers are taken as test image set images.
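The permutation-by-sorting split of claim 2 can be sketched for one class as follows. This is illustrative Python; the function name and the use of `random.random` for the uniform draws are assumptions.

```python
import random

def split_class(images, pictrain, rng=random):
    """Randomly split one class's images into training and test subsets,
    following the permutation-by-sorting scheme of claim 2."""
    pic = len(images)
    # Pic uniform random numbers in (0, 1), tagged with their original index
    tagged = [(rng.random(), idx) for idx in range(pic)]
    # sorting by the random key makes the original indices a random permutation
    order = [idx for _, idx in sorted(tagged)]
    train = [images[i] for i in order[:pictrain]]
    test = [images[i] for i in order[pictrain:]]
    return train, test
```

Sorting uniform random keys and reading off their original indices is a standard way to draw a uniformly random permutation, which is what the claim describes.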
3. The self-adaptive feature extraction method based on image partitioning according to claim 1, characterized in that the pixel gray-value variance sums of the training image set subset_all and of the i-th training sub-block image set subset_i in step (3) are computed as follows:
σ_all = Σ_{c_all=1}^{m×n} Σ_{j=1}^{M} (p_{c_all}^j − p̄_{c_all})²
where p_{c_all}^j is the gray value of the c_all-th pixel of the j-th image, 1 ≤ c_all ≤ m×n, and p̄_{c_all} is the mean gray value of the c_all-th pixel, that is:
p̄_{c_all} = (1/M) Σ_{j=1}^{M} p_{c_all}^j
σ_i = Σ_{c_i=1}^{(m×n)/N} Σ_{j=1}^{M} (p_{c_i}^j − p̄_{c_i})²
where p_{c_i}^j is the gray value of the c_i-th pixel of the j-th image of subset_i, 1 ≤ c_i ≤ (m×n)/N, and p̄_{c_i} is the mean gray value of the c_i-th pixel, that is:
p̄_{c_i} = (1/M) Σ_{j=1}^{M} p_{c_i}^j
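The variance sums of claim 3 can be sketched as follows. A numpy sketch; note the claimed formula sums squared deviations over positions and images without dividing by M, and the code follows the formula as written.

```python
import numpy as np

def variance_sum(stack):
    """Pixel gray-value variance sum of a stack of M same-size images.

    stack: array of shape (M, h, w).
    Computes sigma = sum_c sum_j (p_c^j - mean_c)^2 as in claim 3,
    i.e. M times the biased per-position variance, summed over positions.
    """
    mean = stack.mean(axis=0)              # per-position mean over the M images
    return float(((stack - mean) ** 2).sum())
```

The same function applies to subset_all (full images) and to each subset_i (sub-block images); only the input shapes differ.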
4. The self-adaptive feature extraction method based on image partitioning according to claim 1, characterized in that the two-dimensional discrete wavelet transform applied to the images of the i-th training sub-block image set in step (4.1.1) is computed as follows:
Each image of the i-th training sub-block image set is a two-dimensional matrix of size (m/R) x (n/Q). After the two-dimensional discrete wavelet transform, the image is decomposed into 4 frequency bands, of which the low-frequency part contains the general data of the image; therefore only the low-frequency part is retained after the transform, computed as:
f(x, y) = Σ_{k,l} c_{k,l} φ_{k,l}(x, y)
where f(x, y) is the low-frequency image of the i-th training sub-block image set after the two-dimensional discrete wavelet transform, φ_{k,l}(x, y) is the scaling function of the two-dimensional discrete wavelet transform, k and l are the horizontal and vertical shift indices of the scaling function, and c_{k,l} are the low-frequency coefficients.
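Keeping only the low-frequency (approximation) band of a one-level 2-D wavelet transform, as in step (4.1.1), can be sketched as follows. An illustrative numpy sketch; the patent does not fix the wavelet, so the Haar wavelet is an assumption here.

```python
import numpy as np

def haar_lowpass(img):
    """One level of a 2-D Haar wavelet transform, keeping only the
    low-frequency (LL) band.

    img: array of shape (2h, 2w); returns the (h, w) LL band.
    For even-size inputs this equals the approximation band that
    pywt.dwt2(img, 'haar') would return as its first element.
    """
    a = np.asarray(img, dtype=float)
    # each Haar LL coefficient is the sum of a 2x2 neighbourhood divided by 2
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 2.0
```

The LL band halves each dimension, so a 16 x 16 sub-block image becomes an 8 x 8 low-frequency image before the PCA step.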
5. The self-adaptive feature extraction method based on image partitioning according to claim 1, characterized in that the PCA transform applied in step (4.1.2) to the images after the two-dimensional discrete wavelet transform, yielding the feature optimal projection matrix, is computed as follows:
5.1 Vectorize the pixel gray-value matrices of all images of the i-th training sub-block image set: stack each pixel gray-value matrix column by column into a vector ε_ij, and take ε_ij as the j-th column of the vectorized matrix P_i of the i-th training sub-block image set;
5.2 Compute the covariance matrix C_i of the vectorized matrix P_i of the i-th training sub-block image set, C_i = (1/M) P_i P_i^T, and perform the eigenvalue decomposition of C_i:
C_i η_i = λ_i η_i
where λ_i and η_i denote the eigenvalues and unit eigenvectors of C_i after the eigenvalue decomposition, and T is the matrix transpose symbol. The eigenvectors η_i are taken as the columns of the feature optimal projection matrix W_i of the i-th training sub-block image set, constituting the feature optimal projection matrix.
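The vectorized-PCA projection of claim 5 can be sketched as follows. A numpy sketch; mean-centering before forming the covariance and the choice of the top k eigenvectors are standard PCA details assumed here, not spelled out in the claim.

```python
import numpy as np

def pca_projection(blocks, k):
    """Feature optimal projection matrix for one sub-block set via PCA.

    blocks: array (M, h, w) - the i-th sub-block of each of M training images.
    Returns an (h*w, k) matrix whose columns are the top-k unit eigenvectors
    of the covariance of the column-vectorized blocks.
    """
    M = blocks.shape[0]
    # vectorize each block column by column into the columns of P_i
    P = np.stack([b.flatten(order="F") for b in blocks], axis=1)  # (h*w, M)
    P = P - P.mean(axis=1, keepdims=True)   # assumed mean-centering step
    C = (P @ P.T) / M                       # covariance matrix C_i
    vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]             # top-k eigenvectors as columns
```

A sub-block is then projected by vectorizing it the same way and multiplying by this matrix.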
6. The self-adaptive feature extraction method based on image partitioning according to claim 1, characterized in that the computation of the feature optimal projection matrix from the images of a training sub-block image set with 2DPCA in step (4.2) is as follows:
6.1 Compute the covariance matrix of the i-th training sub-block image set:
C_i = (1/M) Σ_{j=1}^{M} (A_ij − Ā_i)^T (A_ij − Ā_i)
where A_ij is the pixel gray-value matrix of the j-th image of the i-th training sub-block image set and Ā_i is the mean pixel gray-value matrix of all images of the i-th training sub-block image set:
Ā_i = (1/M) Σ_{j=1}^{M} A_ij
6.2 Perform the eigenvalue decomposition of the covariance matrix C_i:
C_i η_i = λ_i η_i
where λ_i and η_i denote the eigenvalues and unit eigenvectors of C_i after the eigenvalue decomposition. The eigenvectors η_i are taken as the columns of the feature optimal projection matrix W_i of the i-th training sub-block image set, constituting the feature optimal projection matrix.
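The 2DPCA covariance and projection matrix of claim 6 can be sketched as follows. An illustrative numpy sketch; selecting the top-k eigenvectors as the columns of W_i is an assumption about how the columns are chosen.

```python
import numpy as np

def twodpca_projection(blocks, k):
    """Feature optimal projection matrix via 2DPCA.

    blocks: array (M, h, w). The image covariance matrix
    C_i = (1/M) * sum_j (A_j - Abar)^T (A_j - Abar) has shape (w, w);
    the returned (w, k) matrix holds its top-k unit eigenvectors as columns.
    """
    M, h, w = blocks.shape
    Abar = blocks.mean(axis=0)              # mean pixel gray-value matrix
    C = np.zeros((w, w))
    for A in blocks:
        D = A - Abar
        C += D.T @ D
    C /= M
    vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]             # top-k eigenvectors as columns
```

Unlike the PCA of claim 5, 2DPCA works directly on the image matrices, so a sub-block is projected by plain matrix multiplication with W_i.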
7. The self-adaptive feature extraction method based on image partitioning according to claim 1, characterized in that in the feature extraction of step (5) the projection of a sub-block image onto the feature optimal projection matrix W_i is computed as follows:
From 10% to 100% of the number M of images of the training sub-block image set, in steps of 10%, is taken in turn as the number of columns of the feature optimal projection matrix W_i, and the sub-block images of the training image set and of the test image set are projected onto W_i to extract features. When the feature optimal projection matrix of the i-th training sub-block image set was obtained with WaveletPCA, the i-th sub-block image matrices of the training image set images and of the test image set images are first vectorized and then multiplied by W_i to extract the features; when it was obtained with 2DPCA, the i-th sub-block image matrices of the training image set images and of the test image set images are directly multiplied by W_i to extract the features.
8. The self-adaptive feature extraction method based on image partitioning according to claim 1, characterized in that the normalized Euclidean distance of step (6.1) between the feature of each sub-block image of a test image set image and the feature of the corresponding sub-block image of every training image set image is computed as follows:
d_ij = √( Σ (I_ij − V_i)² )
where the sum runs over the elements of the feature matrices, d_ij is the Euclidean distance between the feature of the i-th sub-block image of the test image set image and the feature of the i-th sub-block image of the j-th training image set image, I_ij is the feature of the i-th sub-block image of the j-th training image set image, and V_i is the feature of the i-th sub-block image of the test image set image;
s_ij denotes the normalized d_ij, that is:
s_ij = d_ij / Σ_{j=1}^{M} d_ij
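The normalized distances of claim 8 can be sketched as follows. A numpy sketch with flattened feature vectors; the function name and shapes are illustrative.

```python
import numpy as np

def normalized_block_distances(test_block_feat, train_block_feats):
    """Normalized Euclidean distances for one sub-block position.

    test_block_feat:   feature of sub-block i of the test image, shape (F,)
    train_block_feats: sub-block i features of the M training images, (M, F)
    Returns s with s_j = d_j / sum_j d_j, where d_j = ||I_j - V||.
    """
    d = np.linalg.norm(train_block_feats - test_block_feat, axis=1)
    return d / d.sum()
```

By construction the s_j for one sub-block position sum to 1, so each sub-block contributes on the same scale to the similarity measure of claim 9.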
9. The self-adaptive feature extraction method based on image partitioning according to claim 1, characterized in that the similarity measure S_j of step (6.2) is computed as follows:
S_j = Σ_{i=1}^{N} s_ij
10. The self-adaptive feature extraction method based on image partitioning according to claim 1, characterized in that the decision on the similarity measures by the nearest-neighbor method in step (6.3) is computed as follows:
The M similarity measures S_j, j = 1, 2, ..., M, between each test image set image and the M images of the training image set are sorted; the current test image is judged to belong to the same class as the j-th training image for which S_j is minimal.
CN201410010605.9A 2014-01-09 2014-01-09 Self-adaptation feature extracting method based on image partitioning Active CN103714340B (en)


Publications (2)

Publication Number Publication Date
CN103714340A true CN103714340A (en) 2014-04-09
CN103714340B CN103714340B (en) 2017-01-25


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046224A (en) * 2015-07-16 2015-11-11 东华大学 Block self-adaptive weighted histogram of orientation gradient feature based face recognition method
CN105391566A (en) * 2014-09-04 2016-03-09 中国移动通信集团黑龙江有限公司 Dynamic network equipment configuration comparison method and device
CN105551036A (en) * 2015-12-10 2016-05-04 中国科学院深圳先进技术研究院 Training method and device for deep learning network
CN108629350A (en) * 2017-03-15 2018-10-09 华为技术有限公司 The method and device of similarity relation between a kind of identification picture
CN108681721A (en) * 2018-05-22 2018-10-19 山东师范大学 Face identification method based on the linear correlation combiner of image segmentation two dimension bi-directional data
CN113505691A (en) * 2021-07-09 2021-10-15 中国矿业大学(北京) Coal rock identification method and identification reliability indication method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020067857A1 (en) * 2000-12-04 2002-06-06 Hartmann Alexander J. System and method for classification of images and videos
US20130028517A1 (en) * 2011-07-27 2013-01-31 Samsung Electronics Co., Ltd. Apparatus, method, and medium detecting object pose
CN103049897A (en) * 2013-01-24 2013-04-17 武汉大学 Adaptive training library-based block domain face super-resolution reconstruction method
CN103345758A (en) * 2013-07-25 2013-10-09 南京邮电大学 Joint photographic experts group (JPEG) image region copying and tampering blind detection method based on discrete cosine transformation (DCT) statistical features

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020067857A1 (en) * 2000-12-04 2002-06-06 Hartmann Alexander J. System and method for classification of images and videos
US20130028517A1 (en) * 2011-07-27 2013-01-31 Samsung Electronics Co., Ltd. Apparatus, method, and medium detecting object pose
CN103049897A (en) * 2013-01-24 2013-04-17 武汉大学 Adaptive training library-based block domain face super-resolution reconstruction method
CN103345758A (en) * 2013-07-25 2013-10-09 南京邮电大学 Joint photographic experts group (JPEG) image region copying and tampering blind detection method based on discrete cosine transformation (DCT) statistical features

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AVINASH KUMAR .ETC: ""Face Recognition using facial symmetry"", 《PROCEEDINGS OF THE SECOND INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE,ENGINEERING AND INFORMATION TECHNOLOGY》 *
DEEPAK KUMAR .ETC: ""Recognition of Kannada characters extracted from scene images"", 《PROCEEDING OF THE WORKSHOP ON DOCUMENT ANALYSIS AND RECOGNITION》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105391566A (en) * 2014-09-04 2016-03-09 中国移动通信集团黑龙江有限公司 Dynamic network equipment configuration comparison method and device
CN105391566B (en) * 2014-09-04 2018-12-07 中国移动通信集团黑龙江有限公司 A kind of method and device that dynamic network equipments configuration compares
CN105046224A (en) * 2015-07-16 2015-11-11 东华大学 Block self-adaptive weighted histogram of orientation gradient feature based face recognition method
CN105551036A (en) * 2015-12-10 2016-05-04 中国科学院深圳先进技术研究院 Training method and device for deep learning network
CN105551036B (en) * 2015-12-10 2019-10-08 中国科学院深圳先进技术研究院 A kind of training method and device of deep learning network
CN108629350A (en) * 2017-03-15 2018-10-09 华为技术有限公司 The method and device of similarity relation between a kind of identification picture
CN108629350B (en) * 2017-03-15 2021-08-20 华为技术有限公司 Method and device for identifying similarity relation between pictures
CN108681721A (en) * 2018-05-22 2018-10-19 山东师范大学 Face identification method based on the linear correlation combiner of image segmentation two dimension bi-directional data
CN113505691A (en) * 2021-07-09 2021-10-15 中国矿业大学(北京) Coal rock identification method and identification reliability indication method
CN113505691B (en) * 2021-07-09 2024-03-15 中国矿业大学(北京) Coal rock identification method and identification reliability indication method

Also Published As

Publication number Publication date
CN103714340B (en) 2017-01-25

Similar Documents

Publication Publication Date Title
CN106326886B (en) Finger vein image quality assessment method based on convolutional neural networks
CN104318219B (en) Face recognition method combining local and global features
CN103198303B (en) Gender identification method based on facial images
CN106228142A (en) Face verification method based on convolutional neural networks and Bayesian decision
CN105574534A (en) Salient object detection method based on sparse subspace clustering and low-rank representation
CN106446754A (en) Image identification method, metric learning method, image source identification method and devices
CN104268593A (en) Multi-sparse-representation face recognition method for the small sample size problem
CN103714340A (en) Self-adaptation feature extracting method based on image partitioning
Liu et al. Online RGB-D person re-identification based on metric model update
CN104298981A (en) Face micro-expression recognition method
CN105404886A (en) Feature model generating method and feature model generating device
CN110781766B (en) Grassmann manifold discriminant analysis image recognition method based on eigenspectrum regularization
CN108564040B (en) Fingerprint liveness detection method based on deep convolutional features
CN102346851B (en) Image segmentation method based on NJW (Ng-Jordan-Weiss) spectral clustering labeling
CN105117708A (en) Facial expression recognition method and apparatus
CN101968850A (en) Method for extracting facial features by simulating the biological vision mechanism
CN104700089A (en) Face recognition method based on Gabor wavelets and SB2DLPP
CN102214299A (en) Method for locating facial features based on an improved ASM (Active Shape Model) algorithm
CN103839042A (en) Face recognition method and face recognition system
CN109255339B (en) Classification method based on adaptive deep forest and human gait energy maps
CN105893941B (en) Facial expression recognition method based on regional images
Dong et al. Feature extraction through contourlet subband clustering for texture classification
CN107194314A (en) Face recognition method fusing fuzzy 2DPCA and fuzzy 2DLDA
CN104966075A (en) Face recognition method and system based on two-dimensional discriminant features
CN112733665A (en) Face recognition method and system based on a lightweight network structure design

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant