CN105447468A - Color image over-complete block feature extraction method - Google Patents


Info

Publication number
CN105447468A
CN105447468A (application CN201510865923.8A)
Authority
CN
China
Prior art keywords
matrix
projection
image
vector
eigenmatrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510865923.8A
Other languages
Chinese (zh)
Other versions
CN105447468B (en)
Inventor
黄可望 (Huang Kewang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wanzhi Qianhong Technology Co., Ltd
Original Assignee
Wuxi Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Institute of Technology
Priority to CN201510865923.8A
Publication of CN105447468A
Application granted
Publication of CN105447468B
Current legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/56: Extraction of image or video features relating to colour

Abstract

The invention relates to image recognition and face recognition, key fields of artificial intelligence and pattern recognition, and in particular to a color image over-complete block feature extraction method based on non-iterative bilateral two-dimensional principal component analysis (NIB2DPCA). The method comprises: performing over-complete block segmentation on a color image; performing feature extraction and reconstruction on each sub-image module with the NIB2DPCA method over the R, G and B channels; and performing multi-module fusion to finally obtain a classification feature matrix. Because the amount of extracted information is far greater than that of the original image, the recognition rate for color images is increased, and the method achieves both higher recognition accuracy and higher recognition speed. Applied to face recognition, its recognition speed is several orders of magnitude higher than that of an over-complete-block-based face recognition method for deeply hidden identity features.

Description

Over-complete block feature extraction method for color images
Technical field
The present over-complete block feature extraction method for color images relates to image recognition and face recognition, key fields of artificial intelligence and pattern recognition; in particular, it is an over-complete block feature extraction method for color images based on the non-iterative bilateral two-dimensional principal component analysis (NIB2DPCA) method, and is applicable to face recognition and image recognition.
Background art
Image recognition is a key field and research hotspot in artificial intelligence and pattern recognition, with broad application prospects and high theoretical value. In practice, color images provide rich color information for image recognition, so color information is increasingly exploited to improve algorithm performance.
Within image recognition, face recognition is itself a research hotspot. The most important step in face recognition is feature extraction, and principal component analysis (PCA) is one of the classic algorithms of pattern recognition. Applied to facial images, PCA has become the most classical and effective feature extraction method in this field. To avoid converting image matrices into image vectors before feature extraction, two-dimensional PCA (2DPCA) was proposed; compared with PCA, 2DPCA computes the scatter matrix more easily and more accurately, finds the corresponding eigenvectors faster, and achieves a higher overall recognition rate. However, 2DPCA extracts features only along the row or column direction of the two-dimensional image matrix, so the extracted features are limited. Bilateral 2DPCA (B2DPCA) extracts features from the row and column directions simultaneously, so the extracted feature information is richer, and B2DPCA compresses better and is more efficient than 2DPCA; but its left- and right-multiplying projection matrices are both obtained by iterative computation, so computing these two matrices is relatively time-consuming. In non-iterative B2DPCA (NIB2DPCA), the left- and right-multiplying projection matrices are computed without iteration, which greatly shortens the feature extraction time. All of the above feature extraction methods are based on two-dimensional image matrices, and their application is mostly confined to grayscale images. To realize color image recognition, the color image NIB2DPCA method was proposed after a series of developments.
Meanwhile, with the development of modular algorithms, the modular PCA algorithm, the block independent component analysis method, the modular 2DPCA algorithm and the block bidirectional 2DPCA algorithm were successively proposed for face recognition. In 2013 the modular factorization principal component analysis method (M-FPCA) was proposed: the original image samples are modularized, the FPCA algorithm is applied to extract features from each sub-image matrix obtained after modularization, and the sub-image feature matrices are fused into the feature matrix of the original image. Based on the fact that a color image can be represented by its three R, G, B components, a color M-FPCA method combining the M-FPCA algorithm was also proposed.
In addition, a face recognition method for deeply hidden identity features was presented at an international conference in 2014; it extracts features with deep convolutional networks (ConvNets) and introduced a completely new concept for the first time.
Although the color image NIB2DPCA method and the color M-FPCA method markedly increase the recognition rate in image recognition, and in face recognition in particular, more advanced recognition methods are needed as recognition requirements rise further. The face recognition method for deeply hidden identity features achieves a higher recognition rate, but it makes high demands on computation, its computational load is large, and its sample training time is long; since fast recognition is required in most cases today, particularly on the various current mobile devices, that method is too time-consuming to be very practical.
Summary of the invention
The object of the present invention is to provide an over-complete block feature extraction method for color images that remedies the shortcomings of the above methods: it significantly improves the image recognition rate, reduces the time consumed, increases the recognition speed, and is more advantageous in practical applications.
The over-complete block feature extraction method for color images adopts the following technical scheme:
The over-complete block feature extraction method for color images comprises the following steps:
S1, partitioning the color face image into blocks;
an over-complete blocking mode is adopted: the image is divided into multiple modules of unequal size, adjacent modules partially overlap, and the image formed by combining all blocks is larger than the original image;
in the over-complete blocking mode, the sum of all blocks is larger than the original image;
S2, decomposing the channel information of each sub-image module obtained in step S1 according to the RGB color space, obtaining the three R, G, B pseudo-grayscale image matrices of the sub-image module;
S3, computing, for each of the three R, G, B pseudo-grayscale image matrices obtained in step S2, the right-multiplying projection matrix and the left-multiplying projection matrix with the non-iterative bilateral two-dimensional principal component analysis (NIB2DPCA) method;
S4, bilaterally projecting each of the R, G, B pseudo-grayscale image matrices obtained in step S2 with the right- and left-multiplying projection matrices obtained in step S3, obtaining the feature matrices of the three channels;
S5, fusing the feature matrices of the three channels obtained in step S4 into a new two-dimensional matrix;
S6, processing the sub-module images obtained in step S1 one by one according to steps S2-S5 and fusing the results, thereby obtaining the two-dimensional matrix of the complete image;
S7, treating the two-dimensional matrix of the complete image obtained in step S6 as a new original two-dimensional image matrix, computing its right- and left-multiplying projection matrices again, and finally obtaining the feature matrix; the feature matrix of each training sample is obtained in the same way;
S8, obtaining the feature matrix of each test sample by the same method as steps S2-S7, and comparing the feature matrix of the test sample with the feature matrices of the training samples for classification; samples whose feature matrices are close are assigned to the same class, which completes recognition.
The concrete procedure of the over-complete block feature extraction method for color images is as follows:
In step S1, suppose the color face image to be partitioned is $A_i$ (1 ≤ i ≤ N), where N denotes the number of training samples, m the number of rows of the color face matrix and n its number of columns;
let $X = A_i$;
X is divided into c sub-modules, where $X_{ij}$ denotes the color sub-image module in the i-th row and j-th column of the block layout; each $X_{ij}$ is converted into a p × q (1 ≤ p ≤ m, 1 ≤ q ≤ n) color sub-image matrix, p being its number of rows and q its number of columns.
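For illustration, a minimal Python/NumPy sketch of the over-complete blocking of step S1 follows. It is only a sketch: the block coordinates and sizes below are hypothetical examples (the patent fixes the concrete layout only in Fig. 4); what matters is that the blocks overlap and differ in size, so their combined area exceeds the original image.

```python
import numpy as np

def over_complete_blocks(image, regions):
    """Cut an H x W x 3 color image into overlapping sub-modules.

    `regions` is a list of (top, left, height, width) tuples; the regions
    may overlap and differ in size, so the pixels of all blocks together
    exceed the pixels of the original image (over-complete blocking).
    """
    return [image[t:t + h, l:l + w, :] for (t, l, h, w) in regions]

# Hypothetical c = 3 layout on an 81 x 81 face image: the whole image,
# an eye region, and a left-eye-plus-nose region (cf. Fig. 4).
regions = [(0, 0, 81, 81), (15, 5, 25, 71), (15, 5, 45, 40)]
# blocks = over_complete_blocks(face_image, regions)
```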
In step S2, the channel information decomposition decomposes each sub-image module $X_{ij}$ obtained in step S1 according to the RGB color space, obtaining the three R, G, B pseudo-grayscale image matrices;
the pseudo-grayscale image matrices are expressed as follows:
$X_{ij}^{R} = \mathrm{R}(X_{ij})$   (S2-1)
$X_{ij}^{G} = \mathrm{G}(X_{ij})$   (S2-2)
$X_{ij}^{B} = \mathrm{B}(X_{ij})$   (S2-3)
where $\mathrm{R}(\cdot)$, $\mathrm{G}(\cdot)$ and $\mathrm{B}(\cdot)$ take the red, green and blue component of a color matrix respectively; the $X_{ij}$ obtained in step S1 is thus split into 3 channels, namely $X_{ij}^{R}$, $X_{ij}^{G}$ and $X_{ij}^{B}$, the variables i, j, m, n, p, q being consistent with step S1.
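A matching sketch of the channel decomposition of step S2 (formulas S2-1 to S2-3), assuming each sub-module is stored as an H × W × 3 NumPy array in RGB channel order:

```python
def rgb_channels(block):
    """Split a color sub-module into its R, G, B pseudo-grayscale
    matrices (formulas S2-1 to S2-3); each channel stays a 2-D matrix."""
    X_R = block[:, :, 0].astype(float)   # red component
    X_G = block[:, :, 1].astype(float)   # green component
    X_B = block[:, :, 2].astype(float)   # blue component
    return X_R, X_G, X_B
```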
In step S3, the NIB2DPCA method is specified as follows:
Let the number of training samples be M, let matrix U denote the left-multiplying projection matrix and matrix V the right-multiplying projection matrix, and let $A_i$ (1 ≤ i ≤ M) be an image matrix; bilaterally projecting $A_i$ yields the feature matrix $Y_i$ (1 ≤ i ≤ M), namely:
$Y_i = U^{\mathrm{T}} A_i V$   (1)
where m denotes the number of rows of U, n the number of rows of V, l the number of columns of U and r the number of columns of V.
To obtain the left- and right-multiplying projection matrices, the trace of the covariance of the projected feature matrix, i.e. the total scatter of the projected feature matrices, is used as the criterion:
$J(U,V) = \mathrm{tr}\{\mathbb{E}[(U^{\mathrm{T}}AV - \mathbb{E}(U^{\mathrm{T}}AV))(U^{\mathrm{T}}AV - \mathbb{E}(U^{\mathrm{T}}AV))^{\mathrm{T}}]\}$   (2)
Regarding the left-multiplying projection matrix U as the identity matrix, formula (2) is converted into:
$J(V) = \mathrm{tr}\{V^{\mathrm{T}}[\frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})^{\mathrm{T}}(A_i - \bar{A})]V\}$   (3)
where $\bar{A} = \frac{1}{M}\sum_{i=1}^{M} A_i$ is the mean training image matrix. The converted formula (3) is to be maximized; the vectors maximizing formula (3) are the right-multiplying projection vectors, and the right-multiplying projection vector matrix is
$G_V = \frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})^{\mathrm{T}}(A_i - \bar{A})$   (4)
Let the eigenvectors corresponding to the r largest eigenvalues of the above right-multiplying projection vector matrix be $V_1, V_2, \ldots, V_r$, each eigenvector being one column, where r lies between 1 and the total number of eigenvectors of the matrix; these eigenvectors carry the row features of the original image matrix, so the r eigenvectors constitute a feature matrix, and $V = [V_1, V_2, \ldots, V_r]$ is the right-multiplying projection matrix;
Regarding V as the identity matrix, formula (2) is converted into:
$J(U) = \mathrm{tr}\{U^{\mathrm{T}}[\frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})(A_i - \bar{A})^{\mathrm{T}}]U\}$   (5)
The vectors maximizing formula (5) are called the left-multiplying projection vectors; the left-multiplying projection vector matrix is then
$G_U = \frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})(A_i - \bar{A})^{\mathrm{T}}$   (6)
Let the eigenvectors corresponding to the l largest eigenvalues of the above left-multiplying projection vector matrix be $U_1, U_2, \ldots, U_l$, each eigenvector being one column, where l lies between 1 and the total number of eigenvectors of the matrix; these eigenvectors carry the column features of the original image matrix, so the l eigenvectors constitute a feature matrix, and $U = [U_1, U_2, \ldots, U_l]$ is the left-multiplying projection matrix;
Substituting the left- and right-multiplying projection matrices obtained above into formula (1), $Y_i = U^{\mathrm{T}} A_i V$, yields the feature matrix of NIB2DPCA.
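For concreteness, a minimal NumPy sketch of the NIB2DPCA computation as reconstructed in formulas (1) to (6) follows; it is an illustrative sketch under the definitions above, not the patentee's implementation:

```python
import numpy as np

def top_eigvecs(G, k):
    """Eigenvectors of the symmetric matrix G for its k largest eigenvalues."""
    w, Q = np.linalg.eigh(G)                  # eigenvalues in ascending order
    return Q[:, np.argsort(w)[::-1][:k]]

def nib2dpca(samples, l, r):
    """Non-iterative B2DPCA: from M training matrices of size m x n, return
    the left-multiplying (m x l) and right-multiplying (n x r) projection
    matrices, i.e. the top eigenvectors of formulas (6) and (4)."""
    A = np.stack([np.asarray(s, dtype=float) for s in samples])  # M x m x n
    D = A - A.mean(axis=0)                    # centered training images
    M = len(samples)
    G_V = np.einsum('ima,imb->ab', D, D) / M  # (4): mean of D_i^T D_i, n x n
    G_U = np.einsum('ian,ibn->ab', D, D) / M  # (6): mean of D_i D_i^T, m x m
    return top_eigvecs(G_U, l), top_eigvecs(G_V, r)

def project(A_i, U, V):
    """Formula (1): bilateral projection Y_i = U^T A_i V, an l x r matrix."""
    return U.T @ A_i @ V
```

Because both scatter matrices are formed in closed form and each is diagonalized once, no iteration between U and V is needed; this is what shortens feature extraction relative to B2DPCA.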
Following this method, the right-multiplying projection vector matrix of the R-channel pseudo-grayscale image matrices is obtained by the NIB2DPCA method as:
$G_V^{R} = \frac{1}{M}\sum_{k=1}^{M}(X_{ij,k}^{R} - \bar{X}_{ij}^{R})^{\mathrm{T}}(X_{ij,k}^{R} - \bar{X}_{ij}^{R})$   (S3-1)
where $X_{ij,k}^{R}$ denotes the R-channel pseudo-grayscale matrix of sub-module $X_{ij}$ of the k-th training sample and $\bar{X}_{ij}^{R}$ their mean. Let the eigenvectors corresponding to the r largest eigenvalues of this matrix be $V_1^{R}, V_2^{R}, \ldots, V_r^{R}$, each eigenvector being one column; these eigenvectors carry the row features of the original image matrix, so the r eigenvectors constitute a feature matrix, and $V_R = [V_1^{R}, V_2^{R}, \ldots, V_r^{R}]$ is the right-multiplying projection matrix of the R channel;
The left-multiplying projection vector matrix of the NIB2DPCA method is obtained likewise:
$G_U^{R} = \frac{1}{M}\sum_{k=1}^{M}(X_{ij,k}^{R} - \bar{X}_{ij}^{R})(X_{ij,k}^{R} - \bar{X}_{ij}^{R})^{\mathrm{T}}$   (S3-2)
Let the eigenvectors corresponding to the l largest eigenvalues of this matrix be $U_1^{R}, U_2^{R}, \ldots, U_l^{R}$, each eigenvector being one column; these eigenvectors carry the column features of the original image matrix, so the l eigenvectors constitute a feature matrix, and $U_R = [U_1^{R}, U_2^{R}, \ldots, U_l^{R}]$ is the left-multiplying projection matrix of the R channel;
In the same way, the right- and left-multiplying projection matrices obtained from the G-channel pseudo-grayscale image matrices are $V_G$ and $U_G$ respectively;
In the same way, the right- and left-multiplying projection matrices obtained from the B-channel pseudo-grayscale image matrices are $V_B$ and $U_B$ respectively;
Throughout step S3, T denotes matrix transposition.
In step S4, the three R, G, B pseudo-grayscale image matrices $X_{ij}^{R}$, $X_{ij}^{G}$, $X_{ij}^{B}$ obtained in step S2 are bilaterally projected, and the feature matrices of the three channels are obtained as:
$Y_{ij}^{R} = U_R^{\mathrm{T}} X_{ij}^{R} V_R$   (S4-1)
$Y_{ij}^{G} = U_G^{\mathrm{T}} X_{ij}^{G} V_G$   (S4-2)
$Y_{ij}^{B} = U_B^{\mathrm{T}} X_{ij}^{B} V_B$   (S4-3)
In step S4, T denotes matrix transposition.
In step S5, the feature matrices of the three channels obtained in step S4 are fused, and the resulting new two-dimensional matrix is expressed as:
$Y_{ij} = [\mathrm{vec}(Y_{ij}^{R})\;\; \mathrm{vec}(Y_{ij}^{G})\;\; \mathrm{vec}(Y_{ij}^{B})]^{\mathrm{T}}$   (S5-1)
where vec(*) denotes the vectorization of the matrix *;
In step S5, T denotes matrix transposition.
In step S6, the two-dimensional matrix of the complete image is obtained as
$B = [\,Y_{(1)}\;\; Y_{(2)}\;\; \cdots\;\; Y_{(c)}\,]$   (S6-1)
where $Y_{(k)}$ denotes the fused matrix of the k-th of the c sub-modules obtained in step S5.
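Continuing the NumPy sketch above, steps S5 and S6 can be illustrated as follows; the row-stacking layout assumes the reconstruction of formula (S5-1) given above, and the exact arrangement in the original formula images may differ:

```python
import numpy as np

def fuse_channels(Y_R, Y_G, Y_B):
    """Step S5, formula (S5-1): stack the vectorized channel feature
    matrices as the rows of a new 3 x (l * r) two-dimensional matrix."""
    return np.stack([Y_R.ravel(), Y_G.ravel(), Y_B.ravel()])

def fuse_modules(fused_blocks):
    """Step S6, formula (S6-1): concatenate the fused matrices of all c
    sub-modules side by side into the complete-image matrix B."""
    return np.hstack(fused_blocks)
```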
In step S7, the feature matrix is finally obtained as
$Z = U_F^{\mathrm{T}} B V_F$   (S7-1)
where the left- and right-multiplying projection matrices $U_F$ and $V_F$ are computed from the complete-image matrices of the training samples by the NIB2DPCA method of step S3; for each training sample $A_i$ (1 ≤ i ≤ N), the feature matrix $Y_i$ is
$Y_i = U_F^{\mathrm{T}} B_i V_F$   (S7-2)
where $B_i$ is the complete-image matrix of $A_i$ obtained in step S6;
In step S7, T denotes matrix transposition;
In step S8, suppose there are N1 test samples; the feature matrix $Y_k$ (1 ≤ k ≤ N1) obtained for each test sample is compared with the training feature matrices $Y_i$ for classification.
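Finally, a sketch of the comparison classification of step S8, assuming a nearest-neighbor rule under the Frobenius distance between feature matrices; the patent itself states only that samples whose feature matrices are close fall in the same class:

```python
import numpy as np

def classify(Y_test, train_features, train_labels):
    """Assign the test feature matrix to the class of the training sample
    whose feature matrix is nearest in Frobenius norm."""
    dists = [np.linalg.norm(Y_test - Y_i) for Y_i in train_features]
    return train_labels[int(np.argmin(dists))]
```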
The present invention provides an over-complete block feature extraction method for color images based on NIB2DPCA. The method partitions the image with the novel over-complete blocking concept, extracts features from each color sub-module with the recent color image NIB2DPCA method, and performs multi-module fusion to obtain the feature matrix. Because the information contained in the over-complete blocks far exceeds that of the original color image, the method obtains richer color image feature information and considerably improves the recognition rate of color images. Applied to face recognition, it enriches the representation of facial images and improves recognition accuracy. The method improves the recognition rate without increasing the time consumed: its sample training time is of the same order of magnitude as those of the color image NIB2DPCA and color M-FPCA methods, but several orders of magnitude lower than that of the face recognition method for deeply hidden identity features, so it is more practical.
Brief description of the drawings
The invention is further described below with reference to the accompanying drawings:
Fig. 1 is a schematic flow chart of the over-complete block feature extraction based on NIB2DPCA according to the present invention;
Fig. 2 shows 3 sample images each of 2 persons from the CVL color face database (81 × 81);
Fig. 3 shows 2 sample images each of 2 persons from the FEI color face database (81 × 81);
Fig. 4 is an over-complete block diagram of an original image.
Detailed description of the embodiments
The technical scheme of the present invention is described in detail below with reference to Figs. 1 to 4.
The over-complete block feature extraction method for color images based on NIB2DPCA according to the present invention comprises the following steps:
S1, partitioning the color face image into blocks;
an over-complete blocking mode is adopted: the image is divided into multiple modules of unequal size, adjacent modules partially overlap, and the image formed by combining all blocks is larger than the original image;
in the over-complete blocking mode, the sum of all blocks is larger than the original image;
suppose the color face image to be partitioned is $A_i$ (1 ≤ i ≤ N), where N denotes the number of training samples, m the number of rows of the color face matrix and n its number of columns;
let $X = A_i$;
X is divided into c sub-modules, where $X_{ij}$ denotes the color sub-image module in the i-th row and j-th column of the block layout; each $X_{ij}$ is converted into a p × q (1 ≤ p ≤ m, 1 ≤ q ≤ n) color sub-image matrix, p being its number of rows and q its number of columns.
S2, decomposing the channel information of each sub-image module obtained in step S1 according to the RGB color space;
the channel information decomposition decomposes each sub-image module $X_{ij}$ obtained in step S1 according to the RGB color space, obtaining the three R, G, B pseudo-grayscale image matrices;
the pseudo-grayscale image matrices are expressed as follows:
$X_{ij}^{R} = \mathrm{R}(X_{ij})$   (S2-1)
$X_{ij}^{G} = \mathrm{G}(X_{ij})$   (S2-2)
$X_{ij}^{B} = \mathrm{B}(X_{ij})$   (S2-3)
the $X_{ij}$ obtained in step S1 is thus split into 3 channels, namely $X_{ij}^{R}$, $X_{ij}^{G}$ and $X_{ij}^{B}$, the variables i, j, m, n, p, q being consistent with step S1.
S3, computing, for each of the three R, G, B pseudo-grayscale image matrices obtained in step S2, the right-multiplying projection matrix and the left-multiplying projection matrix with the non-iterative bilateral two-dimensional principal component analysis (NIB2DPCA) method;
the NIB2DPCA method is specified as follows:
let the number of training samples be M, let matrix U denote the left-multiplying projection matrix and matrix V the right-multiplying projection matrix, and let $A_i$ (1 ≤ i ≤ M) be an image matrix; bilaterally projecting $A_i$ yields the feature matrix $Y_i$ (1 ≤ i ≤ M), namely:
$Y_i = U^{\mathrm{T}} A_i V$   (1)
where m denotes the number of rows of U, n the number of rows of V, l the number of columns of U and r the number of columns of V.
To obtain the left- and right-multiplying projection matrices, the trace of the covariance of the projected feature matrix, i.e. the total scatter of the projected feature matrices, is used as the criterion:
$J(U,V) = \mathrm{tr}\{\mathbb{E}[(U^{\mathrm{T}}AV - \mathbb{E}(U^{\mathrm{T}}AV))(U^{\mathrm{T}}AV - \mathbb{E}(U^{\mathrm{T}}AV))^{\mathrm{T}}]\}$   (2)
Regarding the left-multiplying projection matrix U as the identity matrix, formula (2) is converted into:
$J(V) = \mathrm{tr}\{V^{\mathrm{T}}[\frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})^{\mathrm{T}}(A_i - \bar{A})]V\}$   (3)
where $\bar{A} = \frac{1}{M}\sum_{i=1}^{M} A_i$ is the mean training image matrix. The converted formula (3) is to be maximized; the vectors maximizing formula (3) are the right-multiplying projection vectors, and the right-multiplying projection vector matrix is
$G_V = \frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})^{\mathrm{T}}(A_i - \bar{A})$   (4)
Let the eigenvectors corresponding to the r largest eigenvalues of the above right-multiplying projection vector matrix be $V_1, V_2, \ldots, V_r$, each eigenvector being one column, where r lies between 1 and the total number of eigenvectors of the matrix; these eigenvectors carry the row features of the original image matrix, so the r eigenvectors constitute a feature matrix, and $V = [V_1, V_2, \ldots, V_r]$ is the right-multiplying projection matrix;
Regarding V as the identity matrix, formula (2) is converted into:
$J(U) = \mathrm{tr}\{U^{\mathrm{T}}[\frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})(A_i - \bar{A})^{\mathrm{T}}]U\}$   (5)
The vectors maximizing formula (5) are called the left-multiplying projection vectors; the left-multiplying projection vector matrix is then
$G_U = \frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})(A_i - \bar{A})^{\mathrm{T}}$   (6)
Let the eigenvectors corresponding to the l largest eigenvalues of the above left-multiplying projection vector matrix be $U_1, U_2, \ldots, U_l$, each eigenvector being one column, where l lies between 1 and the total number of eigenvectors of the matrix; these eigenvectors carry the column features of the original image matrix, so the l eigenvectors constitute a feature matrix, and $U = [U_1, U_2, \ldots, U_l]$ is the left-multiplying projection matrix;
Substituting the left- and right-multiplying projection matrices obtained above into formula (1) yields the feature matrix of NIB2DPCA.
Following this method, the right-multiplying projection vector matrix of the R-channel pseudo-grayscale image matrices is obtained by the NIB2DPCA method as:
$G_V^{R} = \frac{1}{M}\sum_{k=1}^{M}(X_{ij,k}^{R} - \bar{X}_{ij}^{R})^{\mathrm{T}}(X_{ij,k}^{R} - \bar{X}_{ij}^{R})$   (S3-1)
where $X_{ij,k}^{R}$ denotes the R-channel pseudo-grayscale matrix of sub-module $X_{ij}$ of the k-th training sample and $\bar{X}_{ij}^{R}$ their mean. Let the eigenvectors corresponding to the r largest eigenvalues of this matrix be $V_1^{R}, V_2^{R}, \ldots, V_r^{R}$; these eigenvectors carry the row features of the original image matrix, so $V_R = [V_1^{R}, V_2^{R}, \ldots, V_r^{R}]$ is the right-multiplying projection matrix of the R channel;
The left-multiplying projection vector matrix is obtained likewise:
$G_U^{R} = \frac{1}{M}\sum_{k=1}^{M}(X_{ij,k}^{R} - \bar{X}_{ij}^{R})(X_{ij,k}^{R} - \bar{X}_{ij}^{R})^{\mathrm{T}}$   (S3-2)
Let the eigenvectors corresponding to the l largest eigenvalues of this matrix be $U_1^{R}, U_2^{R}, \ldots, U_l^{R}$; these eigenvectors carry the column features of the original image matrix, so $U_R = [U_1^{R}, U_2^{R}, \ldots, U_l^{R}]$ is the left-multiplying projection matrix of the R channel;
In the same way, the right- and left-multiplying projection matrices obtained from the G-channel pseudo-grayscale image matrices are $V_G$ and $U_G$ respectively;
In the same way, the right- and left-multiplying projection matrices obtained from the B-channel pseudo-grayscale image matrices are $V_B$ and $U_B$ respectively.
S4, bilaterally projecting each of the R, G, B pseudo-grayscale image matrices obtained in step S2, obtaining the feature matrices of the three channels;
the three R, G, B pseudo-grayscale image matrices $X_{ij}^{R}$, $X_{ij}^{G}$, $X_{ij}^{B}$ obtained in step S2 are bilaterally projected, and the feature matrices of the three channels are obtained as:
$Y_{ij}^{R} = U_R^{\mathrm{T}} X_{ij}^{R} V_R$   (S4-1)
$Y_{ij}^{G} = U_G^{\mathrm{T}} X_{ij}^{G} V_G$   (S4-2)
$Y_{ij}^{B} = U_B^{\mathrm{T}} X_{ij}^{B} V_B$   (S4-3)
In step S4, T denotes matrix transposition.
S5, fusing the feature matrices of the three channels obtained in step S4 into a new two-dimensional matrix:
$Y_{ij} = [\mathrm{vec}(Y_{ij}^{R})\;\; \mathrm{vec}(Y_{ij}^{G})\;\; \mathrm{vec}(Y_{ij}^{B})]^{\mathrm{T}}$   (S5-1)
where vec(*) denotes the vectorization of the matrix *.
S6, processing the sub-module images obtained in step S1 one by one according to steps S2-S5 and fusing the results, thereby obtaining the two-dimensional matrix of the complete image:
$B = [\,Y_{(1)}\;\; Y_{(2)}\;\; \cdots\;\; Y_{(c)}\,]$   (S6-1)
S7, treating the two-dimensional matrix of the complete image obtained in step S6 as a new original two-dimensional image matrix, computing its right- and left-multiplying projection matrices again, and finally obtaining the feature matrix; the feature matrix of each training sample is obtained in the same way;
the feature matrix is
$Z = U_F^{\mathrm{T}} B V_F$   (S7-1)
and for each training sample $A_i$ (1 ≤ i ≤ N) the feature matrix $Y_i$ is
$Y_i = U_F^{\mathrm{T}} B_i V_F$   (S7-2)
S8, obtaining the feature matrix of each test sample by the same method as steps S2-S7, and comparing the feature matrix of the test sample with the feature matrices of the training samples for classification; samples whose feature matrices are close are assigned to the same class, which completes recognition;
suppose there are N1 test samples; the feature matrix $Y_k$ (1 ≤ k ≤ N1) obtained for each test sample is compared with the training feature matrices $Y_i$ for classification.
The recognition rates of the color image NIB2DPCA method, the color image M-FPCA method and the method provided by the present invention are compared below on the CVL and FEI color face databases.
The CVL color face database contains 114 persons with 7 images each. The images differ in facial expression, facial detail and pose, and the image resolution is 640 × 480. In the experiment, the samples of 110 persons are selected and 3 frontal views per person are chosen as experimental images, which are manually cropped and normalized to 81 × 81, as shown in Fig. 2. For each person, 1 of the 3 images is randomly drawn as the test sample and the other two serve as training samples.
The FEI color face database contains 200 persons with 14 images each. The facial expression, face rotation angle and age all vary to some degree, the male-to-female ratio is about 1:1, and the image resolution is 640 × 480. The experiment uses a subset of the FEI database consisting of 200 persons with 2 frontal views each, as shown in Fig. 3. For each person, one of the two images is randomly drawn as the training sample and the other serves as the test sample, giving 200 training samples and 200 test samples; the images are manually cropped and then normalized to 81 × 81.
In the color image recognition experiments, the method provided by the invention always uses one block based on the entire image (image block one) and additionally employs 2 or 3 further image blocks respectively. If c denotes the total number of image blocks, then c = 3, 4; the blocking is shown in Fig. 4.
The experimental results are compared with the color image NIB2DPCA method and the color image M-FPCA method respectively, as follows.
Table 1. Recognition rates (%) of the color image NIB2DPCA method on the CVL and FEI color face databases
The experimental results on the FEI face database are obtained with the row and column feature numbers of training sample feature extraction set to l = 5 and r = 40 respectively; on the CVL face database, to l = 4 and r = 40. The remaining parameters in Table 1 denote the row and column feature numbers of test sample feature extraction respectively.
Table 1 shows that the highest recognition rate of the color image NIB2DPCA method is 91.5% on the FEI color face database and 88.18% on the CVL color face database.
Table 2. Recognition rates (%) of the color image M-FPCA method on the CVL and FEI color face databases
The parameters in Table 2 denote the row and column feature numbers of test sample feature extraction for FPCA and its extended algorithms respectively. The grayscale images of the CVL and FEI databases used in the experiment are converted from the color images by the formula Gray = R × 0.299 + G × 0.587 + B × 0.114.
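For reference, a one-line sketch of this conversion, assuming an H × W × 3 RGB NumPy array:

```python
# Gray = R * 0.299 + G * 0.587 + B * 0.114, applied pixel-wise
gray = img[:, :, 0] * 0.299 + img[:, :, 1] * 0.587 + img[:, :, 2] * 0.114
```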
Table 2 shows that the highest recognition rate of the color image M-FPCA method is 90.00% on the FEI color face database and 89.09% on the CVL color face database.
Table 3. Recognition rates (%) of the NIB2DPCA-based color image over-complete block feature extraction method on the FEI and CVL color face databases for c = 3 and c = 4
In Table 3, c denotes the number of image blocks used, and bold type marks the highest recognition rate.
When c = 3, image block one is the entire image, image block two is a block containing the eyes, and image block three is a block containing the left eye and the nose. On the FEI face database, the row and column feature numbers of sub-module feature extraction of the training samples are l = 5, r = 40 for image block one, l = 7, r = 40 for image block two, and l = 5, r = 30 for image block three; on the CVL face database, they are l = 4, r = 25 for image block one, l = 6, r = 27 for image block two, and l = 5, r = 28 for image block three.
When c = 4, image block one is the entire image, image block two contains the eyes, image block three contains the left eye and the nose, and image block four contains the right eye and the nose. On the FEI face database, the row and column feature numbers of sub-module feature extraction of the training samples are l = 5, r = 40 for image block one, l = 8, r = 30 for image block two, l = 5, r = 35 for image block three, and l = 5, r = 35 for image block four; on the CVL face database, they are l = 4, r = 25 for image block one, l = 7, r = 27 for image block two, l = 5, r = 28 for image block three, and l = 5, r = 28 for image block four. The remaining parameters in Table 3 denote the row and column feature numbers of test sample feature extraction respectively.
As can be seen from Table 3, the highest recognition rate of the NIB2DPCA-based color image over-complete block feature extraction method on the FEI color face database is 95.50%, an improvement of 4 and 5.5 percentage points over the color image NIB2DPCA and M-FPCA methods respectively; on the CVL color face database the highest recognition rate is 92.73%, an improvement of 4.55 and 3.64 percentage points over the color image NIB2DPCA and M-FPCA methods respectively. Meanwhile, the feature extraction time complexity of the NIB2DPCA-based over-complete block feature extraction method is of the same order of magnitude as those of the color image NIB2DPCA and color M-FPCA methods.
Comparing the c = 3 and c = 4 results in Table 3 shows that color image recognition efficiency is not proportional to the number of image blocks; the concrete block sizes and block counts that achieve the best effect remain to be studied.
The present invention over-completely partitions the color image, extracts features from each color image block, and performs fusion and reconstruction. The characteristic of over-complete blocking is that blocks overlap and the image formed by combining the blocks is much larger than the original image; richer feature information can therefore be obtained, which improves the recognition rate of color images.
Applied to face recognition, contrast experiments on the two standard color face databases FEI and CVL show that the face recognition accuracy of the proposed method is about 4 percentage points higher than the color image NIB2DPCA method and about 5 percentage points higher than the color image M-FPCA method. Although the face recognition accuracy of the present invention is slightly lower than that of the over-complete-block-based face recognition method for deeply hidden identity features, that method makes high demands on computation, its computational load is large, and its required sample training time exceeds that of the present invention by several orders of magnitude. The present invention is therefore more practical, particularly for the various current mobile devices, and offers good prospects for further research while maintaining a high recognition rate.
The contrast experiments show that, for block counts of the same order of magnitude, the accuracy of color image recognition is not proportional to the number of image blocks; bringing the accuracy to its peak requires further study of the division sizes and the number of blocks. Furthermore, the curve relating block size to block count across different orders of magnitude can be studied to obtain the peak block-based recognition accuracy of color images at each order of magnitude.
The above is only a preferred embodiment of the present invention in the field of face recognition, but the scope of protection of the present invention is not limited thereto; the present invention can be used for the recognition of any color image. Any equivalent replacement or change made, within the technical scope disclosed by the present invention, by a person skilled in the art according to the technical scheme and inventive concept of the present invention shall be encompassed within the scope of protection of the present invention.

Claims (2)

1. An over-complete block feature extraction method for color images, characterized by comprising the following steps:
S1, partitioning the color face image into blocks;
an over-complete blocking mode is adopted: the image is divided into multiple modules of unequal size, adjacent modules partially overlap, and the image formed by combining all blocks is larger than the original image;
in the over-complete blocking mode, the sum of all blocks is larger than the original image;
S2, decomposing the channel information of each sub-image module obtained in step S1 according to the RGB color space, obtaining the three R, G, B pseudo-grayscale image matrices of the sub-image module;
S3, computing, for each of the three R, G, B pseudo-grayscale image matrices obtained in step S2, the right-multiplying projection matrix and the left-multiplying projection matrix with the non-iterative bilateral two-dimensional principal component analysis (NIB2DPCA) method;
S4, bilaterally projecting each of the R, G, B pseudo-grayscale image matrices obtained in step S2 with the right- and left-multiplying projection matrices obtained in step S3, obtaining the feature matrices of the three channels;
S5, fusing the feature matrices of the three channels obtained in step S4 into a new two-dimensional matrix;
S6, processing the sub-module images obtained in step S1 one by one according to steps S2-S5 and fusing the results, thereby obtaining the two-dimensional matrix of the complete image;
S7, treating the two-dimensional matrix of the complete image obtained in step S6 as a new original two-dimensional image matrix, computing its right- and left-multiplying projection matrices again, and finally obtaining the feature matrix; the feature matrix of each training sample is obtained in the same way;
S8, obtaining the feature matrix of each test sample by the same method as steps S2-S7, and comparing the feature matrix of the test sample with the feature matrices of the training samples for classification; samples whose feature matrices are close are assigned to the same class, which completes recognition.
2. The over-complete block feature extraction method for color images according to claim 1, characterized in that:
In step S1, the color face image to be partitioned is $A_i$ (1 ≤ i ≤ N), where N denotes the number of training samples, m the number of rows of the color face matrix and n its number of columns;
let $X = A_i$;
X is divided into c sub-modules, where $X_{ij}$ denotes the color sub-image module in the i-th row and j-th column of the block layout, and each $X_{ij}$ is converted into a p × q (1 ≤ p ≤ m, 1 ≤ q ≤ n) color sub-image matrix, p being its number of rows and q its number of columns;
In step S2, the channel information decomposition decomposes each sub-image module $X_{ij}$ obtained in step S1 according to the RGB color space, obtaining the three R, G, B pseudo-grayscale image matrices;
the pseudo-grayscale image matrices are expressed as follows:
$X_{ij}^{R} = \mathrm{R}(X_{ij})$   (S2-1)
$X_{ij}^{G} = \mathrm{G}(X_{ij})$   (S2-2)
$X_{ij}^{B} = \mathrm{B}(X_{ij})$   (S2-3)
the $X_{ij}$ obtained in step S1 is thus split into 3 channels, namely $X_{ij}^{R}$, $X_{ij}^{G}$ and $X_{ij}^{B}$, the variables i, j, m, n, p, q being consistent with step S1;
In step S3, the NIB2DPCA method is specified as follows:
let the number of training samples be M, let matrix U denote the left-multiplying projection matrix and matrix V the right-multiplying projection matrix, and let $A_i$ (1 ≤ i ≤ M) be an image matrix; bilaterally projecting $A_i$ yields the feature matrix $Y_i$ (1 ≤ i ≤ M), namely:
$Y_i = U^{\mathrm{T}} A_i V$   (1)
where m denotes the number of rows of U, n the number of rows of V, l the number of columns of U and r the number of columns of V;
to obtain the left- and right-multiplying projection matrices, the trace of the covariance of the projected feature matrix, i.e. the total scatter of the projected feature matrices, is used as the criterion:
$J(U,V) = \mathrm{tr}\{\mathbb{E}[(U^{\mathrm{T}}AV - \mathbb{E}(U^{\mathrm{T}}AV))(U^{\mathrm{T}}AV - \mathbb{E}(U^{\mathrm{T}}AV))^{\mathrm{T}}]\}$   (2)
regarding the left-multiplying projection matrix U as the identity matrix, formula (2) is converted into:
$J(V) = \mathrm{tr}\{V^{\mathrm{T}}[\frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})^{\mathrm{T}}(A_i - \bar{A})]V\}$   (3)
where $\bar{A} = \frac{1}{M}\sum_{i=1}^{M} A_i$ is the mean training image matrix; the converted formula (3) is to be maximized, the vectors maximizing formula (3) are the right-multiplying projection vectors, and the right-multiplying projection vector matrix is
$G_V = \frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})^{\mathrm{T}}(A_i - \bar{A})$   (4)
let the eigenvectors corresponding to the r largest eigenvalues of the above right-multiplying projection vector matrix be $V_1, V_2, \ldots, V_r$, each eigenvector being one column, where r lies between 1 and the total number of eigenvectors of the matrix; these eigenvectors carry the row features of the original image matrix, so the r eigenvectors constitute a feature matrix, and $V = [V_1, V_2, \ldots, V_r]$ is the right-multiplying projection matrix;
regarding V as the identity matrix, formula (2) is converted into:
$J(U) = \mathrm{tr}\{U^{\mathrm{T}}[\frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})(A_i - \bar{A})^{\mathrm{T}}]U\}$   (5)
the vectors maximizing formula (5) are called the left-multiplying projection vectors, and the left-multiplying projection vector matrix is
$G_U = \frac{1}{M}\sum_{i=1}^{M}(A_i - \bar{A})(A_i - \bar{A})^{\mathrm{T}}$   (6)
let the eigenvectors corresponding to the l largest eigenvalues of the above left-multiplying projection vector matrix be $U_1, U_2, \ldots, U_l$, each eigenvector being one column, where l lies between 1 and the total number of eigenvectors of the matrix; these eigenvectors carry the column features of the original image matrix, so the l eigenvectors constitute a feature matrix, and $U = [U_1, U_2, \ldots, U_l]$ is the left-multiplying projection matrix;
substituting the left- and right-multiplying projection matrices obtained above into formula (1) yields the feature matrix of NIB2DPCA;
following this method, the right-multiplying projection vector matrix of the R-channel pseudo-grayscale image matrices is obtained as:
$G_V^{R} = \frac{1}{M}\sum_{k=1}^{M}(X_{ij,k}^{R} - \bar{X}_{ij}^{R})^{\mathrm{T}}(X_{ij,k}^{R} - \bar{X}_{ij}^{R})$   (S3-1)
where $X_{ij,k}^{R}$ denotes the R-channel pseudo-grayscale matrix of sub-module $X_{ij}$ of the k-th training sample and $\bar{X}_{ij}^{R}$ their mean; let the eigenvectors corresponding to the r largest eigenvalues of this matrix be $V_1^{R}, V_2^{R}, \ldots, V_r^{R}$, so that $V_R = [V_1^{R}, V_2^{R}, \ldots, V_r^{R}]$ is the right-multiplying projection matrix of the R channel;
the left-multiplying projection vector matrix is obtained likewise:
$G_U^{R} = \frac{1}{M}\sum_{k=1}^{M}(X_{ij,k}^{R} - \bar{X}_{ij}^{R})(X_{ij,k}^{R} - \bar{X}_{ij}^{R})^{\mathrm{T}}$   (S3-2)
let the eigenvectors corresponding to the l largest eigenvalues of this matrix be $U_1^{R}, U_2^{R}, \ldots, U_l^{R}$, so that $U_R = [U_1^{R}, U_2^{R}, \ldots, U_l^{R}]$ is the left-multiplying projection matrix of the R channel;
in the same way, the right- and left-multiplying projection matrices obtained from the G-channel pseudo-grayscale image matrices are $V_G$ and $U_G$ respectively;
in the same way, the right- and left-multiplying projection matrices obtained from the B-channel pseudo-grayscale image matrices are $V_B$ and $U_B$ respectively;
in step S3, T denotes matrix transposition;
In step S4, the three R, G, B pseudo-grayscale image matrices $X_{ij}^{R}$, $X_{ij}^{G}$, $X_{ij}^{B}$ obtained in step S2 are bilaterally projected, and the feature matrices of the three channels are obtained as:
$Y_{ij}^{R} = U_R^{\mathrm{T}} X_{ij}^{R} V_R$   (S4-1)
$Y_{ij}^{G} = U_G^{\mathrm{T}} X_{ij}^{G} V_G$   (S4-2)
$Y_{ij}^{B} = U_B^{\mathrm{T}} X_{ij}^{B} V_B$   (S4-3)
in step S4, T denotes matrix transposition;
In step S5, the feature matrices of the three channels obtained in step S4 are fused, and the resulting new two-dimensional matrix is expressed as:
$Y_{ij} = [\mathrm{vec}(Y_{ij}^{R})\;\; \mathrm{vec}(Y_{ij}^{G})\;\; \mathrm{vec}(Y_{ij}^{B})]^{\mathrm{T}}$   (S5-1)
where vec(*) denotes the vectorization of the matrix *;
in step S5, T denotes matrix transposition;
In step S6, the two-dimensional matrix of the complete image is obtained as
$B = [\,Y_{(1)}\;\; Y_{(2)}\;\; \cdots\;\; Y_{(c)}\,]$   (S6-1);
In step S7, the feature matrix is finally obtained as
$Z = U_F^{\mathrm{T}} B V_F$   (S7-1)
and for each training sample $A_i$ (1 ≤ i ≤ N) the feature matrix $Y_i$ is
$Y_i = U_F^{\mathrm{T}} B_i V_F$   (S7-2)
in step S7, T denotes matrix transposition;
In step S8, suppose there are N1 test samples; the feature matrix $Y_k$ (1 ≤ k ≤ N1) obtained for each test sample is compared with $Y_i$ for classification.
CN201510865923.8A 2015-12-01 2015-12-01 Over-complete block feature extraction method for color images Active CN105447468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510865923.8A CN105447468B (en) 2015-12-01 2015-12-01 Over-complete block feature extraction method for color images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510865923.8A CN105447468B (en) 2015-12-01 2015-12-01 Over-complete block feature extraction method for color images

Publications (2)

Publication Number Publication Date
CN105447468A (en) 2016-03-30
CN105447468B (en) 2019-04-16

Family

ID=55557628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510865923.8A Active CN105447468B (en) 2015-12-01 2015-12-01 Over-complete block feature extraction method for color images

Country Status (1)

Country Link
CN (1) CN105447468B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080013798A1 (en) * 2006-06-12 2008-01-17 Fotonation Vision Limited Advances in extending the aam techniques from grayscale to color images
CN1979523A (en) * 2006-11-02 2007-06-13 中山大学 2-D main-element human-face analysis and identifying method based on relativity in block
CN101021897A (en) * 2006-12-27 2007-08-22 中山大学 Two-dimensional linear discrimination human face analysis identificating method based on interblock correlation
CN104318219A (en) * 2014-10-31 2015-01-28 上海交通大学 Face recognition method based on combination of local features and global features

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Journal of Computer Applications (《计算机应用》) *
Application Research of Computers (《计算机应用研究》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765852A (en) * 2019-09-09 2020-02-07 珠海格力电器股份有限公司 Method and device for acquiring face direction in image
CN110765852B (en) * 2019-09-09 2022-06-14 珠海格力电器股份有限公司 Method and device for acquiring face direction in image
CN113191386A (en) * 2021-03-26 2021-07-30 中国矿业大学 Chromosome classification model based on grid reconstruction learning
CN113191386B (en) * 2021-03-26 2023-11-03 中国矿业大学 Chromosome classification model based on grid reconstruction learning

Also Published As

Publication number Publication date
CN105447468B (en) 2019-04-16

Similar Documents

Publication Publication Date Title
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN111814719B (en) Skeleton behavior recognition method based on 3D space-time diagram convolution
CN107292813A (en) A kind of multi-pose Face generation method based on generation confrontation network
CN110263912A (en) A kind of image answering method based on multiple target association depth reasoning
CN107463920A (en) A kind of face identification method for eliminating partial occlusion thing and influenceing
CN107506722A (en) One kind is based on depth sparse convolution neutral net face emotion identification method
CN111681178B (en) Knowledge distillation-based image defogging method
CN112990296B (en) Image-text matching model compression and acceleration method and system based on orthogonal similarity distillation
CN106570464A (en) Human face recognition method and device for quickly processing human face shading
CN108038420A (en) A kind of Human bodys' response method based on deep video
CN105373777A (en) Face recognition method and device
CN105095857B (en) Human face data Enhancement Method based on key point perturbation technique
CN109299701A (en) Expand the face age estimation method that more ethnic group features cooperate with selection based on GAN
CN110175248A (en) A kind of Research on face image retrieval and device encoded based on deep learning and Hash
CN105893947A (en) Bi-visual-angle face identification method based on multi-local correlation characteristic learning
CN111881716A (en) Pedestrian re-identification method based on multi-view-angle generation countermeasure network
CN106815854A (en) A kind of Online Video prospect background separation method based on normal law error modeling
CN107066979A (en) A kind of human motion recognition method based on depth information and various dimensions convolutional neural networks
CN104318215A (en) Cross view angle face recognition method based on domain robustness convolution feature learning
CN103295019A (en) Self-adaptive Chinese fragment restoration method based on probability statistics
CN105447468A (en) Color image over-complete block feature extraction method
CN113239866B (en) Face recognition method and system based on space-time feature fusion and sample attention enhancement
CN111401116A (en) Bimodal emotion recognition method based on enhanced convolution and space-time L STM network
CN117115911A (en) Hypergraph learning action recognition system based on attention mechanism
CN112528077A (en) Video face retrieval method and system based on video embedding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211210

Address after: 1302-3, floor 13, No. 6, Zhongguancun South Street, Haidian District, Beijing 100086

Patentee after: Beijing Wanzhi Qianhong Technology Co., Ltd

Address before: 214121 Wuxi Institute of Technology, 1600 Gao Lang Xi Road, Wuxi, Jiangsu

Patentee before: WUXI INSTITUTE OF TECHNOLOGY

TR01 Transfer of patent right