CN104050452A - Facial image illumination removal method based on DCT and partial standardization - Google Patents

Facial image illumination removal method based on DCT and partial standardization Download PDF

Info

Publication number
CN104050452A
Authority
CN
China
Prior art keywords
illumination
dct
image
sigma
coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410283151.2A
Other languages
Chinese (zh)
Inventor
赵明华
莫瑞阳
丁晓枫
原永芹
王映辉
曹慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201410283151.2A priority Critical patent/CN104050452A/en
Publication of CN104050452A publication Critical patent/CN104050452A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

Disclosed is a facial image illumination removal method based on DCT and local normalization. The sizes of the images in an input image set are normalized; a digital image f(x, y) is then selected, transformed to the logarithm domain, and subjected to the two-dimensional DCT, giving a two-dimensional DCT coefficient matrix F. The low-frequency DCT coefficients are discarded, giving a matrix F'; the two-dimensional inverse DCT of F' gives an image with large-scale illumination removed; this image is divided into small blocks and locally normalized, giving the final normalized image. The method removes large-scale illumination with the two-dimensional DCT and small-scale illumination with local normalization, markedly reduces the influence of illumination variation on the input images, and improves the face recognition rate under strong illumination variation.

Description

Facial image illumination removal method based on DCT and local normalization
Technical field
The invention belongs to the technical field of image processing, and specifically relates to a facial image illumination removal method based on the discrete cosine transform (DCT) and local normalization.
Background technology
In recent years, face recognition research has received wide attention. Illumination, pose, and expression are the three key factors that affect face recognition accuracy; among them the illumination factor, and in particular the variation of natural ambient light, cannot be controlled artificially, so illumination processing is a step that every face recognition system must perform. Most face recognition systems place certain restrictions on the illumination conditions, assuming that the images to be processed are acquired under approximately uniform illumination; they allow only small variations in illumination, but real illumination is often non-uniform, and over-bright or over-dark images, as well as shadows caused by oblique light, side light, and strong light, all degrade performance significantly. How to reduce the influence of illumination on face recognition has therefore attracted the attention of many researchers.
At present, methods for handling face recognition under varying illumination fall into three categories: extracting illumination-invariant features, modelling the illumination variation, and normalizing the illumination conditions. The first approach extracts features of faces that are invariant under different illumination conditions, or features insensitive to illumination, in the hope that these features yield a high recognition rate. The idea is direct, but both the theoretical analysis and the experimental results of Moses, Adini, et al. show that "no representation by itself can overcome variations in the direction of illumination." The second approach builds a model of imaging under illumination variation and represents the illumination-induced changes in a suitable subspace. Experiments show that this approach works well, but its computational cost is high and it is difficult to realize quickly. The third approach applies a normalization or regularization to the facial image, eliminating the influence of illumination variation before recognition. It produces images in a canonical form and can be applied in any existing face recognition method.
Summary of the invention
The object of the invention is to provide a facial image illumination removal method based on DCT and local normalization that effectively removes the influence of illumination and improves the face recognition rate under strong illumination variation.
The technical solution adopted by the invention is a facial image illumination removal method based on DCT and local normalization, comprising the following steps:
Step 1: normalize the sizes of the images in the input image set, then select one digital image f(x, y), transform it to the logarithm domain, and apply the two-dimensional discrete cosine transform (DCT), obtaining the two-dimensional DCT coefficient matrix F;
Step 2: discard the low-frequency DCT coefficients of the matrix F obtained in step 1, obtaining the matrix F';
Step 3: apply the two-dimensional inverse DCT to the matrix F' obtained in step 2, obtaining an image with large-scale illumination removed;
Step 4: divide the image obtained in step 3 into 5 × 5 blocks and apply local normalization to each block, obtaining the final normalized image.
The invention is further characterized as follows.
Step 1 is specifically: transform the image from the spatial domain to the frequency domain by the two-dimensional DCT, whose formula is:
F(u,v) = c(u)\,c(v) \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) \cos\frac{(2x+1)u\pi}{2M} \cos\frac{(2y+1)v\pi}{2N}    (1)
In formula (1), u = 0, 1, 2, ..., M−1; v = 0, 1, 2, ..., N−1; x = 0, 1, 2, ..., M−1; y = 0, 1, 2, ..., N−1, where M is the height of the image and N its width, and
c(u) = \begin{cases} \sqrt{1/M}, & u = 0 \\ \sqrt{2/M}, & 1 \le u \le M-1 \end{cases}, \qquad c(v) = \begin{cases} \sqrt{1/N}, & v = 0 \\ \sqrt{2/N}, & 1 \le v \le N-1 \end{cases};
The coefficients F(u, v) form the two-dimensional DCT coefficient matrix F, of the same size as the transformed image; F(0, 0) is the DC coefficient and the remaining entries are the AC coefficients.
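As a minimal sketch (not part of the patent), formula (1) can be computed with SciPy, whose orthonormal 2-D DCT-II matches the c(u), c(v) scaling above; the function name and the +1 offset before the logarithm are illustrative assumptions:

```python
import numpy as np
from scipy.fft import dctn

def log_dct(image):
    """Step 1: transform an image to the logarithm domain and take its
    2-D DCT.  norm='ortho' reproduces the c(u), c(v) factors of (1)."""
    log_img = np.log(image.astype(np.float64) + 1.0)  # +1 avoids log(0)
    return dctn(log_img, norm='ortho')
```

For a constant image of grey level g, every AC coefficient is 0 and the DC coefficient equals sqrt(MN)·log(g + 1), consistent with the role of F(0, 0) described above.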
Step 2 is specifically: set the low-frequency DCT coefficients to 0, discarding the low-frequency components of the facial image and thereby removing the large-scale illumination effect. Setting a low-frequency DCT coefficient to 0 is equivalent to subtracting the product of the corresponding DCT basis image and that coefficient from the original image. Let:
T(u,v) = c(u)\,c(v)\,F(u,v) \cos\frac{(2x+1)u\pi}{2M} \cos\frac{(2y+1)v\pi}{2N}    (2)
Setting the n DCT coefficients in the low-frequency region to 0, formulas (1) and (2) give:
f'(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} T(u,v) - \sum_{i=1}^{n} T(u_i,v_i) = f(x,y) - \sum_{i=1}^{n} T(u_i,v_i)    (3)
Combining formulas (2) and (3), the discarded term \sum_{i=1}^{n} T(u_i,v_i) is approximated as the illumination component, and f'(x, y) is taken as the facial image under normalized illumination in the logarithm domain; discarding the low-frequency DCT coefficients in the logarithm domain is thus equivalent to removing the illumination.
Let the discarding order be m; then the first m(m+1)/2 DCT coefficients in zigzag-scan order are set to 0. At the same time, since the DC coefficient carries much image information, it is set to a fixed value given by:
F(0,0) = \log\mu \cdot \sqrt{MN} = \log\!\Bigl(\frac{1}{PMN} \sum_{t=1}^{P} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f_t(x,y)\Bigr) \cdot \sqrt{MN}    (4)
In formula (4), μ is the mean pixel grey level of all images in the input image set, f_t is the t-th image, and P is the number of images in the set.
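A hedged sketch of the discarding rule (the helper name is an assumption; the first m(m+1)/2 coefficients in zigzag order are exactly the entries with u + v ≤ m − 1):

```python
import numpy as np

def discard_low_freq(F, m, mean_gray):
    """Step 2: zero the first m(m+1)/2 zigzag-ordered DCT coefficients
    (entries with u + v <= m - 1), then reset the DC coefficient to the
    fixed value of formula (4).  mean_gray is the set-wide mean grey
    level mu."""
    M, N = F.shape
    Fp = F.copy()
    for u in range(min(m, M)):
        for v in range(min(m - u, N)):   # u + v <= m - 1
            Fp[u, v] = 0.0
    Fp[0, 0] = np.log(mean_gray) * np.sqrt(M * N)  # formula (4)
    return Fp
```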
In step 3, the two-dimensional inverse DCT formula is:
f(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} c(u)\,c(v)\,F(u,v) \cos\frac{(2x+1)u\pi}{2M} \cos\frac{(2y+1)v\pi}{2N}    (5)
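With orthonormal scaling, the inverse transform of formula (5) is SciPy's `idctn`; a one-function sketch (the function name is assumed):

```python
import numpy as np
from scipy.fft import dctn, idctn

def inverse_dct(F_prime):
    """Step 3: 2-D inverse DCT of formula (5); with norm='ortho' it is
    the exact inverse of the forward transform of formula (1)."""
    return idctn(F_prime, norm='ortho')
```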
In step 4, local normalization processes each patch g'(x, y) under a given illumination condition to obtain s(x, y) with zero mean and unit standard deviation:
s(x,y) = \frac{g'(x,y) - E\bigl(g'(x,y)\bigr)}{\sqrt{\mathrm{Var}\bigl(g'(x,y)\bigr)}}    (6)
Since an image under non-standard illumination and the same image under standard illumination give identical results after local normalization, formula (6) yields the normalized image.
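A sketch of the block-wise local normalization of formula (6) (the small `eps` guard against flat patches is an added safeguard, not in the patent):

```python
import numpy as np

def local_normalize(img, block=5, eps=1e-8):
    """Step 4: split the image into block x block patches and give each
    patch zero mean and unit standard deviation, as in formula (6)."""
    out = np.empty_like(img, dtype=np.float64)
    M, N = img.shape
    for i in range(0, M, block):
        for j in range(0, N, block):
            patch = img[i:i + block, j:j + block].astype(np.float64)
            out[i:i + block, j:j + block] = (patch - patch.mean()) / (patch.std() + eps)
    return out
```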
The beneficial effect of the invention is that the facial image illumination removal method based on DCT and local normalization removes large-scale illumination with the two-dimensional DCT and small-scale illumination with local normalization, markedly reduces the influence of illumination variation on the input images, improves the face recognition rate under strong illumination variation, and is robust to oblique light and strong light.
Brief description of the drawings
Fig. 1 is the flow chart of the facial image illumination removal method based on DCT and local normalization;
Fig. 2 shows the recognition rates obtained with the distance-based method on normalized images produced with different numbers of discarded coefficients;
Fig. 3 shows the recognition rates obtained with the PCA method on normalized images produced with different numbers of discarded coefficients;
Fig. 4 shows the recognition rates obtained with the distance-based method after normalizing the images with 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, and 13 × 13 blocks;
Fig. 5 shows the recognition rates obtained with the PCA method after normalizing the images with 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, and 13 × 13 blocks;
Fig. 6 shows the recognition rates obtained with the distance-based method when the DC coefficient is set to 0 versus set according to formula (4);
Fig. 7 shows the recognition rates obtained with the PCA method when the DC coefficient is set to 0 versus set according to formula (4);
Fig. 8 shows the recognition rates on subset 4 obtained with the distance-based method when the DC coefficient is set to 0 versus set according to formula (4);
Fig. 9 shows the recognition rates on subset 5 obtained with the distance-based method when the DC coefficient is set to 0 versus set according to formula (4);
Fig. 10 shows the recognition error rates obtained on subset 4 in the logarithm domain and in the non-logarithm domain;
Fig. 11 shows the recognition error rates obtained on subset 5 in the logarithm domain and in the non-logarithm domain.
Embodiment
The invention is described in detail below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the facial image illumination removal method based on DCT and local normalization comprises the following steps:
Step 1: normalize the sizes of the images in the input image set, then select one digital image f(x, y), transform it to the logarithm domain, and apply the two-dimensional discrete cosine transform (DCT), obtaining the two-dimensional DCT coefficient matrix F.
Specifically, the image is transformed from the spatial domain to the frequency domain by the two-dimensional DCT:
F(u,v) = c(u)\,c(v) \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) \cos\frac{(2x+1)u\pi}{2M} \cos\frac{(2y+1)v\pi}{2N}    (1)
In formula (1), u = 0, 1, 2, ..., M−1; v = 0, 1, 2, ..., N−1; x = 0, 1, 2, ..., M−1; y = 0, 1, 2, ..., N−1, where M is the height of the image and N its width, and
c(u) = \begin{cases} \sqrt{1/M}, & u = 0 \\ \sqrt{2/M}, & 1 \le u \le M-1 \end{cases}, \qquad c(v) = \begin{cases} \sqrt{1/N}, & v = 0 \\ \sqrt{2/N}, & 1 \le v \le N-1 \end{cases};
The coefficients F(u, v) form the two-dimensional DCT coefficient matrix F, of the same size as the transformed image; F(0, 0) is the DC coefficient and the remaining entries are the AC coefficients.
Step 2: discard the low-frequency DCT coefficients of the matrix F obtained in step 1, obtaining the matrix F'.
Specifically, the low-frequency DCT coefficients are set to 0, discarding the low-frequency components of the facial image and thereby removing the large-scale illumination effect; setting a low-frequency DCT coefficient to 0 is equivalent to subtracting the product of the corresponding DCT basis image and that coefficient from the original image. Let:
T(u,v) = c(u)\,c(v)\,F(u,v) \cos\frac{(2x+1)u\pi}{2M} \cos\frac{(2y+1)v\pi}{2N}    (2)
Setting the n DCT coefficients in the low-frequency region to 0, formulas (1) and (2) give:
f'(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} T(u,v) - \sum_{i=1}^{n} T(u_i,v_i) = f(x,y) - \sum_{i=1}^{n} T(u_i,v_i)    (3)
Combining formulas (2) and (3), the discarded term \sum_{i=1}^{n} T(u_i,v_i) is approximated as the illumination component, and f'(x, y) is taken as the facial image under normalized illumination in the logarithm domain; discarding the low-frequency DCT coefficients in the logarithm domain is thus equivalent to removing the illumination.
Let the discarding order be m; then the first m(m+1)/2 DCT coefficients in zigzag-scan order are set to 0. At the same time, since the DC coefficient carries much image information, it is set to a fixed value given by:
F(0,0) = \log\mu \cdot \sqrt{MN} = \log\!\Bigl(\frac{1}{PMN} \sum_{t=1}^{P} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f_t(x,y)\Bigr) \cdot \sqrt{MN}    (4)
In formula (4), μ is the mean pixel grey level of all images in the input image set, f_t is the t-th image, and P is the number of images in the set.
Step 3: apply the two-dimensional inverse DCT to the matrix F' obtained in step 2, obtaining an image with large-scale illumination removed. The two-dimensional inverse DCT formula is:
f(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} c(u)\,c(v)\,F(u,v) \cos\frac{(2x+1)u\pi}{2M} \cos\frac{(2y+1)v\pi}{2N}    (5)
Step 4: divide the image obtained in step 3 into 5 × 5 blocks and apply local normalization to each block, obtaining the final normalized image.
Local normalization processes each patch g'(x, y) under a given illumination condition to obtain s(x, y) with zero mean and unit standard deviation:
s(x,y) = \frac{g'(x,y) - E\bigl(g'(x,y)\bigr)}{\sqrt{\mathrm{Var}\bigl(g'(x,y)\bigr)}}    (6)
Since an image under non-standard illumination and the same image under standard illumination give identical results after local normalization, formula (6) yields the normalized image.
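The four steps above can be sketched end to end in Python (an illustrative reimplementation, not the patent's code; m = 12 and block = 5 follow the values chosen in the experiments below, and the per-image mean is used as a stand-in for the set-wide mean μ when only one image is given):

```python
import numpy as np
from scipy.fft import dctn, idctn

def remove_illumination(image, m=12, block=5, mean_gray=None):
    """Steps 1-4: log-domain DCT, low-frequency discarding with the DC
    reset of formula (4), inverse DCT, and block-wise local
    normalization."""
    img = image.astype(np.float64)
    if mean_gray is None:
        mean_gray = img.mean()                 # stand-in for mu
    M, N = img.shape
    F = dctn(np.log(img + 1.0), norm='ortho')  # step 1
    for u in range(min(m, M)):                 # step 2: u + v <= m - 1
        for v in range(min(m - u, N)):
            F[u, v] = 0.0
    F[0, 0] = np.log(mean_gray) * np.sqrt(M * N)
    g = idctn(F, norm='ortho')                 # step 3
    out = np.empty_like(g)                     # step 4
    for i in range(0, M, block):
        for j in range(0, N, block):
            p = g[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = (p - p.mean()) / (p.std() + 1e-8)
    return out
```

Because every block is given zero mean, the whole output image also has (near-)zero mean regardless of the original illumination.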
The facial image illumination removal method based on DCT and local normalization thus removes large-scale illumination with the two-dimensional DCT and small-scale illumination with local normalization, markedly reduces the influence of illumination variation on the input images, improves the face recognition rate under strong illumination variation, and is robust to oblique light and strong light.
The principle of the method is explained below.
(1) From the 5 subsets of the Extended Yale B database, 8 images of each of 20 persons were chosen at random as training samples, and 32 other images of each of these 20 persons as test samples. The method of the invention was applied; with different numbers of discarded coefficients and 5 × 5 blocks for normalization, the recognition rates obtained with the distance-based method and with the PCA method are shown in Figs. 2 and 3. They show that the recognition rate peaks when the number of discarded coefficients lies between 6 and 20: when too few coefficients are discarded, the illumination is not well removed; when too many are discarded, much useful identity information is lost.
(2) With 12 discarded coefficients, the images were normalized with 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, and 13 × 13 blocks; the recognition rates obtained with the distance-based method and with the PCA method are shown in Figs. 4 and 5, where the abscissa is the block size used in local normalization and the ordinate the recognition rate. Blocks from 5 × 5 up to 13 × 13 all reach a very high recognition rate; weighing recognition rate against computational cost, 5 × 5 blocks are used for local normalization.
(3) From the 5 subsets of the Extended Yale B database, 8 images of each of 20 persons were chosen at random as training samples, and 32 other images of each as test samples. During illumination removal, the DC coefficient was either set directly to 0 or set according to formula (4); the recognition rates obtained with the distance-based method and with the PCA method are shown in Figs. 6 and 7.
Next, with the 1st subset of the Yale B database as training set, the 4th and 5th subsets were used in turn as test sets. The recognition rates on subsets 4 and 5 obtained with the distance-based method, with the DC coefficient set directly to 0 versus set according to formula (4), are shown in Figs. 8 and 9. Figures 6-9 show that the DC coefficient of formula (4) gives clearly better recognition rates than setting it to zero, confirming that the DC coefficient carries much image information and must not be discarded casually.
(4) With subset 1 of the Yale B face database as training set, subsets 4 and 5 were used in turn as test sets. The recognition error rates obtained on subsets 4 and 5 in the logarithm domain and in the non-logarithm domain are shown in Figs. 10 and 11. They show that performing the second-stage illumination removal directly in the logarithm domain differs little from transforming back and performing it in the non-logarithm domain, mainly because the local normalization of the second stage compensates the error introduced in the non-logarithm domain.
Comparing the illumination normalization results of the logarithm domain and the non-logarithm domain shows that, although the two differ considerably after the first-stage illumination removal, the two images are almost identical after the second-stage local normalization.
From each of the 5 subsets, 6 samples per person were chosen to form the training set; the remaining samples of that subset and all samples of the other subsets were used as test set. After illumination removal with the method of the invention, the recognition error rates produced directly by the distance-based method are shown in Table 1.
Table 1 Recognition error rates for different training and test samples
In Table 1, the rows give the training subset and the columns the test subset of each experiment. When subset 1, 2, 3, or 4 is used as training set and subset 1, 2, or 3 as test set, the recognition error rate is 0; when the training or test set is a subset strongly affected by oblique light or strong light, a small number of recognition errors occur.
Table 2 compares the recognition error rates of several common illumination processing methods and the method of the invention on the Yale B face database. The 1st subset of the Yale B database was chosen as training set, and the 3rd, 4th, and 5th subsets were used in turn as test sets.
Table 2 Recognition error rates of different methods on the Yale B face database
The data in Table 2 show that with the method of the invention the recognition error rates on subsets 4 and 5 are only 1.43% and 0.53%, clearly better than the other methods; moreover, the error rate on subset 5 is lower than on subset 4, showing that the invention is robust to oblique light and strong light.

Claims (5)

1. A facial image illumination removal method based on DCT and local normalization, characterized by comprising the following steps:
step 1: normalizing the sizes of the images in the input image set, then selecting one digital image f(x, y), transforming it to the logarithm domain, and applying the two-dimensional discrete cosine transform, obtaining the two-dimensional DCT coefficient matrix F;
step 2: discarding the low-frequency DCT coefficients of the matrix F obtained in step 1, obtaining the matrix F';
step 3: applying the two-dimensional inverse DCT to the matrix F' obtained in step 2, obtaining an image with large-scale illumination removed;
step 4: dividing the image obtained in step 3 into 5 × 5 blocks and applying local normalization to each block, obtaining the final normalized image.
2. The facial image illumination removal method based on DCT and local normalization according to claim 1, characterized in that step 1 is specifically: transforming the image from the spatial domain to the frequency domain by the two-dimensional DCT, whose formula is:
F(u,v) = c(u)\,c(v) \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) \cos\frac{(2x+1)u\pi}{2M} \cos\frac{(2y+1)v\pi}{2N}    (1)
In formula (1), u = 0, 1, 2, ..., M−1; v = 0, 1, 2, ..., N−1; x = 0, 1, 2, ..., M−1; y = 0, 1, 2, ..., N−1, where M is the height of the image and N its width, and
c(u) = \begin{cases} \sqrt{1/M}, & u = 0 \\ \sqrt{2/M}, & 1 \le u \le M-1 \end{cases}, \qquad c(v) = \begin{cases} \sqrt{1/N}, & v = 0 \\ \sqrt{2/N}, & 1 \le v \le N-1 \end{cases};
the coefficients F(u, v) form the two-dimensional DCT coefficient matrix F, of the same size as the transformed image; F(0, 0) is the DC coefficient and the remaining entries are the AC coefficients.
3. The facial image illumination removal method based on DCT and local normalization according to claim 1, characterized in that step 2 is specifically: setting the low-frequency DCT coefficients to 0, discarding the low-frequency components of the facial image and thereby removing the large-scale illumination effect, wherein setting a low-frequency DCT coefficient to 0 is equivalent to subtracting the product of the corresponding DCT basis image and that coefficient from the original image; letting:
T(u,v) = c(u)\,c(v)\,F(u,v) \cos\frac{(2x+1)u\pi}{2M} \cos\frac{(2y+1)v\pi}{2N}    (2)
and setting the n DCT coefficients in the low-frequency region to 0, formulas (1) and (2) give:
f'(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} T(u,v) - \sum_{i=1}^{n} T(u_i,v_i) = f(x,y) - \sum_{i=1}^{n} T(u_i,v_i)    (3)
combining formulas (2) and (3), the discarded term \sum_{i=1}^{n} T(u_i,v_i) is approximated as the illumination component, and f'(x, y) is taken as the facial image under normalized illumination in the logarithm domain; discarding the low-frequency DCT coefficients in the logarithm domain is thus equivalent to removing the illumination;
letting the discarding order be m, the first m(m+1)/2 DCT coefficients in zigzag-scan order are set to 0; at the same time, since the DC coefficient carries much image information, it is set to a fixed value given by:
F(0,0) = \log\mu \cdot \sqrt{MN} = \log\!\Bigl(\frac{1}{PMN} \sum_{t=1}^{P} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f_t(x,y)\Bigr) \cdot \sqrt{MN}    (4)
where, in formula (4), μ is the mean pixel grey level of all images in the input image set, f_t is the t-th image, and P is the number of images in the set.
4. The facial image illumination removal method based on DCT and local normalization according to claim 1, characterized in that in step 3 the two-dimensional inverse DCT formula is:
f(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} c(u)\,c(v)\,F(u,v) \cos\frac{(2x+1)u\pi}{2M} \cos\frac{(2y+1)v\pi}{2N}    (5)
5. The facial image illumination removal method based on DCT and local normalization according to claim 1, characterized in that in step 4 local normalization processes each patch g'(x, y) under a given illumination condition to obtain s(x, y) with zero mean and unit standard deviation:
s(x,y) = \frac{g'(x,y) - E\bigl(g'(x,y)\bigr)}{\sqrt{\mathrm{Var}\bigl(g'(x,y)\bigr)}}    (6)
since an image under non-standard illumination and the same image under standard illumination give identical results after local normalization, formula (6) yields the normalized image.
CN201410283151.2A 2014-06-23 2014-06-23 Facial image illumination removal method based on DCT and partial standardization Pending CN104050452A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410283151.2A CN104050452A (en) 2014-06-23 2014-06-23 Facial image illumination removal method based on DCT and partial standardization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410283151.2A CN104050452A (en) 2014-06-23 2014-06-23 Facial image illumination removal method based on DCT and partial standardization

Publications (1)

Publication Number Publication Date
CN104050452A true CN104050452A (en) 2014-09-17

Family

ID=51503265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410283151.2A Pending CN104050452A (en) 2014-06-23 2014-06-23 Facial image illumination removal method based on DCT and partial standardization

Country Status (1)

Country Link
CN (1) CN104050452A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101261678A (en) * 2008-03-18 2008-09-10 中山大学 A method for normalizing face light on feature image with different size
US20130170759A1 (en) * 2007-11-29 2013-07-04 Viewdle Inc. Method and System of Person Identification by Facial Image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MINGHUA ZHAO ET AL.: "The discrete cosine transform (DCT) plus local normalization: a novel two-stage method for de-illumination in face recognition", Optica Applicata *
赵明华: "Research on Face Detection and Recognition Technology" (in Chinese), China Doctoral Dissertations Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
US8509536B2 (en) Character recognition device and method and computer-readable medium controlling the same
CN112434646A (en) Finished tea quality identification method based on transfer learning and computer vision technology
CN106127688B (en) A kind of super-resolution image reconstruction method and its system
Wu et al. Total variation-regularized weighted nuclear norm minimization for hyperspectral image mixed denoising
CN108829711B (en) Image retrieval method based on multi-feature fusion
CN103049760B (en) Based on the rarefaction representation target identification method of image block and position weighting
Tsneg et al. Defect detection of uneven brightness in low-contrast images using basis image representation
CN104700089A (en) Face identification method based on Gabor wavelet and SB2DLPP
CN103853724A (en) Multimedia data sorting method and device
CN105678249B (en) For the registered face face identification method different with face picture quality to be identified
CN108765322A (en) Image de-noising method and device
CN107590785A (en) A kind of Brillouin spectrum image-recognizing method based on sobel operators
CN112184672A (en) No-reference image quality evaluation method and system
Frantc et al. Machine learning approach for objective inpainting quality assessment
US9159123B2 (en) Image prior as a shared basis mixture model
Wang et al. A new method estimating linear gaussian filter kernel by image PRNU noise
CN106709915A (en) Image resampling operation detection method
CN102968793B (en) Based on the natural image of DCT domain statistical property and the discrimination method of computer generated image
Kaur et al. Illumination invariant face recognition
Dhillon et al. Edge-preserving image denoising using noise-enhanced patch-based non-local means
CN107423739A (en) Image characteristic extracting method and device
Viacheslav et al. Low-level features for inpainting quality assessment
CN104050452A (en) Facial image illumination removal method based on DCT and partial standardization
CN106650678B (en) Gabor wavelet subband dependency structure face identification method
Kanchana et al. Texture classification using discrete shearlet transform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140917