CN111292407B - Face illumination normalization method based on convolution self-coding network

Face illumination normalization method based on convolution self-coding network

Info

Publication number
CN111292407B
Authority
CN
China
Prior art keywords
frequency
picture
face
omega
org
Prior art date
Legal status
Active
Application number
CN202010102138.8A
Other languages
Chinese (zh)
Other versions
CN111292407A (en)
Inventor
达飞鹏 (Da Feipeng)
李春露 (Li Chunlu)
王辰星 (Wang Chenxing)
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN202010102138.8A
Publication of CN111292407A
Application granted
Publication of CN111292407B
Status: Active

Classifications

    • G06: COMPUTING; CALCULATING OR COUNTING (Section G, PHYSICS)
    • G06T 15/506: Illumination models (under G06T 15/00, 3D [Three Dimensional] image rendering; G06T 15/50, Lighting effects)
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting (under G06F 18/00, Pattern recognition; G06F 18/21, Design or setup of recognition systems)
    • G06N 3/045: Combinations of networks (under G06N 3/02, Neural networks; G06N 3/04, Architecture, e.g. interconnection topology)
    • G06N 3/08: Learning methods (under G06N 3/02, Neural networks)
    • G06T 5/10: Image enhancement or restoration by non-spatial domain filtering
    • G06T 2200/04: Indexing scheme for image data processing or generation involving 3D image data
    • G06T 2207/20052: Discrete cosine transform [DCT] (under G06T 2207/20048, Transform domain processing)
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30201: Face (under G06T 2207/30196, Human being; Person)

Abstract

The invention discloses a face illumination normalization method based on a convolutional self-encoder, aiming at improving face image quality and face recognition accuracy. The method comprises the following steps: face images under different illumination are generated with a Cook-Torrance illumination model and used to train a self-coding network, so that the network normalizes an input face to a frontal, uniformly lit condition. To recover the detail lost by the network, frequency-domain analysis and high-frequency information extraction are performed: the original image is Gaussian-blurred so that its blur level matches that of the network output, ensuring the blurred face is missing the same high-frequency information as the network result; the blurred picture is then compared with the original picture to find the frequency band in which the details lie; finally, the high-frequency information of the original picture is extracted and combined with the normalized picture in the frequency domain to obtain the final result. The method is highly robust and yields illumination normalization results with fine detail.

Description

Face illumination normalization method based on convolution self-coding network
Technical Field
The invention belongs to the technical field of face preprocessing and relates to a face illumination normalization method based on a convolutional self-coding network; in particular, it relates to a face illumination normalization method that performs illumination preprocessing with a convolutional self-coding network and extracts and restores detail features with frequency-domain analysis.
Background
In modern society, identification and verification of personal identity are important in many situations, such as passenger authentication at railway stations and airports, access-control systems in residential communities, and user authentication on smart devices. Compared with biometrics such as the iris and fingerprints, the face is the most user-friendly biometric because it can be captured without contact, so face recognition has broad application prospects. In practical systems, however, especially outdoors, illumination can change the appearance of a face considerably, which greatly harms recognition accuracy and hinders the practical adoption of face recognition.
Face illumination normalization converts a face picture taken under an arbitrary illumination condition into a face picture under frontal, uniform illumination. This task contributes greatly to the accuracy of face recognition and verification.
Disclosure of Invention
Technical problem:
To overcome the influence of uneven illumination on face recognition, the invention provides a highly robust illumination normalization method based on a convolutional self-coding network.
Technical solution:
To achieve the above purpose, the invention adopts the following technical solution:
An illumination normalization method based on a convolutional self-coding network comprises the following steps:
Step 1: generating face data samples under different illuminations using three-dimensional face data and an illumination model, and training a convolutional self-coding network with the generated face data samples as input and the corresponding three-dimensional face data as the supervision signal;
Step 2: inputting the original picture to be normalized into the convolutional self-coding network trained in step 1 to obtain a preliminarily normalized picture;
Step 3: blurring the original picture to be normalized with different Gaussian kernels to obtain a group of blurred samples of different blur levels, comparing each blurred sample with the preliminarily normalized picture, and taking the blurred sample whose blur level is closest to that of the preliminarily normalized picture as the reference picture;
Step 4: comparing the reference picture with the original picture to be normalized in the frequency domain to find the frequency band containing the high-frequency detail information lost in the normalization result output by the convolutional self-coding network, specifically comprising the following steps:
Step 4.1: using the DCT (discrete cosine transform), converting the original picture to be normalized I_org, the preliminarily normalized picture I_CAE, and the reference picture I_ref from the spatial domain to the frequency domain to obtain the corresponding DCT matrices C_org, C_CAE, and C_ref;
Step 4.2: mapping C_org, C_CAE, and C_ref to one-dimensional vectors c_org(ω), c_CAE(ω), and c_ref(ω), respectively, in order of frequency from low to high;
Step 4.3: in the frequency domain, for each frequency ω, calculating the error between the DCT coefficients of C_org and C_ref, the error being defined as:
[formula image in the original: definition of the DCT-coefficient error D(ω)]
Step 4.4: smoothing D(ω) and defining the boundary frequency between the high-frequency detail information and the low-frequency information as:
[formula image in the original: definition of the boundary frequency ω_b]
where α is a constant in the range 0.1 to 0.2, and (·)″ denotes the second derivative;
Step 4.5: defining the frequency band containing the high-frequency detail information as [ω_b, M×N], where M×N is the size of I_org;
Step 5: fusing, in the frequency domain, the high-frequency component of the original picture to be normalized with the low-frequency component of the preliminarily normalized picture, and transforming the result to the spatial domain to obtain the illumination normalization result of the original picture to be normalized.
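For orientation, the following minimal Python sketch strings steps 2 to 5 together for a single grayscale image. The helper names best_blur_match, boundary_frequency, and fuse_in_dct_domain are illustrative assumptions rather than identifiers from the patent; they are sketched alongside the embodiment below.

    import numpy as np

    def normalize_illumination(img_org, cae_model, alpha=0.15):
        """Steps 2-5 for one grayscale image; step 1 (network training) is
        assumed done. cae_model is assumed to be a trained Keras-style model."""
        # Step 2: preliminary normalization by the convolutional autoencoder
        img_cae = cae_model.predict(img_org[None, :, :, None])[0, :, :, 0]
        # Step 3: Gaussian-blurred copy of the input whose blur level best
        # matches the network output (SSIM comparison)
        img_ref = best_blur_match(img_org, img_cae)
        # Step 4: boundary frequency between lost detail and kept low frequencies
        omega_b = boundary_frequency(img_org, img_ref, alpha)
        # Step 5: fuse low frequencies of the output with high frequencies of
        # the input in the DCT domain, then invert the transform
        return fuse_in_dct_domain(img_cae, img_org, omega_b)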
Further, in step 3, the blurred samples are compared with the preliminarily normalized picture using the picture quality assessment index SSIM.
Further, the step 3 specifically includes the following steps:
step 3.1: change the size and standard deviation of the Gaussian kernel to the size ofMxN original picture I to be normalized org Carrying out fuzzy to obtain N fuzzy samples
Figure BDA0002387219230000023
Step 3.2: using SSIM to compare preliminarily normalized pictures I CAE And the quality similarity of N fuzzy samples to find out the similarity with I CAE Taking a fuzzy sample with the highest similarity as a reference sample I ref
Further, the step 5 specifically includes the following steps:
step 5.1: extraction of c org (omega) lies in the frequency band [ omega ] b ,M×N]High frequency detail information of (1), denoted as c orgH (ii) a Extraction of c CAE (omega) lies in the frequency band [1, omega ] b ]Information of (2), noted as c CAEL (ii) a C is to be orgH And c CAEL Fusing in frequency domain to obtain one-dimensional vector c out (ω)=[c CAEL ,c orgH ];
Step 5.2: one-dimensional vector c out (omega) is restored to the two-dimensional space to obtain a corresponding DCT coefficient matrix C out (omega), then DCT inverse transformation is carried out to obtain the final illumination normalization result I out
Beneficial effects:
The method can effectively normalize the illumination of face pictures taken under arbitrary illumination conditions while preserving facial detail, which greatly benefits subsequent face recognition.
Drawings
FIG. 1 is an overall flowchart of the face illumination normalization based on a convolutional self-coding network provided by the present invention;
FIG. 2 shows the training process of the network used herein;
FIG. 3 shows the order in which a DCT matrix is mapped to a one-dimensional vector;
FIG. 4 shows experimental results before and after illumination normalization of some samples from the AR database using the method described herein;
FIG. 5 shows experimental results before and after illumination normalization of some samples from the Extended Yale B database using the method described herein.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
The invention discloses an illumination normalization method based on a convolution self-coding network, which comprises the following steps:
Step 1: generating face data samples under different illuminations using three-dimensional face data and an illumination model, and training a convolutional self-coding network with the generated face data samples as input and the corresponding three-dimensional face data as the supervision signal;
Step 2: inputting the original picture to be normalized into the convolutional self-coding network trained in step 1 to obtain a preliminarily normalized picture;
Step 3: blurring the original picture to be normalized with different Gaussian kernels to obtain a group of blurred samples of different blur levels, comparing each blurred sample with the preliminarily normalized picture, and taking the blurred sample whose blur level is closest to that of the preliminarily normalized picture as the reference picture;
Step 3.1: varying the size and standard deviation of the Gaussian kernel, blurring the original picture to be normalized I_org of size M×N to obtain N blurred samples;
[formula image in the original: the set of N blurred samples]
Step 3.2: using SSIM, comparing the quality similarity between the preliminarily normalized picture I_CAE and the N blurred samples, and taking the blurred sample most similar to I_CAE as the reference sample I_ref;
Step 4: comparing the reference picture with the original picture to be normalized in the frequency domain to find the frequency band containing the high-frequency detail information lost in the normalization result output by the convolutional self-coding network, specifically:
Step 4.1: using the DCT (discrete cosine transform), converting the original picture to be normalized I_org, the preliminarily normalized picture I_CAE, and the reference picture I_ref from the spatial domain to the frequency domain to obtain the corresponding DCT matrices C_org, C_CAE, and C_ref;
Step 4.2: mapping C_org, C_CAE, and C_ref to one-dimensional vectors c_org(ω), c_CAE(ω), and c_ref(ω), respectively, in order of frequency from low to high;
Step 4.3: in the frequency domain, for each frequency ω, calculating the error between the DCT coefficients of C_org and C_ref, the error being defined as:
[formula image in the original: definition of the DCT-coefficient error D(ω)]
Step 4.4: smoothing D(ω) and defining the boundary frequency between the high-frequency detail information and the low-frequency information as:
[formula image in the original: definition of the boundary frequency ω_b]
where α is a constant in the range 0.1 to 0.2, and (·)″ denotes the second derivative;
Step 4.5: defining the frequency band containing the high-frequency detail information as [ω_b, M×N], where M×N is the size of I_org;
Step 5: fusing, in the frequency domain, the high-frequency component of the original picture to be normalized with the low-frequency component of the preliminarily normalized picture, and transforming the result to the spatial domain to obtain the illumination normalization result of the original picture to be normalized;
Step 5.1: extracting the part of c_org(ω) lying in the band [ω_b, M×N], i.e. the high-frequency detail information, denoted c_orgH; extracting the part of c_CAE(ω) lying in the band [1, ω_b], denoted c_CAEL; and fusing c_orgH and c_CAEL in the frequency domain to obtain the one-dimensional vector c_out(ω) = [c_CAEL, c_orgH];
Step 5.2: restoring the one-dimensional vector c_out(ω) to two dimensions to obtain the corresponding DCT coefficient matrix C_out, then applying the inverse DCT to obtain the final illumination normalization result I_out.
Examples
The invention discloses a face illumination normalization method based on a convolutional self-coding network, implemented as follows. On a Windows operating system, the network was first trained with the Python-based deep learning framework TensorFlow; the frequency-domain computations were performed on the Matlab platform. Training data were generated from the three-dimensional data of the Bosphorus database using the Cook-Torrance illumination model. The test data are the unoccluded face pictures of the AR database and the Extended Yale B database; AR samples are shown in the first row of FIG. 4, and Extended Yale B samples in the first row of FIG. 5. All face pictures were resized to 100×100.
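The patent does not specify how the Cook-Torrance model is evaluated during data generation; as a point of reference, a minimal Python sketch of the standard Cook-Torrance specular term (Beckmann distribution, Schlick Fresnel approximation, classic geometric attenuation) that such rendering could use is given below. The roughness and f0 defaults are assumptions, not values from the patent.

    import numpy as np

    def cook_torrance_specular(n, l, v, roughness=0.3, f0=0.04):
        """Cook-Torrance specular term for unit normal n, light l, view v."""
        h = (l + v) / np.linalg.norm(l + v)            # half vector
        nl = max(np.dot(n, l), 1e-6)
        nv = max(np.dot(n, v), 1e-6)
        nh = max(np.dot(n, h), 1e-6)
        vh = max(np.dot(v, h), 1e-6)
        m2 = roughness ** 2
        # Beckmann microfacet distribution D
        d = np.exp((nh**2 - 1.0) / (m2 * nh**2)) / (np.pi * m2 * nh**4)
        # Fresnel term F (Schlick approximation)
        f = f0 + (1.0 - f0) * (1.0 - vh) ** 5
        # Geometric attenuation G
        g = min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)
        return d * f * g / (4.0 * nl * nv)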
FIG. 1 is an overall flow chart of the method of the present invention, and the specific steps are as follows:
Step 1: face pictures generated under different illumination conditions with the Cook-Torrance illumination model are used as input to the convolutional self-coding network, with the corresponding three-dimensional data of the Bosphorus database as the ground-truth sample (supervision signal); the error between the ground truth and the network output is computed and back-propagated to update the network parameters, so that the network performs a preliminary illumination normalization. The network outputs are shown in the second rows of FIG. 4 and FIG. 5.
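A minimal training sketch in TensorFlow/Keras (the framework named above) follows; the layer counts, filter sizes, loss, and optimizer are assumptions, since the patent does not disclose the network architecture.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_cae(h=100, w=100):
        """Convolutional autoencoder for 100x100 grayscale faces; the layer
        configuration is an illustrative assumption."""
        inp = layers.Input((h, w, 1))
        x = layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(inp)
        x = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)
        x = layers.Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu')(x)
        x = layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(x)
        out = layers.Conv2D(1, 3, padding='same', activation='sigmoid')(x)
        return models.Model(inp, out)

    cae = build_cae()
    cae.compile(optimizer='adam', loss='mse')
    # x_lit: rendered faces under varied illumination; y_uniform: the matching
    # frontally, uniformly lit ground-truth renderings (supervision signal).
    # Both are placeholders here, hence the call is commented out:
    # cae.fit(x_lit, y_uniform, epochs=50, batch_size=32)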
Step 2: and (3) blurring the original picture to be normalized by using different low-pass Gaussian kernels to obtain a group of blurred samples with different blurring degrees. Then, quality comparison is carried out on the fuzzy samples and the preliminarily normalized picture respectively by using a picture quality assessment index SSIM, a sample which is the most similar to the picture fuzzy degree after preliminary normalization is found from the group of fuzzy samples and is used as a reference picture, and the specific steps are as follows:
step 2.1: setting a Gaussian kernel with the size range of 2 multiplied by 2 to 7 multiplied by 7 and the standard deviation of 0.2 to 0.5, and aiming at an original picture I with the size of M multiplied by N org Blurring to obtain a group of module samples;
step 2.2: comparing the preliminarily normalized pictures I by using SSIM index CAE And the quality similarity of the fuzzy samples, and finding out the most similar reference picture I ref
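A minimal sketch of this blur-bank search, assuming OpenCV and scikit-image as stand-ins for the unnamed implementation (OpenCV accepts only odd kernel sizes, so the odd sizes within the stated 2×2 to 7×7 range are used):

    import numpy as np
    import cv2
    from skimage.metrics import structural_similarity as ssim

    def best_blur_match(img_org, img_cae):
        """Blur the input with a bank of Gaussian kernels and return the blurred
        copy closest to the network output under SSIM (step 2 above)."""
        best, best_score = None, -1.0
        for k in (3, 5, 7):                              # odd sizes in the 2x2-7x7 range
            for sigma in np.arange(0.2, 0.55, 0.05):     # standard deviations 0.2-0.5
                cand = cv2.GaussianBlur(img_org, (k, k), sigma)
                score = ssim(img_cae, cand,
                             data_range=cand.max() - cand.min())
                if score > best_score:
                    best, best_score = cand, score
        return best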
Step 3: compare I_ref and I_org to find the frequency band containing the high-frequency detail information lost in the network output;
Step 3.1: using the DCT, transform I_org, I_CAE, and I_ref from the spatial domain to the frequency domain to obtain the corresponding DCT matrices C_org, C_CAE, and C_ref;
Step 3.2: map the two-dimensional matrices C_org, C_CAE, and C_ref to one-dimensional vectors c_org(ω), c_CAE(ω), and c_ref(ω) in the order of the arrows in FIG. 3;
Step 3.3: in the frequency domain, for each frequency ω, C is calculated org And C ref Error of the DCT coefficients, the error being defined as:
Figure BDA0002387219230000051
Step 3.4: smooth the discrete function D(ω) and define the boundary frequency between the high-frequency detail information and the low-frequency information as:
[formula image in the original: definition of the boundary frequency ω_b]
where α = 0.15 gives the best results;
Step 3.5: define the high-frequency detail components as the components lying in the band [ω_b, M×N].
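A sketch of steps 3.1 to 3.5, assuming SciPy. Because the defining formulas appear only as images in the original, the absolute coefficient difference used for D(ω) and the curvature threshold used for ω_b are plausible readings, not the patent's exact definitions; the anti-diagonal traversal likewise approximates the arrow order of FIG. 3.

    import numpy as np
    from scipy.fftpack import dct
    from scipy.ndimage import gaussian_filter1d

    def zigzag_indices(m, n):
        """Traversal of an m x n DCT matrix from low to high frequency
        (anti-diagonal sweep; the exact arrow order of FIG. 3 may differ)."""
        return sorted(((i, j) for i in range(m) for j in range(n)),
                      key=lambda p: (p[0] + p[1], p[0]))

    def boundary_frequency(img_org, img_ref, alpha=0.15):
        """2-D DCT of I_org and I_ref, coefficient error D(omega) along the
        low-to-high frequency order, smoothing, and an assumed second-derivative
        boundary criterion."""
        dct2 = lambda a: dct(dct(a.astype(float).T, norm='ortho').T, norm='ortho')
        c_org, c_ref = dct2(img_org), dct2(img_ref)
        order = zigzag_indices(*img_org.shape)
        d = np.array([abs(c_org[i, j] - c_ref[i, j]) for i, j in order])
        d = gaussian_filter1d(d, sigma=5)        # smooth D(omega)
        d2 = np.gradient(np.gradient(d))         # second derivative D''(omega)
        # assumed reading of the boundary rule: first omega where the curvature
        # exceeds alpha times its peak magnitude
        return int(np.argmax(d2 > alpha * np.abs(d2).max()))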
Step 4: fuse the high-frequency component of I_org with the low-frequency component of I_CAE in the frequency domain to obtain the final result.
Step 4.1: extract the part of c_org(ω) lying in the band [ω_b, M×N], denoted c_orgH; extract the part of c_CAE(ω) lying in the band [1, ω_b], denoted c_CAEL; and fuse them in the frequency domain to obtain c_out(ω) = [c_CAEL, c_orgH];
Step 4.2: restore the one-dimensional vector c_out(ω) to two dimensions to obtain the DCT coefficient matrix C_out, then apply the inverse DCT to obtain the final result I_out, shown in the third rows of FIG. 4 and FIG. 5.
Experimental validation of the method:
A face recognition experiment was performed on the Extended Yale B database, which is divided into five subsets by illumination angle: 0-12°, 13-25°, 26-50°, 51-77°, and above 77°. Taking the first subset as the gallery, each of the other four subsets was compared with it pixel by pixel. After illumination normalization, the recognition rates of the four subsets are 99.56%, 94.17%, 67.32%, and 83.24%, respectively, for an overall recognition rate of 85.06%. Before illumination normalization, the corresponding recognition rates are 67.11%, 11.21%, 5.7%, and 2.38%, with an overall recognition rate of 21.60%. The method provided by the invention therefore normalizes illumination well and improves the accuracy of subsequent face recognition.
The above description covers only preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the invention, and these are also intended to fall within the scope of the invention.

Claims (4)

1. A face illumination normalization method based on a convolutional self-coding network, characterized by comprising the following steps:
step 1: generating face data samples under different illuminations by using the three-dimensional face data and the illumination model, and performing training of a convolution self-coding network by using the generated face data samples under different illuminations as input and using the corresponding three-dimensional face data as a supervision signal;
Step 2: inputting the original picture to be normalized into the convolutional self-coding network trained in step 1 to obtain a preliminarily normalized picture;
Step 3: blurring the original picture to be normalized with different Gaussian kernels to obtain a group of blurred samples of different blur levels, comparing each blurred sample with the preliminarily normalized picture, and taking the blurred sample whose blur level is closest to that of the preliminarily normalized picture as the reference picture;
Step 4: comparing the reference picture with the original picture to be normalized in the frequency domain to find the frequency band containing the high-frequency detail information lost in the normalization result output by the convolutional self-coding network, specifically comprising the following steps:
Step 4.1: using the DCT (discrete cosine transform), converting the original picture to be normalized I_org, the preliminarily normalized picture I_CAE, and the reference picture I_ref from the spatial domain to the frequency domain to obtain the corresponding DCT matrices C_org, C_CAE, and C_ref;
Step 4.2: mapping C_org, C_CAE, and C_ref to one-dimensional vectors c_org(ω), c_CAE(ω), and c_ref(ω), respectively, in order of frequency from low to high;
Step 4.3: in the frequency domain, for each frequency ω, calculating the error between the DCT coefficients of C_org and C_ref, the error being defined as:
[formula image in the original: definition of the DCT-coefficient error D(ω)]
Step 4.4: smoothing D(ω) and defining the boundary frequency between the high-frequency detail information and the low-frequency information as:
[formula image in the original: definition of the boundary frequency ω_b]
where α is a constant in the range 0.1 to 0.2, and (·)″ denotes the second derivative;
Step 4.5: defining the frequency band containing the high-frequency detail information as [ω_b, M×N], where M×N is the size of I_org;
Step 5: fusing, in the frequency domain, the high-frequency component of the original picture to be normalized with the low-frequency component of the preliminarily normalized picture, and transforming the result to the spatial domain to obtain the illumination normalization result of the original picture to be normalized.
2. The face illumination normalization method based on the convolutional self-coding network as claimed in claim 1, characterized in that: in step 3, the blurred samples are compared with the preliminarily normalized picture using the picture quality assessment index SSIM.
3. The face illumination normalization method based on the convolutional self-coding network as claimed in claim 2, characterized in that step 3 specifically comprises the following steps:
Step 3.1: varying the size and standard deviation of the Gaussian kernel, blurring the original picture to be normalized I_org of size M×N to obtain N blurred samples:
[formula image in the original: the set of N blurred samples]
Step 3.2: using SSIM, comparing the quality similarity between the preliminarily normalized picture I_CAE and the N blurred samples, and taking the blurred sample most similar to I_CAE as the reference sample I_ref.
4. The face illumination normalization method based on the convolutional self-coding network as claimed in claim 1, characterized in that step 5 specifically comprises the following steps:
Step 5.1: extracting the part of c_org(ω) lying in the band [ω_b, M×N], i.e. the high-frequency detail information, denoted c_orgH; extracting the part of c_CAE(ω) lying in the band [1, ω_b], denoted c_CAEL; and fusing c_orgH and c_CAEL in the frequency domain to obtain the one-dimensional vector c_out(ω) = [c_CAEL, c_orgH];
Step 5.2: restoring the one-dimensional vector c_out(ω) to two dimensions to obtain the corresponding DCT coefficient matrix C_out, then applying the inverse DCT to obtain the final illumination normalization result I_out.
CN202010102138.8A 2020-02-19 2020-02-19 Face illumination normalization method based on convolution self-coding network Active CN111292407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010102138.8A CN111292407B (en) 2020-02-19 2020-02-19 Face illumination normalization method based on convolution self-coding network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010102138.8A CN111292407B (en) 2020-02-19 2020-02-19 Face illumination normalization method based on convolution self-coding network

Publications (2)

Publication Number Publication Date
CN111292407A CN111292407A (en) 2020-06-16
CN111292407B (en) 2022-11-18

Family

ID=71026845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010102138.8A Active CN111292407B (en) 2020-02-19 2020-02-19 Face illumination normalization method based on convolution self-coding network

Country Status (1)

Country Link
CN (1) CN111292407B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10025976B1 (en) * 2016-12-28 2018-07-17 Konica Minolta Laboratory U.S.A., Inc. Data normalization for handwriting recognition

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109444604A (en) * 2018-12-13 2019-03-08 武汉理工大学 (Wuhan University of Technology) A DC/DC converter fault diagnosis method based on convolutional neural networks
CN109903292A (en) * 2019-01-24 2019-06-18 西安交通大学 (Xi'an Jiaotong University) A three-dimensional image segmentation method and system based on fully convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Unpaired face image illumination normalization method based on CycleGAN; Zeng Bi et al.; Journal of Guangdong University of Technology (广东工业大学学报); 2018-07-18 (No. 05); full text *

Also Published As

Publication number Publication date
CN111292407A (en) 2020-06-16

Similar Documents

Publication Publication Date Title
US9449432B2 (en) System and method for identifying faces in unconstrained media
Xie et al. Normalization of face illumination based on large-and small-scale features
CN108564061B (en) Image identification method and system based on two-dimensional pivot analysis
Vishwakarma et al. An efficient hybrid DWT-fuzzy filter in DCT domain based illumination normalization for face recognition
CN113592923B (en) Batch image registration method based on depth local feature matching
Hu et al. Single sample face recognition under varying illumination via QRCP decomposition
Liu et al. CAS: Correlation adaptive sparse modeling for image denoising
CN111126169B (en) Face recognition method and system based on orthogonalization graph regular nonnegative matrix factorization
Huang et al. Human emotion recognition based on face and facial expression detection using deep belief network under complicated backgrounds
Kaur et al. Illumination invariant face recognition
CN111292407B (en) Face illumination normalization method based on convolution self-coding network
CN111488811A (en) Face recognition method and device, terminal equipment and computer readable medium
Bychkov et al. Development of Information Technology for Person Identification in Video Stream.
Hamidi et al. Local selected features of dual‐tree complex wavelet transform for single sample face recognition
Krupiński et al. Binarization of degraded document images with generalized Gaussian distribution
CN112380966B (en) Monocular iris matching method based on feature point re-projection
CN110443255B (en) Image recognition method for image feature extraction
Li et al. Shadow determination and compensation for face recognition
CN110555792B (en) Image tampering blind detection method based on normalized histogram comprehensive feature vector
Iqbal et al. Illumination normalization of face images using layers extraction and histogram processing
Zou et al. An OCaNet model based on octave convolution and attention mechanism for iris recognition
Mansour Iris recognition using gauss laplace filter
Kuban et al. A NOVEL MODIFICATION OF SURF ALGORITHM FOR FINGERPRINT MATCHING.
Ma et al. Feature extraction method for lip-reading under variant lighting conditions
CN114049668B (en) Face recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant