CN111382601A - Illumination face image recognition preprocessing system and method based on a generative adversarial network model


Info

Publication number
CN111382601A
CN111382601A (application CN201811619546.XA)
Authority
CN
China
Prior art keywords
training
data set
generating
illumination
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811619546.XA
Other languages
Chinese (zh)
Inventor
王依萍
尹万春
赵威
酒若霖
Other inventors have requested that their names not be disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Zhongyuan Big Data Research Institute Co ltd
Original Assignee
Henan Zhongyuan Big Data Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Zhongyuan Big Data Research Institute Co ltd
Priority to CN201811619546.XA
Publication of CN111382601A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Abstract

An illumination face image recognition preprocessing system and method based on a generative adversarial network model: a training data set is collected and formed; the training data set is preprocessed according to the training objective of obtaining an optimal generative adversarial network model; a generative adversarial network model is constructed according to that training objective; the generative adversarial network model is trained on the training data set; and the trained optimal generative adversarial network model is used to verify the face recognition accuracy after illumination preprocessing on a test set. The method effectively avoids the defects of the prior art, in which shadows or halos appear under complex illumination conditions and the workload of the data preparation stage is increased.

Description

Illumination face image recognition preprocessing system and method based on a generative adversarial network model
Technical Field
The invention relates to the technical field of face recognition, and in particular to an illumination face image recognition preprocessing system and method based on a generative adversarial network model.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information. It covers a series of related technologies, also commonly called portrait recognition or facial recognition, in which a camera or video camera captures images or video streams containing faces, the faces in the images are automatically detected and tracked, and recognition is then performed on the detected faces.
At present, face recognition algorithms are widely applied, but their performance is still affected by many factors, such as illumination, viewing angle, occlusion and age. Among these factors, illumination change is one of the key factors limiting recognition performance: because of the 3D structure of the face, the shadows cast by illumination can strengthen or weaken the original facial features. At night in particular, facial shadows caused by insufficient light cause a sharp drop in the recognition rate, making it difficult for a system to meet practical requirements. Theory and experiments have also shown that the variation of the same individual under different illumination can be larger than the variation between different individuals under the same illumination. Therefore, a suitable method for preprocessing face images with over-strong or over-weak illumination would be of real value for improving the performance of face recognition algorithms.
In recent years, researchers have proposed various preprocessing algorithms to address the illumination problem in face recognition. Histogram equalization, edge maps and illumination-invariant features extracted with wavelet transforms can meet real-time requirements, but most of them cannot solve the shadow problem, rarely achieve an ideal effect, and depend too heavily on parameters. Illumination compensation dictionaries achieve a good illumination-handling effect, but they require training images captured under strictly controlled illumination. The single-scale Retinex algorithm (SSR), the multi-scale Retinex algorithm (MSR) and the self-quotient image (SQI) from Retinex theory are widely applied; their common advantage is that they do not require training samples under specific lighting conditions and achieve a high recognition rate in the absence of strong side lighting, but shadows, halos and similar artifacts can appear under complex lighting conditions. Deep-learning-based methods generally require increasing the illumination diversity of the data set so that the model can learn as many illumination characteristics as possible and thereby gain robustness to various illuminations, but this increases the workload of the data preparation stage.
Disclosure of Invention
In order to solve the above problems, the invention provides an illumination face image recognition preprocessing system and method based on a generative adversarial network model, which effectively avoid the defects of the prior art, namely the appearance of shadows or halos under complex illumination conditions and the increased workload of the data preparation stage.
To overcome the defects of the prior art, the invention provides the following illumination face image recognition preprocessing system and method based on a generative adversarial network model.
An illumination face image recognition preprocessing method based on a generative adversarial network model comprises the following steps:
Step 1: collecting and forming a training data set;
Step 2: preprocessing the training data set according to the training objective of obtaining an optimal generative adversarial network model;
Step 3: constructing a generative adversarial network model according to the training objective of obtaining an optimal generative adversarial network model;
Step 4: training the generative adversarial network model on the training data set;
Step 5: using the trained optimal generative adversarial network model to verify the face recognition accuracy after illumination preprocessing on a test set.
The method for collecting the training data set in step 1 comprises collecting a large amount of face image data and dividing it into two data sets: data set A, containing face pictures with uneven illumination, and data set B, containing face pictures with even illumination.
The training data set is an unpaired training set.
The method for preprocessing the training data set according to the training objective in step 2 comprises first detecting the face bounding box in each picture with a face detector, then detecting 5 key points of each face with a facial key point detector, and finally applying an affine transformation to each image according to the detected face box and key points to obtain face images of the same size.
The generative adversarial network model comprises two generators and two discriminators. The generators produce images and together constitute the generative model, and the generated images are driven ever closer to the images in the training data set; each discriminator continuously improves its discriminative power so that it can judge more accurately whether an image comes from the generator or from the data set, and the results constitute the discriminative model. The generative model and the discriminative model play an adversarial game against each other, so that the generative model acquires a stronger learning ability and the discriminative model acquires a stronger discriminative ability.
The generator comprises an encoder, a converter and a decoder; the encoder is communicatively connected to the converter, and the converter is communicatively connected to the decoder.
The encoding method of the encoder is as follows: features are first extracted from a face image taken from the training data set by a convolutional neural network, and the face image is compressed into 256 feature maps of size 64 × 64, which form the DA domain serving as the source data.
The conversion method of the converter is as follows: the feature vectors of an image in the DA domain are converted into feature vectors in the DB domain, serving as the target data, by combining the dissimilar features among the features extracted from the face image; the converter comprises 6 ResNet modules, each of which is a neural network layer formed of two convolutional layers and serves to retain the original image characteristics of the face image during conversion.
The decoding method of the decoder is as follows: deconvolution layers are used to restore low-level features from the feature vectors in the DB domain, and the face image finally obtained is the output generated image.
The discriminator randomly takes as input either a face image with uneven illumination, i.e. an original image from the training data set, or a generated image output by the generator, and tries to predict whether the input is an original image from the training data set or an output image of the generator. The discriminator is a convolutional network: it extracts features from the face image and then determines whether the extracted features belong to a particular class by adding a convolutional layer that produces a one-dimensional output.
The loss functions of the two discriminators are L_GAN(G, D_Y, X, Y) and L_GAN(F, D_X, Y, X), shown in equations (1) and (2) respectively:
L_GAN(G, D_Y, X, Y) = E_(y~P_data(y))[(D_Y(y) - 1)^2] + E_(x~P_data(x))[(1 - D_Y(G(x)))^2]    (1)
L_GAN(F, D_X, Y, X) = E_(x~P_data(x))[(D_X(x) - 1)^2] + E_(y~P_data(y))[(1 - D_X(F(y)))^2]    (2)
The loss functions of the two generators sum to L_cyc(G, F), shown in equation (3):
L_cyc(G, F) = E_(x~P_data(x))[‖F(G(x)) - x‖_1] + E_(y~P_data(y))[‖G(F(y)) - y‖_1]    (3)
The final loss function of the generative adversarial network is L(G, F, D_X, D_Y), shown in equation (4):
L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + L_cyc(G, F)    (4)
where L is a loss function, X and Y are the input face image data, G is the generator function X → Y, F is the generator function Y → X, D_Y is the discriminator function for domain Y, D_X is the discriminator function for domain X, E is the expectation, and P_data denotes the distribution of the real samples.
The method for training the generative adversarial network model on the training data set comprises: setting the training hyper-parameters to 5000 epochs, a batch size of 32 and a learning rate of 0.001; feeding the face images with uneven illumination from the data set into the network; computing the loss functions of the generators and the discriminators under the generative adversarial network; and training the generators and the discriminators with a stochastic gradient descent algorithm to finally obtain the optimal model.
An illumination face image recognition preprocessing system based on a generative adversarial network model comprises a collecting module, a preprocessing module, a constructing module, a training module and a verifying module;
the collecting module is used for collecting and forming a training data set;
the preprocessing module is used for preprocessing the training data set according to the training objective of obtaining an optimal generative adversarial network model;
the constructing module is used for constructing a generative adversarial network model according to the training objective of obtaining an optimal generative adversarial network model;
the training module is used for training the generative adversarial network model on the training data set;
the verifying module is used for verifying, with the trained optimal generative adversarial network model, the face recognition accuracy after illumination preprocessing on a test set;
the system further comprises a processor, a memory, an input/output device and a bus;
the processor, the memory and the input/output device are each connected to the bus;
the input/output device comprises an image acquisition device, which may be a camera, for collecting the face images that constitute the elements of the training data set;
the memory is used for storing the collecting module, the preprocessing module, the constructing module, the training module and the verifying module;
the processor is configured to execute the collecting module, the preprocessing module, the constructing module, the training module and the verifying module so as to perform the following steps:
Step 1: collecting and forming a training data set;
Step 2: preprocessing the training data set according to the training objective of obtaining an optimal generative adversarial network model;
Step 3: constructing a generative adversarial network model according to the training objective of obtaining an optimal generative adversarial network model;
Step 4: training the generative adversarial network model on the training data set;
Step 5: using the trained optimal generative adversarial network model to verify the face recognition accuracy after illumination preprocessing on a test set.
The invention has the beneficial effects that:
and processing a human face data set with uneven illumination by using the trained model, respectively identifying the unprocessed data set and the processed data set by using a human face identification model, and comparing the identification accuracy before and after illumination processing. Under the condition of one thousandth of error identification rate, the identification precision before illumination processing is 44.1%, and the identification precision after illumination processing by using the method of the invention is 56.7%, so that the experimental result shows that the image processed by generating the countermeasure model is more uniformly illuminated, and the identification precision is greatly improved.
Drawings
FIG. 1 is a flow chart of the illumination face image recognition preprocessing method based on a generative adversarial network model according to the invention.
FIG. 2 is a block diagram of a generator in the generative adversarial network of the invention.
FIG. 3 is a block diagram of a discriminator in the generative adversarial network of the invention.
Detailed Description
The invention will be further described with reference to the following figures and examples.
As shown in Figs. 1-3, the illumination face image recognition preprocessing method based on a generative adversarial network model comprises the following steps:
Step 1: collecting and forming a training data set;
In this step, since a large amount of training data is required for the subsequent training of the generative adversarial network model, a large amount of face image data needs to be collected and divided into two data sets: data set A, containing face pictures with uneven illumination, and data set B, containing face pictures with even illumination.
Step 2: preprocessing the training data set according to the training objective of obtaining an optimal generative adversarial network model;
The illumination face image recognition preprocessing method based on a generative adversarial network model trains the model on an unpaired training set. Paired pictures would amount to filtering the features in advance, so that the model could easily learn the part that needs to be converted; unpaired face pictures, by contrast, require a large amount of data to express the features sufficiently, otherwise the trained generator will introduce spurious content from the training set into the generated result. Therefore, before training, the training data set needs to be preprocessed: first the face bounding box in each picture is detected with a face detector, then 5 key points of each face are detected with a facial key point detector, and finally an affine transformation is applied to each image according to the detected face box and key points to obtain face images of the same size.
Step 3: constructing a generative adversarial network model according to the training objective of obtaining an optimal generative adversarial network model;
Step 4: training the generative adversarial network model on the training data set;
Step 5: using the trained optimal generative adversarial network model to verify the face recognition accuracy after illumination preprocessing on a test set.
The method for collecting the training data set in step 1 comprises collecting a large amount of face image data and dividing it into two data sets: data set A, containing face pictures with uneven illumination, and data set B, containing face pictures with even illumination.
The training data set is an unpaired training set.
The method for preprocessing the training data set according to the training objective in step 2 comprises first detecting the face bounding box in each picture with a face detector, then detecting 5 key points of each face with a facial key point detector, and finally applying an affine transformation to each image according to the detected face box and key points to obtain face images of the same size.
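The alignment step described above can be sketched briefly in code. The face detector and the 5-point key point detector are left as hypothetical callables (detect_face_box, detect_landmarks), since the text does not name specific detectors; the canonical landmark coordinates and the 256 × 256 output size are likewise assumptions, and only the affine warp uses a concrete library call (OpenCV).

```python
import cv2
import numpy as np

# Canonical positions of the 5 key points (two eyes, nose tip, two mouth corners)
# in a 256x256 aligned face; these reference coordinates are assumed, not specified.
REFERENCE_5PTS = np.float32([
    [89.3, 98.6], [166.7, 98.6],    # left eye, right eye
    [128.0, 140.7],                 # nose tip
    [96.9, 180.4], [159.1, 180.4],  # left and right mouth corners
])

def align_face(image, detect_face_box, detect_landmarks, size=256):
    """Detect the face box and 5 key points, then warp the face to a fixed size."""
    box = detect_face_box(image)                    # hypothetical face detector
    pts = np.float32(detect_landmarks(image, box))  # hypothetical 5-point detector
    # Estimate the affine transform mapping the detected points onto the
    # canonical layout, then warp so that every face image has the same size.
    matrix, _ = cv2.estimateAffinePartial2D(pts, REFERENCE_5PTS)
    return cv2.warpAffine(image, matrix, (size, size), flags=cv2.INTER_LINEAR)
```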
The generative adversarial network model comprises two generators and two discriminators. The generators produce images and together constitute the generative model, and the generated images are driven ever closer to the images in the training data set; each discriminator continuously improves its discriminative power so that it can judge more accurately whether an image comes from the generator or from the data set, and the results constitute the discriminative model. The generative model and the discriminative model play an adversarial game against each other, so that the generative model acquires a stronger learning ability and the discriminative model acquires a stronger discriminative ability.
The generator comprises an encoder, a converter and a decoder; the encoder is communicatively connected to the converter, and the converter is communicatively connected to the decoder.
The encoding method of the encoder is as follows: features are first extracted from a face image taken from the training data set by a convolutional neural network, and the face image is compressed into 256 feature maps of size 64 × 64, which form the DA domain serving as the source data.
The conversion method of the converter is as follows: the feature vectors of an image in the DA domain are converted into feature vectors in the DB domain, serving as the target data, by combining the dissimilar features among the features extracted from the face image; the converter comprises 6 ResNet modules, each of which is a neural network layer formed of two convolutional layers and serves to retain the original image characteristics of the face image during conversion.
The decoding method of the decoder is as follows: deconvolution layers are used to restore low-level features from the feature vectors in the DB domain, and the face image finally obtained is the output generated image.
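The generator just described can be sketched in PyTorch as an encoder that compresses the face into 256 feature maps of size 64 × 64, six two-convolution residual conversion blocks, and a deconvolution decoder. This is a minimal sketch: the normalization layers, activation functions and kernel sizes are assumptions not fixed by the text.

```python
import torch
import torch.nn as nn

class ResnetBlock(nn.Module):
    """Conversion block: two convolutional layers with a skip connection, so the
    original face features are retained while the illumination style is translated."""
    def __init__(self, channels=256):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.InstanceNorm2d(channels), nn.ReLU(True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)

class Generator(nn.Module):
    """Encoder -> 6 residual conversion blocks -> decoder (DA domain to DB domain)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # a 256x256 face is compressed to 256 x 64 x 64 features
            nn.Conv2d(3, 64, 7, padding=3), nn.InstanceNorm2d(64), nn.ReLU(True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.InstanceNorm2d(128), nn.ReLU(True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.InstanceNorm2d(256), nn.ReLU(True),
        )
        self.converter = nn.Sequential(*[ResnetBlock(256) for _ in range(6)])
        self.decoder = nn.Sequential(  # deconvolution layers restore the full-resolution image
            nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(64), nn.ReLU(True),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.converter(self.encoder(x)))
```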
The discriminator randomly takes as input either a face image with uneven illumination, i.e. an original image from the training data set, or a generated image output by the generator, and tries to predict whether the input is an original image from the training data set or an output image of the generator. The discriminator is a convolutional network: it extracts features from the face image and then determines whether the extracted features belong to a particular class by adding a convolutional layer that produces a one-dimensional output.
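A matching discriminator sketch, continuing the imports of the previous snippet: a plain convolutional feature extractor finished by a convolution with a single output channel that scores whether the input is an original or a generated face. The depths and strides are assumptions.

```python
class Discriminator(nn.Module):
    """Convolutional discriminator: feature extraction followed by a final
    convolution with one output channel (the one-dimensional real/fake score map)."""
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.InstanceNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.InstanceNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 1, 4, padding=1),  # single-channel output
        )

    def forward(self, x):
        return self.model(x)
```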
The loss functions of the two discriminators are L_GAN(G, D_Y, X, Y) and L_GAN(F, D_X, Y, X), shown in equations (1) and (2) respectively:
L_GAN(G, D_Y, X, Y) = E_(y~P_data(y))[(D_Y(y) - 1)^2] + E_(x~P_data(x))[(1 - D_Y(G(x)))^2]    (1)
L_GAN(F, D_X, Y, X) = E_(x~P_data(x))[(D_X(x) - 1)^2] + E_(y~P_data(y))[(1 - D_X(F(y)))^2]    (2)
The loss functions of the two generators sum to L_cyc(G, F), shown in equation (3):
L_cyc(G, F) = E_(x~P_data(x))[‖F(G(x)) - x‖_1] + E_(y~P_data(y))[‖G(F(y)) - y‖_1]    (3)
The final loss function of the generative adversarial network is L(G, F, D_X, D_Y), shown in equation (4):
L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + L_cyc(G, F)    (4)
where L is a loss function, X and Y are the input face image data, G is the generator function X → Y, F is the generator function Y → X, D_Y is the discriminator function for domain Y, D_X is the discriminator function for domain X, E is the expectation, and P_data denotes the distribution of the real samples.
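Equations (1)-(4) translate directly into code against the generator and discriminator sketches above. The helpers below simply restate the formulas and are a sketch rather than the patented implementation; the expectations become means over a batch.

```python
def gan_loss(d_real, d_generated):
    # Equations (1)/(2): E[(D(real) - 1)^2] + E[(1 - D(generated))^2]
    return ((d_real - 1) ** 2).mean() + ((1 - d_generated) ** 2).mean()

def cycle_loss(G, F, x, y):
    # Equation (3): L1 error after translating each image through both generators
    return (F(G(x)) - x).abs().mean() + (G(F(y)) - y).abs().mean()

def total_loss(G, F, D_X, D_Y, x, y):
    # Equation (4): the two adversarial losses plus the cycle-consistency loss
    return (gan_loss(D_Y(y), D_Y(G(x)))
            + gan_loss(D_X(x), D_X(F(y)))
            + cycle_loss(G, F, x, y))
```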
The method for training the generative adversarial network model on the training data set comprises: setting the training hyper-parameters to 5000 epochs, a batch size of 32 and a learning rate of 0.001; feeding the face images with uneven illumination from the data set into the network; computing the loss functions of the generators and the discriminators under the generative adversarial network; and training the generators and the discriminators with a stochastic gradient descent algorithm to finally obtain the optimal model.
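A sketch of that training procedure, reusing cycle_loss from the previous snippet and the stated hyper-parameters (5000 rounds, batch size 32 configured in the data loaders, learning rate 0.001, stochastic gradient descent). The alternating generator/discriminator updates, the assumption that the loaders yield batches of image tensors, and the convention of pushing generated samples toward 0 in the discriminator step are standard least-squares-GAN choices rather than details given in the text.

```python
import itertools
import torch

def train(G, F, D_X, D_Y, loader_uneven, loader_even, epochs=5000, lr=1e-3):
    # Stated hyper-parameters: 5000 rounds, batch size 32 (set in the loaders), lr = 0.001, SGD.
    opt_G = torch.optim.SGD(itertools.chain(G.parameters(), F.parameters()), lr=lr)
    opt_D = torch.optim.SGD(itertools.chain(D_X.parameters(), D_Y.parameters()), lr=lr)
    for epoch in range(epochs):
        for x, y in zip(loader_uneven, loader_even):  # x: uneven illumination, y: even illumination
            # Generator update: only the generated-image terms of equations (1)-(2)
            # affect the generators, plus the cycle-consistency term of equation (3).
            opt_G.zero_grad()
            loss_G = (((D_Y(G(x)) - 1) ** 2).mean() + ((D_X(F(y)) - 1) ** 2).mean()
                      + cycle_loss(G, F, x, y))
            loss_G.backward()
            opt_G.step()
            # Discriminator update: score real faces toward 1 and generated faces toward 0.
            opt_D.zero_grad()
            loss_D = (((D_Y(y) - 1) ** 2).mean() + (D_Y(G(x).detach()) ** 2).mean()
                      + ((D_X(x) - 1) ** 2).mean() + (D_X(F(y).detach()) ** 2).mean())
            loss_D.backward()
            opt_D.step()
    return G, F
```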
An illumination face image recognition preprocessing system based on a generative adversarial network model comprises a collecting module, a preprocessing module, a constructing module, a training module and a verifying module;
the collecting module is used for collecting and forming a training data set;
the preprocessing module is used for preprocessing the training data set according to the training objective of obtaining an optimal generative adversarial network model;
the constructing module is used for constructing a generative adversarial network model according to the training objective of obtaining an optimal generative adversarial network model;
the training module is used for training the generative adversarial network model on the training data set;
the verifying module is used for verifying, with the trained optimal generative adversarial network model, the face recognition accuracy after illumination preprocessing on a test set;
the system further comprises a processor, a memory, an input/output device and a bus;
the processor, the memory and the input/output device are each connected to the bus;
the input/output device comprises an image acquisition device, which may be a camera, for collecting the face images that constitute the elements of the training data set;
the memory is used for storing the collecting module, the preprocessing module, the constructing module, the training module and the verifying module;
the processor is configured to execute the collecting module, the preprocessing module, the constructing module, the training module and the verifying module so as to perform the following steps:
Step 1: collecting and forming a training data set;
Step 2: preprocessing the training data set according to the training objective of obtaining an optimal generative adversarial network model;
Step 3: constructing a generative adversarial network model according to the training objective of obtaining an optimal generative adversarial network model;
Step 4: training the generative adversarial network model on the training data set;
Step 5: using the trained optimal generative adversarial network model to verify the face recognition accuracy after illumination preprocessing on a test set.
In summary, the present embodiment discloses an illumination face image recognition preprocessing method based on a generative adversarial network model, which can significantly improve the illumination quality without changing the image structure and characteristics, so that the accuracy of subsequent face recognition is greatly improved.
The invention has been described by way of embodiments, and those skilled in the art will understand that the present disclosure is not limited to the embodiments described above; various changes, modifications and substitutions may be made without departing from the scope of the invention.

Claims (9)

1. An illumination face image recognition preprocessing method based on a generative adversarial network model, characterized by comprising the following steps:
Step 1: collecting and forming a training data set;
Step 2: preprocessing the training data set according to the training objective of obtaining an optimal generative adversarial network model;
Step 3: constructing a generative adversarial network model according to the training objective of obtaining an optimal generative adversarial network model;
Step 4: training the generative adversarial network model on the training data set;
Step 5: using the trained optimal generative adversarial network model to verify the face recognition accuracy after illumination preprocessing on a test set.
2. The illumination face image recognition preprocessing method based on a generative adversarial network model according to claim 1, characterized in that the method for collecting the training data set in step 1 comprises collecting a large amount of face image data and dividing it into two data sets: data set A, containing face pictures with uneven illumination, and data set B, containing face pictures with even illumination.
3. The illumination face image recognition preprocessing method based on a generative adversarial network model according to claim 1, characterized in that the training data set is an unpaired training set.
4. The illumination face image recognition preprocessing method based on a generative adversarial network model according to claim 1, characterized in that the method for preprocessing the training data set according to the training objective in step 2 comprises first detecting the face bounding box in each picture with a face detector, then detecting 5 key points of each face with a facial key point detector, and finally applying an affine transformation to each image according to the detected face box and key points to obtain face images of the same size.
5. The illumination face image recognition preprocessing method based on a generative adversarial network model according to claim 1, characterized in that the generative adversarial network model comprises two generators and two discriminators, wherein the generators produce images and together constitute the generative model, and the generated images are driven ever closer to the images in the training data set; each discriminator continuously improves its discriminative power so that it can judge more accurately whether an image comes from the generator or from the data set, and the results constitute the discriminative model; the generative model and the discriminative model play an adversarial game against each other, so that the generative model acquires a stronger learning ability and the discriminative model acquires a stronger discriminative ability.
6. The illumination face image recognition preprocessing method based on a generative adversarial network model according to claim 5, characterized in that the generator comprises an encoder, a converter and a decoder, the encoder being communicatively connected to the converter and the converter being communicatively connected to the decoder;
the encoding method of the encoder is as follows: features are first extracted from a face image taken from the training data set by a convolutional neural network, and the face image is compressed into 256 feature maps of size 64 × 64, which form the DA domain serving as the source data;
the conversion method of the converter is as follows: the feature vectors of an image in the DA domain are converted into feature vectors in the DB domain, serving as the target data, by combining the dissimilar features among the features extracted from the face image; the converter comprises 6 ResNet modules, each of which is a neural network layer formed of two convolutional layers and serves to retain the original image characteristics of the face image during conversion;
the decoding method of the decoder is as follows: deconvolution layers are used to restore low-level features from the feature vectors in the DB domain, and the face image finally obtained is the output generated image;
the discriminator randomly takes as input either a face image with uneven illumination, i.e. an original image from the training data set, or a generated image output by the generator, and tries to predict whether the input is an original image from the training data set or an output image of the generator; the discriminator is a convolutional network: it extracts features from the face image and then determines whether the extracted features belong to a particular class by adding a convolutional layer that produces a one-dimensional output.
7. The illumination face image recognition preprocessing method based on a generative adversarial network model according to claim 6, characterized in that the loss functions of the two discriminators are L_GAN(G, D_Y, X, Y) and L_GAN(F, D_X, Y, X), shown in equations (1) and (2) respectively:
L_GAN(G, D_Y, X, Y) = E_(y~P_data(y))[(D_Y(y) - 1)^2] + E_(x~P_data(x))[(1 - D_Y(G(x)))^2]    (1)
L_GAN(F, D_X, Y, X) = E_(x~P_data(x))[(D_X(x) - 1)^2] + E_(y~P_data(y))[(1 - D_X(F(y)))^2]    (2)
the loss functions of the two generators sum to L_cyc(G, F), shown in equation (3):
L_cyc(G, F) = E_(x~P_data(x))[‖F(G(x)) - x‖_1] + E_(y~P_data(y))[‖G(F(y)) - y‖_1]    (3)
the final loss function of the generative adversarial network is L(G, F, D_X, D_Y), shown in equation (4):
L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + L_cyc(G, F)    (4)
where L is a loss function, X and Y are the input face image data, G is the generator function X → Y, F is the generator function Y → X, D_Y is the discriminator function for domain Y, D_X is the discriminator function for domain X, E is the expectation, and P_data denotes the distribution of the real samples.
8. The illumination face image recognition preprocessing method based on a generative adversarial network model according to claim 1, characterized in that the method for training the generative adversarial network model on the training data set comprises: setting the training hyper-parameters to 5000 epochs, a batch size of 32 and a learning rate of 0.001; feeding the face images with uneven illumination from the data set into the network; computing the loss functions of the generators and the discriminators under the generative adversarial network; and training the generators and the discriminators with a stochastic gradient descent algorithm to finally obtain the optimal model.
9. An illumination face image recognition preprocessing system based on a generative adversarial network model, characterized by comprising a collecting module, a preprocessing module, a constructing module, a training module and a verifying module;
the collecting module is used for collecting and forming a training data set;
the preprocessing module is used for preprocessing the training data set according to the training objective of obtaining an optimal generative adversarial network model;
the constructing module is used for constructing a generative adversarial network model according to the training objective of obtaining an optimal generative adversarial network model;
the training module is used for training the generative adversarial network model on the training data set;
the verifying module is used for verifying, with the trained optimal generative adversarial network model, the face recognition accuracy after illumination preprocessing on a test set;
the system further comprises a processor, a memory, an input/output device and a bus;
the processor, the memory and the input/output device are each connected to the bus;
the input/output device comprises an image acquisition device, which may be a camera, for collecting the face images that constitute the elements of the training data set;
the memory is used for storing the collecting module, the preprocessing module, the constructing module, the training module and the verifying module;
the processor is configured to execute the collecting module, the preprocessing module, the constructing module, the training module and the verifying module so as to perform the following steps:
Step 1: collecting and forming a training data set;
Step 2: preprocessing the training data set according to the training objective of obtaining an optimal generative adversarial network model;
Step 3: constructing a generative adversarial network model according to the training objective of obtaining an optimal generative adversarial network model;
Step 4: training the generative adversarial network model on the training data set;
Step 5: using the trained optimal generative adversarial network model to verify the face recognition accuracy after illumination preprocessing on a test set.
CN201811619546.XA 2018-12-28 2018-12-28 Illumination face image recognition preprocessing system and method based on a generative adversarial network model Pending CN111382601A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811619546.XA CN111382601A (en) 2018-12-28 2018-12-28 Illumination face image recognition preprocessing system and method based on a generative adversarial network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811619546.XA CN111382601A (en) 2018-12-28 2018-12-28 Illumination face image recognition preprocessing system and method based on a generative adversarial network model

Publications (1)

Publication Number Publication Date
CN111382601A true CN111382601A (en) 2020-07-07

Family

ID=71214907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811619546.XA Pending CN111382601A (en) 2018-12-28 2018-12-28 Illumination face image recognition preprocessing system and method for generating confrontation network model

Country Status (1)

Country Link
CN (1) CN111382601A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018068416A1 (en) * 2016-10-14 2018-04-19 广州视源电子科技股份有限公司 Neural network-based multilayer image feature extraction modeling method and device and image recognition method and device
CN108205659A (en) * 2017-11-30 2018-06-26 深圳市深网视界科技有限公司 Face occluder removes and its method, equipment and the medium of model construction
CN108446667A (en) * 2018-04-04 2018-08-24 北京航空航天大学 Based on the facial expression recognizing method and device for generating confrontation network data enhancing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曾碧等: "基于CycleGAN的非配对人脸图片光照归一化方法" (Zeng Bi et al.: "Illumination normalization method for unpaired face images based on CycleGAN") *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310711A (en) * 2020-03-03 2020-06-19 广东工业大学 Face image recognition method and system based on two-dimensional singular spectrum analysis and EMD fusion
CN111985499A (en) * 2020-07-23 2020-11-24 东南大学 High-precision bridge apparent disease identification method based on computer vision
CN113537057A (en) * 2021-07-14 2021-10-22 山西中医药大学 Facial acupuncture point automatic positioning detection system and method based on improved cycleGAN
CN113537057B (en) * 2021-07-14 2022-11-01 山西中医药大学 Facial acupuncture point automatic positioning detection system and method based on improved cycleGAN
CN115936985A (en) * 2022-12-01 2023-04-07 华中光电技术研究所(中国船舶集团有限公司第七一七研究所) Image super-resolution reconstruction method based on high-order degradation cycle generation countermeasure network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination