CN110765843B - Face verification method, device, computer equipment and storage medium - Google Patents
- Publication number
- CN110765843B (application number CN201910827470.8A)
- Authority
- CN
- China
- Prior art keywords
- reticulate
- reticulate pattern
- training
- loss value
- face image
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a face verification method, apparatus, computer device, and storage medium, and relates to the technical field of artificial intelligence. The face verification method comprises the following steps: acquiring a textured face image (a face image with a reticulate pattern) and a non-textured face image to be compared; extracting reticulate pattern position information from the textured face image; inputting the reticulate pattern position information and the textured face image into a descreening model to obtain a descreened face image; inputting the descreened face image into a feature extraction model to obtain descreened face image features; inputting the non-textured face image to be compared into the feature extraction model to obtain face image features to be compared; and calculating the similarity between the descreened face image features and the face image features to be compared, and determining that face verification succeeds if the similarity is greater than a preset threshold. With this face verification method, the reticulate pattern in the textured face image can be removed, and the accuracy of face verification is effectively improved.
Description
[ field of technology ]
The present invention relates to the field of artificial intelligence technologies, and in particular, to a face verification method, a face verification device, a computer device, and a storage medium.
[ background Art ]
With the rapid development of internet technology, securing user accounts through face verification has become increasingly important. Face verification is a branch of the face recognition field: a face verification algorithm can fully automatically compare two face photos and judge whether they belong to the same person. This approach can be used for user identity verification in many scenarios, such as internet finance.
At present, in order to protect the privacy of citizens, a reticulate (mesh) watermark is added to a photo before it is released externally, yielding a textured photo, such as the certificate photos on identity cards, social security cards, and entry-exit permits. To perform face verification on such a textured photo, a professional is currently required to remove the reticulate pattern with a denoising algorithm, then repair the descreened photo, and so on, before the photo can finally be verified.
This existing descreening workflow places high professional demands on the operator, so the accuracy of face verification is low.
[ invention ]
In view of the above, embodiments of the present invention provide a face verification method, apparatus, computer device, and storage medium, so as to solve the problem of low accuracy when face verification is performed on existing textured (reticulate-patterned) images.
In a first aspect, an embodiment of the present invention provides a face verification method, including:
acquiring a facial image with reticulate patterns and a facial image to be compared without reticulate patterns;
extracting reticulate pattern position information in the reticulate pattern face image by adopting a reticulate pattern extraction model, wherein the reticulate pattern extraction model is obtained based on pixel difference training;
inputting the reticulate pattern position information and the reticulate pattern face image into a reticulate pattern removing model to obtain a reticulate pattern removing face image, wherein the reticulate pattern removing model is obtained by adopting a generating type countermeasure network training;
inputting the anilox-removed face image into a feature extraction model to obtain anilox-removed face image features, wherein the feature extraction model is obtained by adopting convolutional neural network training;
inputting the facial image to be compared without reticulate patterns into the feature extraction model to obtain the features of the facial image to be compared;
and calculating the similarity between the anilox-removed face image features and the face image features to be compared, and determining that the face verification succeeds if the similarity is greater than a preset threshold.
In accordance with the above aspect and any one of the possible implementations, there is further provided an implementation in which, before the extracting of the reticulate pattern position information in the textured face image using the reticulate pattern extraction model, the method further includes:
Acquiring a reticulate pattern training sample set, wherein each reticulate pattern training sample in the reticulate pattern training sample set comprises a reticulate pattern face training image of the same person and a corresponding non-reticulate pattern face training image;
reading pixel values in the anilox face training image and the non-anilox face training image, and normalizing the pixel values to be within a [0,1] interval;
adopting the normalized pixel value of the face training image with reticulate patterns, correspondingly subtracting the normalized pixel value of the face training image without reticulate patterns based on the pixel distribution position, taking the absolute value of the difference value as a pixel difference value, taking the part of the pixel difference value smaller than a preset critical value as 0 and taking the part not smaller than the preset critical value as 1 to obtain tag reticulate pattern position information;
calculating the loss generated in the training process of the deep neural network model through a loss function according to the output of the pre-acquired deep neural network model and the tag reticulate pattern position information, and updating the network parameters of the deep neural network model through this loss to obtain the reticulate pattern extraction model, wherein in the loss function n represents the total number of pixels, x_i represents the i-th pixel value output by the deep neural network model, and y_i represents the i-th pixel value of the tag reticulate pattern position information.
In accordance with the aspects and any possible implementation manner described above, there is further provided an implementation manner in which the generative adversarial network includes a generator and a discriminator, and before the inputting of the reticulate pattern position information and the textured face image into the descreening model, the method further includes:
obtaining a training sample with reticulate patterns and a training sample without reticulate patterns, wherein the number of the training samples with reticulate patterns is equal to that of the training samples without reticulate patterns;
extracting reticulate pattern position information of the reticulate pattern training sample by adopting the reticulate pattern extraction model;
inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of a generating type countermeasure network, generating an imitated human face image, and obtaining a first generation loss value according to the imitated human face image and the training sample without reticulate pattern;
inputting the simulated face image and the training sample without reticulate patterns into a discriminator of a generated type countermeasure network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is a loss value caused by the generator in the training process, and the second discrimination loss value is a loss value caused by the discriminator in the training process;
Carrying out arithmetic addition on the first generated loss value and the first discrimination loss value to obtain a second generated loss value, and updating the network parameters of the generator by adopting the second generated loss value;
and updating the network parameters of the discriminator by adopting the second discrimination loss value, and obtaining the descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
In accordance with the aspects and any possible implementation manner described above, there is further provided an implementation manner in which the generative adversarial network includes a generator and a discriminator, and before the inputting of the reticulate pattern position information and the textured face image into the descreening model, the method further includes:
obtaining a training sample with reticulate patterns and a training sample without reticulate patterns, wherein the number of the training samples with reticulate patterns is equal to that of the training samples without reticulate patterns;
extracting reticulate pattern position information of the reticulate pattern training sample by adopting the reticulate pattern extraction model;
inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of a generating type countermeasure network, generating an imitated human face image, and obtaining a first generation loss value according to the imitated human face image and the training sample without reticulate pattern;
Inputting the simulated face image and the training sample without reticulate patterns into a discriminator of a generated type countermeasure network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is the loss caused by the generator in the training process, and the second discrimination loss value is the loss caused by the discriminator in the training process;
inputting the simulated face image into the feature extraction model, and obtaining a simulation loss value according to the simulated face features extracted by the feature extraction model and the features of the non-reticulate-pattern training sample extracted by the feature extraction model;
carrying out arithmetic addition on the simulation loss value, the first generated loss value and the first discrimination loss value to obtain a third generated loss value;
carrying out arithmetic addition on the simulation loss value and the second discrimination loss value to obtain a third discrimination loss value;
updating network parameters of the generator according to the third generation loss value;
updating the network parameters of the discriminator according to the third discrimination loss value, and obtaining a descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
In the aspect and any possible implementation manner described above, there is further provided an implementation manner, where the inputting the descreened face image into a feature extraction model, obtains descreened face image features, and the method further includes:
initializing a convolutional neural network, wherein the weights initialized by the convolutional neural network satisfy a variance condition in which n_l represents the number of samples input at the l-th layer of the convolutional neural network, S() represents the variance operation, W_l represents the weights of the l-th layer of the convolutional neural network, ∀ denotes "for any", and l denotes the l-th layer of the convolutional neural network;
acquiring a training sample without reticulate patterns;
inputting the training sample without reticulate patterns into the initialized convolutional neural network to obtain a loss value generated in the training process;
and updating network parameters of the convolutional neural network according to the loss value generated in the training process to obtain a feature extraction model.
In a second aspect, an embodiment of the present invention provides a face verification apparatus, including:
the image to be compared acquisition module is used for acquiring a face image with a reticulate pattern and a face image to be compared without a reticulate pattern;
the reticulate pattern position information extraction module is used for extracting reticulate pattern position information in the facial image with reticulate patterns by adopting a reticulate pattern extraction model, wherein the reticulate pattern extraction model is obtained based on pixel difference training;
The anilox-removing face image acquisition module is used for inputting the reticulate pattern position information and the face image with the reticulate pattern into an anilox-removing model to obtain an anilox-removed face image, wherein the anilox-removing model is obtained by adopting generative adversarial network training;
the characteristic extraction module is used for inputting the anilox-removed face image into a characteristic extraction model to obtain the characteristic of the anilox-removed face image, wherein the characteristic extraction model is obtained by adopting convolutional neural network training;
the to-be-compared face image feature acquisition module is used for inputting the to-be-compared face image without reticulate patterns into the feature extraction model to obtain to-be-compared face image features;
and the verification module is used for calculating the similarity between the anilox-removed face image characteristics and the face image characteristics to be compared, and when the similarity is larger than a preset threshold, the face verification is successful.
In a third aspect, an embodiment of the present invention provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the face verification method described above when executing the computer program.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the face verification method described above.
In the embodiment of the invention, firstly, a textured face image and a non-textured face image to be compared are obtained, a textured extraction model is adopted to extract textured position information in the textured face image, the textured position information and the textured face image are input into a texture removal model to obtain a texture removal face image, and the texture of the textured face image can be accurately removed by utilizing the extracted textured position information according to the simulation function of a generated type countermeasure network to generate the texture removal face image; then, respectively extracting the characteristics of the anilox-removed face image and the face image characteristics of the face image to be compared without anilox by adopting a characteristic extraction model; and finally, confirming a face verification result by calculating the similarity between the anilox-removed face image characteristics and the face image characteristics to be compared. According to the embodiment of the invention, the reticulate patterns of the facial image with the reticulate patterns can be accurately removed, and the accuracy of facial verification is effectively improved.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a face verification method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a face verification apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a computer device in accordance with an embodiment of the present invention.
[ detailed description of the invention ]
For a better understanding of the technical solution of the present invention, the following detailed description of the embodiments of the present invention refers to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B both exist, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used to describe the preset ranges, etc. in the embodiments of the present invention, these preset ranges should not be limited to these terms. These terms are only used to distinguish one preset range from another. For example, a first preset range may also be referred to as a second preset range, and similarly, a second preset range may also be referred to as a first preset range without departing from the scope of embodiments of the present invention.
Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
Fig. 1 shows a flowchart of the face verification method in the present embodiment. The face verification method can be applied to a face verification system and used to verify textured face images during face verification. The face verification system can be deployed on computer equipment that performs human-computer interaction with a user, including but not limited to computers, smart phones, tablets, and the like. As shown in fig. 1, the face verification method includes the following steps:
s10: and acquiring the facial image with the reticulate pattern and the facial image to be compared without the reticulate pattern.
The face image with the reticulate pattern and the face image to be compared without the reticulate pattern are the two images used for face verification, i.e. it is judged whether they come from the same face.
S20: and extracting reticulate pattern position information in the reticulate pattern face image by adopting a reticulate pattern extraction model, wherein the reticulate pattern extraction model is obtained based on pixel difference training.
It can be appreciated that the textured face image and the non-textured face image to be compared cannot be directly compared and verified, because the reticulate pattern in the textured face image greatly interferes with the calculation of the similarity between the features of the two images.
In one embodiment, a reticulate pattern extraction model is specifically used to extract reticulate pattern position information in the reticulate pattern face image so as to remove interference caused by reticulate patterns in the reticulate pattern face image.
S30: inputting the reticulate pattern position information and the reticulate pattern face image into a reticulate pattern removing model to obtain the reticulate pattern removing face image, wherein the reticulate pattern removing model is obtained by adopting a generating type countermeasure network training.
A generative adversarial network (GAN, Generative Adversarial Networks) is a deep learning model and one of the methods for unsupervised learning on complex distributions. The generator and the discriminator of the model play a game against each other, and through learning the model produces output quite close to what is expected. It will be appreciated that a generative adversarial network in effect keeps updating and optimizing itself based on the game between the generator and the discriminator.
In an embodiment, the reticulate pattern position information and the textured face image are input into the descreening model. The descreening model, obtained by generative adversarial network training, can generate a simulated face image in which the reticulate pattern is removed according to the input reticulate pattern positions; it can therefore output a descreened face image with high fidelity to the original image, which improves the accuracy of face verification.
S40: inputting the anilox-removed face image into a feature extraction model to obtain the characteristics of the anilox-removed face image, wherein the feature extraction model is obtained by adopting convolutional neural network training.
S50: and inputting the facial image to be compared without reticulate patterns into a feature extraction model to obtain the features of the facial image to be compared.
It can be understood that, in order to verify whether the two images come from the same face, the feature extraction model obtained by convolutional neural network training is used to extract deep features of the images, which both ensures the accuracy of face verification and remarkably improves the efficiency of face verification.
S60: and calculating the similarity between the feature of the anilox-removed face image and the feature of the face image to be compared, and if the similarity is larger than a preset threshold, successfully verifying the face.
Specifically, various similarity comparison algorithms may be used for the similarity calculation. The one specifically adopted in this embodiment is cosine similarity, expressed as cos(A, B) = (A · B) / (‖A‖ ‖B‖), wherein A represents the descreened face image features (in vector form) and B represents the face image features to be compared. The cosine similarity comparison algorithm reflects how similar the descreened face image and the non-textured face image to be compared are in their spatial distribution, and therefore gives higher accuracy.
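By way of illustration, a minimal sketch of this similarity check is given below: it computes the cosine similarity between two feature vectors and compares it with a preset threshold. The function names and the default threshold value 0.8 are assumptions for illustration only; the embodiment does not fix a particular threshold value.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity cos(A, B) = (A . B) / (||A|| ||B||) between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_face(descreened_feat: np.ndarray, compared_feat: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Face verification succeeds when the similarity exceeds the preset threshold (0.8 is illustrative)."""
    return cosine_similarity(descreened_feat, compared_feat) > threshold
```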
Further, in step S20, before extracting the reticulate position information in the reticulate facial image by using the reticulate extraction model, the method further includes:
s21: and obtaining a reticulate pattern training sample set, wherein each reticulate pattern training sample in the reticulate pattern training sample set comprises a reticulate pattern face training image of the same person and a corresponding non-reticulate pattern face training image.
In an embodiment, 10000 or 5000 pairs of reticulate pattern training samples can be used as the reticulate pattern training sample set, each reticulate pattern training sample consisting of a textured face training image of a person and the corresponding non-textured face training image of the same person, the only difference between the two images being the presence of the reticulate pattern. All images have the same size; no particular reticulate pattern shape is required, and the pattern may have any shape.
S22: and reading pixel values in the textured face training image and the non-textured face training image, and normalizing the pixel values to be in the [0,1] interval.
It will be appreciated that an image comprises many pixels, and different images may use different ranges of pixel values, e.g. 2^8, 2^12 and 2^16 levels. Because the number of possible pixel values is large, the pixel values are normalized in the actual calculation so that every pixel value is compressed into the same interval, which simplifies the calculation process and improves face verification efficiency.
In one embodiment, the computer device may directly read the pixel values of the pixels in the texture training sample.
S23: and (3) adopting the normalized pixel value of the face training image with the reticulate pattern, correspondingly subtracting the normalized pixel value of the face training image without the reticulate pattern based on the pixel distribution position, taking the absolute value of the difference value to obtain a pixel difference value, taking the part of the pixel difference value which is smaller than a preset critical value as 0 and taking the part of the pixel difference value which is not smaller than the preset critical value as 1, and obtaining the reticulate pattern position information of the label.
It will be appreciated that an image is made up of pixels, each having its distributed location on the image. For an image of equal size, the corresponding pixel value subtraction represents the pixel value subtraction for the same distribution position of pixels in the respective image.
In one embodiment, the preset critical value may be specifically set to 0.25. Specifically, if the pixel difference value at a pixel distribution position is less than 0.25, it is taken as 0; in this case the pixel values of the textured and non-textured face training images at that position differ little, and the position can be considered to contain no reticulate pattern. Conversely, if the pixel difference value at a position is not less than 0.25, it is taken as 1; in this case the pixel values at that position differ greatly, and the position can be considered to contain a reticulate pattern. The positions whose pixel difference value is taken as 1 therefore mark where the reticulate pattern is, which yields the label reticulate pattern position information. This label reticulate pattern position information accords with the objective facts and can be used to train the deep neural network model to obtain the reticulate pattern extraction model.
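For concreteness, a minimal sketch of this label-generation step (normalizing pixel values to [0,1], taking the absolute pixel-wise difference, and thresholding at 0.25) is given below; the 8-bit grayscale input assumption and the function name are illustrative only.

```python
import numpy as np

def make_label_mask(textured: np.ndarray, clean: np.ndarray,
                    threshold: float = 0.25) -> np.ndarray:
    """Build the label reticulate-pattern position map from a textured/non-textured image pair.

    Both images are assumed to be 8-bit grayscale arrays of equal size.
    """
    t = textured.astype(np.float32) / 255.0        # normalize pixel values to [0, 1]
    c = clean.astype(np.float32) / 255.0
    diff = np.abs(t - c)                           # pixel difference at each distribution position
    return (diff >= threshold).astype(np.float32)  # 1 = reticulate pattern present, 0 = absent
```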
S24: according to the output of the depth neural network model and label reticulate pattern position information which are acquired in advance, calculating the loss generated in the training process of the depth neural network model through a loss function, updating network parameters of the depth neural network model through the loss, and obtaining a reticulate pattern extraction model, wherein the loss function is expressed as Wherein n represents the total number of pixels, x i An ith pixel value, y, representing the output of the deep neural network model i Representing the ith pixel value at the label reticulate location.
The deep neural network model may be a model whose initial network parameters have been obtained in advance through transfer learning, so that it already has a preliminary reticulate pattern extraction capability. Updating the network parameters of such a model speeds up network training, so the reticulate pattern extraction model can be obtained more quickly.
In one embodiment, the loss generated in the training process of the model is calculated according to the output of the deep neural network model and the label reticulate pattern position information, so that the reticulate pattern extraction model is obtained by reversely updating the parameters of the deep neural network model according to the loss.
It will be appreciated that the reticulate pattern extraction model can extract reticulate patterns of arbitrary shape, because the model training is based on pixel differences: it does not depend on the shape of the reticulate pattern positions, only on differences at the pixel value level. The user therefore does not have to train a separate reticulate pattern extraction model for each reticulate pattern shape.
In steps S21-S24, a specific embodiment of training the reticulate pattern extraction model is provided. The reticulate pattern extraction model is obtained based on pixel difference training; it can be trained on reticulate pattern training samples with different reticulate pattern shapes, which ensures the extraction precision of the reticulate pattern extraction model.
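The sketch below shows one possible training step for the reticulate pattern extraction model corresponding to steps S21-S24. The patent states only that a loss is computed between the model output and the label reticulate pattern position information (the exact loss formula is given only as a formula image in the original); the binary cross-entropy used here, and the PyTorch model and optimizer, are assumptions for illustration.

```python
import torch
import torch.nn as nn

def train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
               textured_batch: torch.Tensor, label_mask_batch: torch.Tensor) -> float:
    """One update of the reticulate pattern extraction network.

    textured_batch:   (N, 1, H, W) textured face training images, values in [0, 1]
    label_mask_batch: (N, 1, H, W) label reticulate-pattern position maps (0 or 1)
    The model is assumed to end with a sigmoid so its output x_i lies in [0, 1].
    """
    optimizer.zero_grad()
    predicted_mask = model(textured_batch)   # x_i in the text: the model's per-pixel output
    # Assumed pixel-wise loss between the output x_i and the label positions y_i;
    # the exact loss function of the embodiment is not reproduced here.
    loss = nn.functional.binary_cross_entropy(predicted_mask, label_mask_batch)
    loss.backward()                          # update network parameters through the loss
    optimizer.step()
    return loss.item()
```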
Further, the generative adversarial network includes a generator and a discriminator. Before step S30, that is, before inputting the reticulate pattern position information and the textured face image into the descreening model to obtain the descreened face image, the method further includes:
s311: and obtaining a reticulate training sample and a non-reticulate training sample with the same sample number.
Setting the numbers of the two kinds of training samples to a 1:1 ratio can prevent the descreening model from overfitting during training and effectively improves the generalization capability of the descreening model.
S312: and extracting reticulate pattern position information of the reticulate pattern training sample by adopting a reticulate pattern extraction model.
It should be noted that the texture position information of the textured training sample may be different for different textured training samples (e.g., textured face images of two different people), i.e., the texture type of the textured training sample may be different.
It can be appreciated that, since the descreening function is realized on the basis of pixel difference values, for any textured training sample the features learned during training, namely pixel-level differences, are not affected by the geometric distribution of the reticulate pattern in that training sample.
S313: inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of a generating type countermeasure network, generating an imitated human face image, and obtaining a first generation loss value according to the imitated human face image and the reticulate pattern training sample.
The network parameters of the generator are updated based on the first generation loss value. Wherein the first generated loss value may be determined according to a user-predefined loss function.
It will be appreciated that in continuous generator training, the final generator will be able to learn how to remove deep features of the texture, and may output simulated face images from which the texture of the texture training sample is removed, based on the input texture position information and the texture training sample.
S314: the method comprises the steps of inputting an imitated human face image and a training sample without reticulate patterns into a discriminator of a generated type countermeasure network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is a loss value caused by a generator in a training process, and the second discrimination loss value is a loss value caused by the discriminator in the training process.
The discriminator is a discriminating model for checking the simulated face image output by the generator. According to the network characteristics of the generated countermeasure network, the discriminator can effectively update the network parameters of the generator, so that the simulated face image output by the generator is more similar to the comparison sample.
Similarly, the first discrimination loss value and the second discrimination loss value can be obtained according to the obtained discrimination result and a preset label (comparison result), and the loss value generated in the discrimination process can be determined according to a loss function predefined by a user.
The first discrimination loss value is the loss value attributable to the generator in the training process, and the second discrimination loss value is the loss value attributable to the discriminator itself in the training process. It will be appreciated that, because the simulated face image is produced by the generator, part of the loss observed at the discriminator is caused by the generator, and this part is the first discrimination loss value.
S315: and arithmetically adding the first generated loss value and the first discrimination loss value to obtain a second generated loss value, and updating the network parameters of the generator by adopting the second generated loss value.
It will be appreciated that the descreening accuracy of the descreening model generated can be improved by adding the loss values associated with the generator and updating the network parameters of the generator together during the training process of the descreening model.
S316: and updating the network parameters of the discriminator by adopting the second discrimination loss value, and obtaining a descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
In steps S311-S316, a specific implementation manner of obtaining a descreening model through training is provided, where network parameters can be updated according to a first generated loss value, a first discrimination loss value and a second discrimination loss value generated in the training process, so as to obtain the descreening model with higher accuracy.
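One possible reading of steps S311-S316 as a single training iteration is sketched below. The concrete loss choices (an L1 reconstruction loss for the first generation loss value and binary cross-entropy adversarial losses for the discrimination loss values) and all function and variable names are assumptions; the patent specifies only how the loss values are combined and which network each combined value updates.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()
l1 = nn.L1Loss()

def gan_train_step(generator, discriminator, g_opt, d_opt,
                   textured, mask, clean):
    """One descreening-model training iteration (textured, mask, clean: image batches).

    The discriminator is assumed to output a sigmoid score of shape (N, 1).
    """
    real_label = torch.ones(clean.size(0), 1)
    fake_label = torch.zeros(clean.size(0), 1)

    # Generator: reticulate position info + textured image -> simulated face image.
    simulated = generator(textured, mask)
    first_gen_loss = l1(simulated, clean)    # first generation loss value (assumed L1 against the clean sample)

    # Discriminator judges the simulated image and the clean (non-reticulate) sample.
    first_disc_loss = bce(discriminator(simulated), real_label)         # loss attributed to the generator
    second_disc_loss = (bce(discriminator(simulated.detach()), fake_label)
                        + bce(discriminator(clean), real_label))        # loss attributed to the discriminator

    # Second generation loss value = first generation loss + first discrimination loss; updates the generator.
    g_opt.zero_grad()
    second_gen_loss = first_gen_loss + first_disc_loss
    second_gen_loss.backward()
    g_opt.step()

    # Second discrimination loss value updates the discriminator.
    d_opt.zero_grad()
    second_disc_loss.backward()
    d_opt.step()
    return second_gen_loss.item(), second_disc_loss.item()
```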
Further, before step S30, that is, before inputting the reticulate position information and the reticulate facial image into the descreening model, the method further includes:
s321: and obtaining a reticulate training sample and a non-reticulate training sample with the same sample number.
S322: and extracting reticulate pattern position information of the reticulate pattern training sample by adopting a reticulate pattern extraction model.
S323: inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of a generating type countermeasure network, generating an imitated human face image, and obtaining a first generation loss value according to the imitated human face image and the reticulate pattern training sample.
S324: the method comprises the steps of inputting an imitated human face image and a training sample without reticulate patterns into a discriminator of a generated type countermeasure network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is the loss of a generator in the training process, and the second discrimination loss value is the loss of the discriminator in the training process.
The step flow of steps S321 to S324 is the same as steps S311 to S314, and in this embodiment, the difference is that a simulation loss value is added in steps S325 to S328, so that the descreening precision of the descreened model obtained by training is further improved.
S325: Inputting the simulated face image into the feature extraction model, and obtaining a simulation loss value according to the simulated face features extracted by the feature extraction model and the features of the non-reticulate-pattern training sample extracted by the feature extraction model.
Specifically, by also passing the generated simulated face image through the feature extraction model, a simulation loss value is obtained in addition to the loss value produced by the generator during training and the loss value produced by the discriminator during discrimination. Adding this simulation loss value aids the updating of the network parameters during training of the descreening model and improves the accuracy of the descreening model.
S326: and carrying out arithmetic addition on the simulation loss value, the first generation loss value and the first discrimination loss value to obtain a third generation loss value.
S327: and arithmetically adding the simulation loss value and the second discrimination loss value to obtain a third discrimination loss value.
S328: and updating the network parameters of the generator according to the third generation loss value.
S329: updating the network parameters of the discriminator according to the third discrimination loss value, and obtaining a descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
In steps S321-S329, a further specific embodiment of training to obtain a descreening model is provided, where the embodiment adds a feature extraction model to take into account the generated simulated face image, and may also obtain a loss value generated by the generator during training and a loss value generated during the discriminating process of the discriminator, so as to further optimize the descreening model.
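The second training variant differs from the first only in the simulation loss value taken from the feature extraction model. A sketch of how the three loss values could be combined is given below; the feature-space L1 distance used for the simulation loss is an assumption, as is treating the feature extraction model as an already trained, frozen network.

```python
import torch.nn as nn

l1 = nn.L1Loss()

def combined_losses(simulated, clean, feature_extractor,
                    first_gen_loss, first_disc_loss, second_disc_loss):
    """Form the third generation / discrimination loss values of this variant.

    feature_extractor is the (already trained) feature extraction model; the
    feature-space L1 distance used for the simulation loss is an assumption.
    """
    sim_loss = l1(feature_extractor(simulated), feature_extractor(clean))
    third_gen_loss = sim_loss + first_gen_loss + first_disc_loss    # updates the generator
    third_disc_loss = sim_loss + second_disc_loss                   # updates the discriminator
    return third_gen_loss, third_disc_loss
```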
Further, before step S40, that is, before inputting the descreened face image into the feature extraction model, obtaining the descreened face image features, the method further includes:
S41: Initializing a convolutional neural network, wherein the weights initialized by the convolutional neural network satisfy a variance condition in which n_l represents the number of samples input at the l-th layer of the convolutional neural network, S() represents the variance operation, W_l represents the weights of the l-th layer of the convolutional neural network, ∀ denotes "for any", and l denotes the l-th layer of the convolutional neural network.
The initialization operation of the convolutional neural network can accelerate the training speed of the feature extraction model and improve the accuracy of feature extraction of the feature extraction model.
It will be appreciated that the initialization of the weights of the convolutional neural network affects the training of the feature extraction model. In one embodiment, when the initialized weights of the convolutional neural network satisfy the condition given above, the training effect is remarkably improved.
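A sketch of such an initialization is given below. It assumes that the condition referred to above is the variance criterion (1/2) * n_l * S(W_l) = 1, i.e. S(W_l) = 2 / n_l, for every layer l (the patent gives the formula only as an image), and draws the layer weights from a zero-mean Gaussian with that variance.

```python
import numpy as np

def init_conv_weight(out_channels: int, in_channels: int,
                     kernel_h: int, kernel_w: int) -> np.ndarray:
    """Initialize one convolutional layer so that S(W_l) = 2 / n_l (assumed condition).

    n_l is taken here as the number of input connections per output unit,
    which is itself an interpretation of "the number of samples input at layer l".
    """
    n_l = in_channels * kernel_h * kernel_w
    std = np.sqrt(2.0 / n_l)                 # variance of the weights equals 2 / n_l
    return np.random.normal(0.0, std, size=(out_channels, in_channels, kernel_h, kernel_w))
```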
S42: and obtaining a training sample without reticulation.
S43: and inputting the training sample without reticulate patterns into the initialized convolutional neural network to obtain a loss value generated in the training process.
Specifically, the loss value generated in the training process can be obtained while training the convolutional neural network with the non-reticulate-pattern training samples.
S44: and updating the network parameters of the convolutional neural network according to the loss value generated in the training process to obtain a feature extraction model.
Specifically, the method for updating the network parameter in this embodiment may be a back propagation algorithm.
In steps S41-S44, a specific embodiment of training a feature extraction model is provided: the weights of the convolutional neural network are initialized so that they satisfy the condition given above, which speeds up the training of the feature extraction model and improves the accuracy of its feature extraction.
In the embodiment of the invention, firstly, a textured face image and a non-textured face image to be compared are obtained, a textured extraction model is adopted to extract textured position information in the textured face image, the textured position information and the textured face image are input into a textured removal model to obtain a textured removal face image, and the texture of the textured face image can be accurately removed by utilizing the extracted textured position information according to the simulation function of a generated type countermeasure network, so that the textured removal face image is generated; then, respectively extracting the characteristics of the anilox-removed face image and the face image characteristics of the face image to be compared without anilox by adopting a characteristic extraction model; and finally, confirming a face verification result by calculating the similarity between the characteristics of the anilox-removed face image and the characteristics of the face image to be compared. According to the embodiment of the invention, the reticulate patterns of the facial image with the reticulate patterns can be accurately removed, and the accuracy of facial verification is effectively improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation of the embodiments of the present invention.
Based on the face verification method provided in the embodiment, the embodiment of the invention further provides a device embodiment for realizing the steps and the method in the method embodiment.
Fig. 2 shows a schematic block diagram of a face verification apparatus corresponding one-to-one to the face verification method in the embodiment. As shown in fig. 2, the face verification apparatus includes an image to be compared acquisition module 10, a reticulate pattern position information extraction module 20, a descreened face image acquisition module 30, a descreened face image feature acquisition module 40, a face image feature to be compared acquisition module 50, and a verification module 60. The functions implemented by these modules correspond one-to-one to the steps of the face verification method in the embodiment; to avoid redundancy, they are not described again in detail here.
The image to be compared acquisition module 10 is used for acquiring the face image with a reticulate pattern and the face image to be compared without a reticulate pattern.
The reticulate pattern position information extraction module 20 is configured to extract reticulate pattern position information in the facial image with the reticulate pattern by using a reticulate pattern extraction model, where the reticulate pattern extraction model is obtained based on pixel difference training.
The descreened face image acquisition module 30 is configured to input the reticulate pattern position information and the textured face image into a descreening model to obtain the descreened face image, where the descreening model is obtained by training with a generative adversarial network.
The descreened face image feature acquisition module 40 is configured to input the descreened face image into a feature extraction model to obtain the descreened face image feature, where the feature extraction model is obtained by training with a convolutional neural network.
The to-be-compared face image feature obtaining module 50 is configured to input the to-be-compared face image without the reticulate pattern into the feature extraction model to obtain the to-be-compared face image feature.
The verification module 60 is configured to calculate a similarity between the feature of the anilox face image and the feature of the face image to be compared, and if the similarity is greater than a preset threshold, the face verification is successful.
Optionally, the face verification device is further specifically configured to:
acquiring a reticulate pattern training sample set, wherein each reticulate pattern training sample in the reticulate pattern training sample set comprises a reticulate pattern face training image of the same person and a corresponding non-reticulate pattern face training image;
reading pixel values in the face training image with the reticulate pattern and the face training image without the reticulate pattern, and normalizing the pixel values to be within a [0,1] interval;
adopting a normalized pixel value of the face training image with reticulate patterns, correspondingly subtracting the normalized pixel value of the face training image without reticulate patterns based on the pixel distribution position, taking an absolute value of the difference value to obtain a pixel difference value, taking a part of the pixel difference value which is smaller than a preset critical value as 0 and taking a part of the pixel difference value which is not smaller than the preset critical value as 1 to obtain tag reticulate pattern position information;
according to the output of the pre-acquired deep neural network model and the label reticulate pattern position information, calculating the loss generated in the training process of the deep neural network model through a loss function, and updating the network parameters of the deep neural network model through this loss to obtain the reticulate pattern extraction model, wherein in the loss function n represents the total number of pixels, x_i represents the i-th pixel value output by the deep neural network model, and y_i represents the i-th pixel value of the label reticulate pattern position information.
Optionally, the face verification device is further specifically configured to:
and obtaining a reticulate training sample and a non-reticulate training sample with the same sample number.
And extracting reticulate pattern position information of the reticulate pattern training sample by adopting a reticulate pattern extraction model.
Inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of a generating type countermeasure network, generating an imitated human face image, and obtaining a first generation loss value according to the imitated human face image and the reticulate pattern training sample.
The method comprises the steps of inputting an imitated human face image and a training sample without reticulate patterns into a discriminator of a generated type countermeasure network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is a loss value caused by a generator in a training process, and the second discrimination loss value is a loss value caused by the discriminator in the training process.
And arithmetically adding the first generated loss value and the first discrimination loss value to obtain a second generated loss value, and updating the network parameters of the generator by adopting the second generated loss value.
And updating the network parameters of the discriminator by adopting the second discrimination loss value, and obtaining a descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
Optionally, the face verification device is further specifically configured to:
and obtaining a reticulate training sample and a non-reticulate training sample with the same sample number.
And extracting reticulate pattern position information of the reticulate pattern training sample by adopting a reticulate pattern extraction model.
Inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of a generating type countermeasure network, generating an imitated human face image, and obtaining a first generation loss value according to the imitated human face image and the reticulate pattern training sample.
The method comprises the steps of inputting an imitated human face image and a training sample without reticulate patterns into a discriminator of a generated type countermeasure network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is the loss of a generator in the training process, and the second discrimination loss value is the loss of the discriminator in the training process.
Inputting the simulated face image into the feature extraction model, and obtaining a simulation loss value according to the simulated face features extracted by the feature extraction model and the features of the non-reticulate-pattern training sample extracted by the feature extraction model.
And carrying out arithmetic addition on the simulation loss value, the first generation loss value and the first discrimination loss value to obtain a third generation loss value.
Carrying out arithmetic addition on the simulation loss value and the second discrimination loss value to obtain a third discrimination loss value;
and updating the network parameters of the generator according to the third generation loss value.
Updating the network parameters of the discriminator according to the third discrimination loss value, and obtaining a descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
Optionally, the face verification device is further specifically configured to:
initializing a convolutional neural network, wherein the weights initialized by the convolutional neural network satisfy a variance condition in which n_l represents the number of samples input at the l-th layer of the convolutional neural network, S() represents the variance operation, W_l represents the weights of the l-th layer of the convolutional neural network, ∀ denotes "for any", and l denotes the l-th layer of the convolutional neural network.
And obtaining a training sample without reticulation.
And inputting the training sample without reticulate patterns into the initialized convolutional neural network to obtain a loss value generated in the training process.
And updating the network parameters of the convolutional neural network according to the loss value generated in the training process to obtain a feature extraction model.
In the embodiment of the invention, firstly, a textured face image and a non-textured face image to be compared are obtained, a textured extraction model is adopted to extract textured position information in the textured face image, the textured position information and the textured face image are input into a textured removal model to obtain a textured removal face image, and the texture of the textured face image can be accurately removed by utilizing the extracted textured position information according to the simulation function of a generated type countermeasure network, so that the textured removal face image is generated; then, respectively extracting the characteristics of the anilox-removed face image and the face image characteristics of the face image to be compared without anilox by adopting a characteristic extraction model; and finally, confirming a face verification result by calculating the similarity between the characteristics of the anilox-removed face image and the characteristics of the face image to be compared. According to the embodiment of the invention, the reticulate patterns of the facial image with the reticulate patterns can be accurately removed, and the accuracy of facial verification is effectively improved.
The present embodiment provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the face verification method in the embodiment, and in order to avoid repetition, details are not described herein. Alternatively, the computer program when executed by the processor implements the functions of each module/unit in the face verification apparatus in the embodiment, and in order to avoid repetition, details are not described herein.
Fig. 3 is a schematic diagram of a computer device according to an embodiment of the present invention. As shown in fig. 3, the computer device 70 of this embodiment includes a processor 71, a memory 72, and a computer program 73 stored in the memory 72 and executable on the processor 71. When executed by the processor 71, the computer program 73 implements the face verification method in the embodiment, which is not described again here to avoid repetition. Alternatively, when executed by the processor 71, the computer program 73 implements the functions of each module/unit in the face verification apparatus in the embodiment, which are likewise not described again here.
The computer device 70 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or the like. The computer device 70 may include, but is not limited to, the processor 71 and the memory 72. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the computer device 70 and is not intended to limit it; the computer device may include more or fewer components than shown, combine certain components, or have different components. For example, the computer device may also include input and output devices, network access devices, a bus, and the like.
The processor 71 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 72 may be an internal storage unit of the computer device 70, such as a hard disk or memory of the computer device 70. The memory 72 may also be an external storage device of the computer device 70, such as a plug-in hard disk provided on the computer device 70, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like. Further, the memory 72 may also include both internal storage units and external storage devices of the computer device 70. The memory 72 is used to store computer programs and other programs and data required by the computer device. The memory 72 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.
Claims (9)
1. A face verification method, the method comprising:
acquiring a facial image with reticulate patterns and a facial image to be compared without reticulate patterns;
extracting reticulate pattern position information in the reticulate pattern face image by adopting a reticulate pattern extraction model, wherein the reticulate pattern extraction model is obtained based on pixel difference training;
Inputting the reticulate pattern position information and the reticulate pattern face image into a reticulate pattern removing model to obtain a reticulate pattern removing face image, wherein the reticulate pattern removing model is obtained by adopting a generating type countermeasure network training;
inputting the anilox-removed face image into a feature extraction model to obtain anilox-removed face image features, wherein the feature extraction model is obtained by adopting convolutional neural network training;
inputting the facial image to be compared without reticulate patterns into the feature extraction model to obtain the features of the facial image to be compared;
calculating the similarity between the anilox removed face image features and the face image features to be compared, and if the similarity is larger than a preset threshold, successful face verification is achieved;
wherein before the inputting of the anilox-removed face image into the feature extraction model to obtain the anilox-removed face image features, the method further comprises:
initializing a convolutional neural network, wherein the weights initialized by the convolutional neural network meet the following condition: ∀l, (1/2)·n_l·S(W_l) = 1, where n_l represents the number of samples input at the l-th layer of the convolutional neural network, S(·) represents the variance operation, W_l represents the weight of the l-th layer of the convolutional neural network, and ∀ represents "for any";
Acquiring a training sample without reticulate patterns;
inputting the training sample without reticulate patterns into the initialized convolutional neural network to obtain a loss value generated in the training process;
and updating network parameters of the convolutional neural network according to the loss value generated in the training process to obtain a feature extraction model.
2. The method of claim 1, wherein prior to the extracting texture position information in the textured face image using the texture extraction model, the method further comprises:
acquiring a reticulate pattern training sample set, wherein each reticulate pattern training sample in the reticulate pattern training sample set comprises a reticulate pattern face training image of the same person and a corresponding non-reticulate pattern face training image;
reading pixel values in the reticulate pattern face training image and the non-reticulate pattern face training image, and normalizing the pixel values into the [0, 1] interval;
subtracting, based on the pixel distribution positions, the normalized pixel values of the non-reticulate pattern face training image from the normalized pixel values of the reticulate pattern face training image, taking the absolute value of each difference as a pixel difference value, setting the pixel difference values smaller than a preset critical value to 0 and those not smaller than the preset critical value to 1, so as to obtain label reticulate pattern position information;
Calculating the loss generated in the training process of the deep neural network model through a loss function according to the output of the pre-acquired deep neural network model and the label reticulate pattern position information, and updating the network parameters of the deep neural network model through the loss to obtain the reticulate pattern extraction model, wherein the loss function is defined in terms of n, x_i and y_i, where n represents the total number of pixels, x_i represents the i-th pixel value of the output of the deep neural network model, and y_i represents the i-th pixel value of the label reticulate pattern position information.
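For illustration, a minimal NumPy sketch of this label-generation step, assuming aligned uint8 images of the same person; the value 0.05 stands in for the preset critical value.

```python
import numpy as np

def label_reticulate_mask(textured_u8, clean_u8, critical_value=0.05):
    # Normalize pixel values into the [0, 1] interval.
    textured = textured_u8.astype(np.float32) / 255.0
    clean = clean_u8.astype(np.float32) / 255.0
    # Position-wise subtraction; the absolute value is the pixel difference value.
    diff = np.abs(textured - clean)
    # Differences smaller than the critical value become 0, the rest become 1.
    return (diff >= critical_value).astype(np.float32)
```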
3. The method of claim 1, wherein the generating type countermeasure network comprises a generator and a discriminator, and before the inputting of the reticulate pattern position information and the reticulate pattern face image into the reticulate pattern removing model to obtain the reticulate pattern removing face image, the method further comprises:
obtaining a training sample with reticulate patterns and a training sample without reticulate patterns, wherein the number of the training samples with reticulate patterns is equal to that of the training samples without reticulate patterns;
extracting reticulate pattern position information of the reticulate pattern training sample by adopting the reticulate pattern extraction model;
inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of the generating type countermeasure network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the training sample without reticulate patterns;
Inputting the simulated face image and the training sample without reticulate patterns into a discriminator of a generated type countermeasure network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is a loss value caused by the generator in the training process, and the second discrimination loss value is a loss value caused by the discriminator in the training process;
carrying out arithmetic addition on the first generation loss value and the first discrimination loss value to obtain a second generation loss value, and updating the network parameters of the generator by adopting the second generation loss value;
and updating the network parameters of the discriminator by adopting the second discrimination loss value, and obtaining the descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
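A minimal sketch of one training step corresponding to claim 3, under stated assumptions: PyTorch modules, an L1 pixel reconstruction loss, binary cross-entropy adversarial losses with a sigmoid-ended discriminator, and the reticulate pattern position information supplied to the generator as an extra input channel. None of these choices is prescribed by the claim.

```python
import torch
import torch.nn.functional as F

def descreen_gan_step(generator, discriminator, g_opt, d_opt,
                      textured, texture_mask, clean):
    # Generate a simulated (descreened) face from the textured sample and its mask.
    simulated = generator(torch.cat([textured, texture_mask], dim=1))

    # First generation loss: pixel distance to the training sample without reticulate patterns.
    first_gen_loss = F.l1_loss(simulated, clean)

    # First discrimination loss: the loss attributed to the generator, i.e. how far the
    # simulated image is from being judged real by the discriminator.
    d_fake = discriminator(simulated)
    first_disc_loss = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))

    # Second generation loss = first generation loss + first discrimination loss;
    # it updates the generator's network parameters.
    second_gen_loss = first_gen_loss + first_disc_loss
    g_opt.zero_grad()
    second_gen_loss.backward()
    g_opt.step()

    # Second discrimination loss: the loss attributed to the discriminator on real
    # (reticulate-free) and simulated inputs; it updates the discriminator's parameters.
    d_real = discriminator(clean)
    d_fake_det = discriminator(simulated.detach())
    second_disc_loss = (
        F.binary_cross_entropy(d_real, torch.ones_like(d_real)) +
        F.binary_cross_entropy(d_fake_det, torch.zeros_like(d_fake_det))
    )
    d_opt.zero_grad()
    second_disc_loss.backward()
    d_opt.step()
    return second_gen_loss.item(), second_disc_loss.item()
```

The variant of claim 4 additionally adds the simulation loss sketched earlier to both combined losses before the respective updates.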
4. The method of claim 1, wherein the generating type countermeasure network comprises a generator and a discriminator, and before the inputting of the reticulate pattern position information and the reticulate pattern face image into the reticulate pattern removing model to obtain the reticulate pattern removing face image, the method further comprises:
Obtaining a training sample with reticulate patterns and a training sample without reticulate patterns, wherein the number of the training samples with reticulate patterns is equal to that of the training samples without reticulate patterns;
extracting reticulate pattern position information of the reticulate pattern training sample by adopting the reticulate pattern extraction model;
inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of the generating type countermeasure network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the training sample without reticulate patterns;
inputting the simulated face image and the training sample without reticulate patterns into a discriminator of a generated type countermeasure network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is the loss caused by the generator in the training process, and the second discrimination loss value is the loss caused by the discriminator in the training process;
inputting the simulated face image into the feature extraction model, and obtaining a simulation loss value according to the simulated face features extracted by the feature extraction model and the features of the training sample without reticulate patterns extracted by the feature extraction model;
carrying out arithmetic addition on the simulation loss value, the first generation loss value and the first discrimination loss value to obtain a third generation loss value;
carrying out arithmetic addition on the simulation loss value and the second discrimination loss value to obtain a third discrimination loss value;
updating network parameters of the generator according to the third generation loss value;
updating the network parameters of the discriminator according to the third discrimination loss value, and obtaining a descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
5. A face verification apparatus, the apparatus comprising:
the image to be compared acquisition module is used for acquiring a facial image with reticulate patterns and a facial image to be compared without reticulate patterns;
the reticulate pattern position information extraction module is used for extracting reticulate pattern position information in the facial image with reticulate patterns by adopting a reticulate pattern extraction model, wherein the reticulate pattern extraction model is obtained based on pixel difference training;
the anilox-removing face image acquisition module is used for inputting the reticulate pattern position information and the facial image with reticulate patterns into an anilox-removing model to obtain an anilox-removed face image, wherein the anilox-removing model is obtained by adopting a generating type countermeasure network training;
The characteristic extraction module is used for inputting the anilox-removed face image into a characteristic extraction model to obtain the characteristic of the anilox-removed face image, wherein the characteristic extraction model is obtained by adopting convolutional neural network training;
the to-be-compared face image feature acquisition module is used for inputting the to-be-compared face image without reticulate patterns into the feature extraction model to obtain to-be-compared face image features;
the verification module is used for calculating the similarity between the anilox-removed face image characteristics and the face image characteristics to be compared, and when the similarity is larger than a preset threshold, the face verification is successful;
wherein before the anilox-removed face image is input into the feature extraction model to obtain the anilox-removed face image features, the apparatus is further configured to:
initializing a convolutional neural network, wherein the weights initialized by the convolutional neural network meet the following condition: ∀l, (1/2)·n_l·S(W_l) = 1, where n_l represents the number of samples input at the l-th layer of the convolutional neural network, S(·) represents the variance operation, W_l represents the weight of the l-th layer of the convolutional neural network, and ∀ represents "for any";
acquiring a training sample without reticulate patterns;
Inputting the training sample without reticulate patterns into the initialized convolutional neural network to obtain a loss value generated in the training process;
and updating network parameters of the convolutional neural network according to the loss value generated in the training process to obtain a feature extraction model.
6. The device according to claim 5, characterized in that it is also specifically adapted to:
acquiring a reticulate pattern training sample set, wherein each reticulate pattern training sample in the reticulate pattern training sample set comprises a reticulate pattern face training image of the same person and a corresponding non-reticulate pattern face training image;
reading pixel values in the reticulate pattern face training image and the non-reticulate pattern face training image, and normalizing the pixel values into the [0, 1] interval;
subtracting, based on the pixel distribution positions, the normalized pixel values of the non-reticulate pattern face training image from the normalized pixel values of the reticulate pattern face training image, taking the absolute value of each difference as a pixel difference value, setting the pixel difference values smaller than a preset critical value to 0 and those not smaller than the preset critical value to 1, so as to obtain label reticulate pattern position information;
calculating the loss generated in the training process of the deep neural network model through a loss function according to the output of the pre-acquired deep neural network model and the label reticulate pattern position information, and updating the network parameters of the deep neural network model through the loss to obtain the reticulate pattern extraction model, wherein the loss function is defined in terms of n, x_i and y_i, where n represents the total number of pixels, x_i represents the i-th pixel value of the output of the deep neural network model, and y_i represents the i-th pixel value of the label reticulate pattern position information.
7. The device according to claim 5, characterized in that it is also specifically adapted to:
obtaining a training sample with reticulate patterns and a training sample without reticulate patterns, wherein the number of the training samples with reticulate patterns is equal to that of the training samples without reticulate patterns;
extracting reticulate pattern position information of the reticulate pattern training sample by adopting the reticulate pattern extraction model;
inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of the generating type countermeasure network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the training sample without reticulate patterns;
inputting the simulated face image and the training sample without reticulate patterns into a discriminator of a generated type countermeasure network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is the loss caused by the generator in the training process, and the second discrimination loss value is the loss caused by the discriminator in the training process;
Inputting the simulated face image into the feature extraction model to obtain a simulation loss value;
carrying out arithmetic addition on the simulation loss value, the first generation loss value and the first discrimination loss value to obtain a third generation loss value;
carrying out arithmetic addition on the simulation loss value and the second discrimination loss value to obtain a third discrimination loss value;
updating network parameters of the generator according to the third generation loss value;
updating the network parameters of the discriminator according to the third discrimination loss value, and obtaining a descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
8. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the face verification method of any one of claims 1 to 4 when the computer program is executed.
9. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the face verification method according to any one of claims 1 to 4.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910827470.8A CN110765843B (en) | 2019-09-03 | 2019-09-03 | Face verification method, device, computer equipment and storage medium |
PCT/CN2019/117774 WO2021042544A1 (en) | 2019-09-03 | 2019-11-13 | Facial verification method and apparatus based on mesh removal model, and computer device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910827470.8A CN110765843B (en) | 2019-09-03 | 2019-09-03 | Face verification method, device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110765843A CN110765843A (en) | 2020-02-07 |
CN110765843B true CN110765843B (en) | 2023-09-22 |
Family
ID=69330204
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910827470.8A Active CN110765843B (en) | 2019-09-03 | 2019-09-03 | Face verification method, device, computer equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110765843B (en) |
WO (1) | WO2021042544A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111612799A (en) * | 2020-05-15 | 2020-09-01 | 中南大学 | Face data pair-oriented incomplete reticulate pattern face repairing method and system and storage medium |
CN113486688A (en) * | 2020-05-27 | 2021-10-08 | 海信集团有限公司 | Face recognition method and intelligent device |
CN113095219A (en) * | 2021-04-12 | 2021-07-09 | 中国工商银行股份有限公司 | Reticulate pattern face recognition method and device |
CN113469898B (en) * | 2021-06-02 | 2024-07-19 | 北京邮电大学 | Image de-distortion method based on deep learning and related equipment |
CN114299590A (en) * | 2021-12-31 | 2022-04-08 | 中国科学技术大学 | Training method of face completion model, face completion method and system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105930797A (en) * | 2016-04-21 | 2016-09-07 | 腾讯科技(深圳)有限公司 | Face verification method and device |
CN108734673A (en) * | 2018-04-20 | 2018-11-02 | 平安科技(深圳)有限公司 | Descreening systematic training method, descreening method, apparatus, equipment and medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA3115898C (en) * | 2017-10-11 | 2023-09-26 | Aquifi, Inc. | Systems and methods for object identification |
CN108416343B (en) * | 2018-06-14 | 2020-12-22 | 北京远鉴信息技术有限公司 | Face image recognition method and device |
CN109871755A (en) * | 2019-01-09 | 2019-06-11 | 中国平安人寿保险股份有限公司 | A kind of auth method based on recognition of face |
CN110032931B (en) * | 2019-03-01 | 2023-06-13 | 创新先进技术有限公司 | Method and device for generating countermeasure network training and removing reticulation and electronic equipment |
- 2019-09-03: application CN201910827470.8A filed in China (granted as CN110765843B, legal status: Active)
- 2019-11-13: PCT application PCT/CN2019/117774 filed (published as WO2021042544A1, status: Application Filing)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105930797A (en) * | 2016-04-21 | 2016-09-07 | 腾讯科技(深圳)有限公司 | Face verification method and device |
CN108734673A (en) * | 2018-04-20 | 2018-11-02 | 平安科技(深圳)有限公司 | Descreening systematic training method, descreening method, apparatus, equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
WO2021042544A1 (en) | 2021-03-11 |
CN110765843A (en) | 2020-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110765843B (en) | Face verification method, device, computer equipment and storage medium | |
CN110020592B (en) | Object detection model training method, device, computer equipment and storage medium | |
CN107169454B (en) | Face image age estimation method and device and terminal equipment thereof | |
CN108875534B (en) | Face recognition method, device, system and computer storage medium | |
JP6812086B2 (en) | Training method for reticulated pattern removal system, reticulated pattern removal method, equipment, equipment and media | |
CN109829396B (en) | Face recognition motion blur processing method, device, equipment and storage medium | |
CN114331829A (en) | Countermeasure sample generation method, device, equipment and readable storage medium | |
CN107944381B (en) | Face tracking method, face tracking device, terminal and storage medium | |
CN110046622B (en) | Targeted attack sample generation method, device, equipment and storage medium | |
CN113887408B (en) | Method, device, equipment and storage medium for detecting activated face video | |
CN113298152B (en) | Model training method, device, terminal equipment and computer readable storage medium | |
CN111639667B (en) | Image recognition method, device, electronic equipment and computer readable storage medium | |
CN111680544B (en) | Face recognition method, device, system, equipment and medium | |
CN110648289A (en) | Image denoising processing method and device | |
CN113221601B (en) | Character recognition method, device and computer readable storage medium | |
CN111353325A (en) | Key point detection model training method and device | |
CN112001285A (en) | Method, device, terminal and medium for processing beautifying image | |
CN112509154A (en) | Training method of image generation model, image generation method and device | |
CN110210425B (en) | Face recognition method and device, electronic equipment and storage medium | |
CN111931148A (en) | Image processing method and device and electronic equipment | |
CN116580208A (en) | Image processing method, image model training method, device, medium and equipment | |
CN113705459B (en) | Face snapshot method and device, electronic equipment and storage medium | |
CN112348069B (en) | Data enhancement method, device, computer readable storage medium and terminal equipment | |
CN115410257A (en) | Image protection method and related equipment | |
CN115410281A (en) | Electronic signature identification method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||