CN110765843A - Face verification method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110765843A
Authority
CN
China
Prior art keywords
reticulate pattern
face image
loss value
training
face
Prior art date
Legal status
Granted
Application number
CN201910827470.8A
Other languages
Chinese (zh)
Other versions
CN110765843B (en)
Inventor
罗霄
胡文成
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910827470.8A priority Critical patent/CN110765843B/en
Priority to PCT/CN2019/117774 priority patent/WO2021042544A1/en
Publication of CN110765843A publication Critical patent/CN110765843A/en
Application granted granted Critical
Publication of CN110765843B publication Critical patent/CN110765843B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face verification method and device, computer equipment and a storage medium, and relates to the technical field of artificial intelligence. The face verification method comprises the following steps: acquiring a face image carrying a reticulate pattern and a reticulate-pattern-free face image to be compared; extracting reticulate pattern position information from the reticulate-patterned face image; inputting the reticulate pattern position information and the reticulate-patterned face image into a descreening model to obtain a descreened face image; inputting the descreened face image into a feature extraction model to obtain descreened face image features; inputting the reticulate-pattern-free face image to be compared into the feature extraction model to obtain face image features to be compared; and calculating the similarity between the descreened face image features and the face image features to be compared, the face verification succeeding when the similarity is greater than a preset threshold. With this face verification method, the reticulate pattern in a reticulate-patterned face image can be removed, and the accuracy of face verification is effectively improved.

Description

Face verification method and device, computer equipment and storage medium
[ technical field ]
The invention relates to the technical field of artificial intelligence, in particular to a face verification method, a face verification device, computer equipment and a storage medium.
[ background of the invention ]
With the rapid development of Internet technology, ensuring the security of user accounts through face verification has become very important. Face verification is a branch of the face recognition field: a face verification algorithm can fully automatically compare two face photos and judge whether they show the same person. It can be used for user identity verification in many scenarios such as Internet finance.
At present, in order to protect citizens' privacy, a reticulate watermark is added when a photo is released externally, which produces reticulate-patterned photos such as identity card, social security card, pass card and other certificate face photos. To perform face verification on such a reticulate-patterned photo, a professional must first remove the reticulate pattern with a de-noising algorithm and then repair the descreened picture before the verification can finally be carried out.
Because the existing image processing approach to removing reticulate patterns places high demands on the operator's professional skill, the accuracy of face verification is not high.
[ summary of the invention ]
In view of this, embodiments of the present invention provide a face verification method, a face verification apparatus, a computer device and a storage medium, so as to solve the problem that the accuracy of face verification on existing reticulate-patterned images is not high.
In a first aspect, an embodiment of the present invention provides a face verification method, including:
acquiring a face image carrying a reticulate pattern and a reticulate-pattern-free face image to be compared;
extracting reticulate pattern position information from the reticulate-patterned face image by using a reticulate pattern extraction model, wherein the reticulate pattern extraction model is obtained by training based on pixel difference values;
inputting the reticulate pattern position information and the reticulate-patterned face image into a descreening model to obtain a descreened face image, wherein the descreening model is obtained by training a generative adversarial network;
inputting the descreened face image into a feature extraction model to obtain descreened face image features, wherein the feature extraction model is obtained by training a convolutional neural network;
inputting the reticulate-pattern-free face image to be compared into the feature extraction model to obtain face image features to be compared;
and calculating the similarity between the descreened face image features and the face image features to be compared, the face verification being successful when the similarity is greater than a preset threshold.
In the above aspect and any possible implementation thereof, an implementation is further provided in which, before the extracting of the reticulate pattern position information from the reticulate-patterned face image by using the reticulate pattern extraction model, the method further includes:
acquiring a reticulate pattern training sample set, wherein each reticulate pattern training sample in the reticulate pattern training sample set comprises a reticulate pattern human face training image of the same person and a corresponding human face training image without reticulate patterns;
reading pixel values in the face training image with the reticulate pattern and the face training image without the reticulate pattern, and normalizing the pixel values to be in a [0,1] interval;
subtracting, at each pixel distribution position, the normalized pixel value of the face training image without the reticulate pattern from the normalized pixel value of the face training image with the reticulate pattern, taking the absolute value of the difference to obtain a pixel difference value, setting pixel difference values smaller than a preset critical value to 0 and pixel difference values not smaller than the preset critical value to 1, and obtaining label reticulate pattern position information;
calculating the loss generated by the deep neural network model in the training process through a loss function according to the output of the deep neural network model and the label reticulate pattern position information acquired in advance, and updating the network parameters of the deep neural network model with the loss to obtain the reticulate pattern extraction model, wherein the loss function, given as an image (Figure BDA0002189563430000031), is defined in terms of n, the total number of pixels, x_i, the i-th pixel value output by the deep neural network model, and y_i, the i-th pixel value of the label reticulate pattern position information.
In the above aspect and any possible implementation thereof, an implementation is further provided in which the generative adversarial network includes a generator and a discriminator, and before the inputting of the reticulate pattern position information and the reticulate-patterned face image into the descreening model to obtain the descreened face image, the method further includes:
acquiring equal numbers of reticulate-patterned training samples and reticulate-pattern-free training samples;
extracting the reticulate pattern position information of the reticulate-patterned training samples by using the reticulate pattern extraction model;
inputting the reticulate pattern position information of the reticulate-patterned training samples and the reticulate-patterned training samples into the generator of the generative adversarial network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the reticulate-pattern-free training samples;
inputting the simulated face image and the reticulate-pattern-free training samples into the discriminator of the generative adversarial network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is a loss value caused by the generator in the training process, and the second discrimination loss value is a loss value caused by the discriminator in the training process;
arithmetically adding the first generation loss value and the first discrimination loss value to obtain a second generation loss value, and updating the network parameters of the generator with the second generation loss value;
and updating the network parameters of the discriminator with the second discrimination loss value, and obtaining the descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
In the above aspect and any possible implementation thereof, an implementation is further provided in which the generative adversarial network includes a generator and a discriminator, and before the inputting of the reticulate pattern position information and the reticulate-patterned face image into the descreening model to obtain the descreened face image, the method further includes:
acquiring equal numbers of reticulate-patterned training samples and reticulate-pattern-free training samples;
extracting the reticulate pattern position information of the reticulate-patterned training samples by using the reticulate pattern extraction model;
inputting the reticulate pattern position information of the reticulate-patterned training samples and the reticulate-patterned training samples into the generator of the generative adversarial network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the reticulate-pattern-free training samples;
inputting the simulated face image and the reticulate-pattern-free training samples into the discriminator of the generative adversarial network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is the loss of the generator in the training process, and the second discrimination loss value is the loss of the discriminator in the training process;
inputting the simulated face image into the feature extraction model, and obtaining a simulation loss value according to the simulated face features extracted by the feature extraction model and the features of the reticulate-pattern-free training samples extracted by the feature extraction model;
arithmetically adding the simulation loss value, the first generation loss value and the first discrimination loss value to obtain a third generation loss value;
arithmetically adding the simulation loss value and the second discrimination loss value to obtain a third discrimination loss value;
updating the network parameters of the generator according to the third generation loss value;
and updating the network parameters of the discriminator according to the third discrimination loss value, and obtaining the descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
In the above aspect and any possible implementation thereof, an implementation is further provided in which, before the inputting of the descreened face image into the feature extraction model to obtain the descreened face image features, the method further includes:
initializing a convolutional neural network, wherein the initialized weights of the convolutional neural network satisfy a variance condition given as an image (Figure BDA0002189563430000041), in which n_l represents the number of samples input at the l-th layer of the convolutional neural network, S() represents the variance operation, W_l represents the weights of the l-th layer of the convolutional neural network, and the condition holds for an arbitrary layer l;
acquiring a training sample without reticulate patterns;
inputting the training sample without the reticulate pattern into the initialized convolutional neural network to obtain a loss value generated in the training process;
and updating the network parameters of the convolutional neural network according to the loss value generated in the training process to obtain a feature extraction model.
In a second aspect, an embodiment of the present invention provides a face verification apparatus, including:
the comparison image acquisition module is used for acquiring a face image with reticulate patterns and a face image to be compared without the reticulate patterns;
the reticulate pattern position information extraction module is used for extracting reticulate pattern position information in the human face image with the reticulate pattern by adopting a reticulate pattern extraction model, wherein the reticulate pattern extraction model is obtained based on pixel difference value training;
the descreened face image acquisition module is used for inputting the reticulate pattern position information and the reticulate-patterned face image into a descreening model to obtain a descreened face image, wherein the descreening model is obtained by training a generative adversarial network;
the descreened face image feature acquisition module is used for inputting the descreened face image into a feature extraction model to obtain descreened face image features, wherein the feature extraction model is obtained by training a convolutional neural network;
the to-be-compared face image feature acquisition module is used for inputting the to-be-compared face image without the reticulate pattern into the feature extraction model to obtain the to-be-compared face image features;
and the verification module is used for calculating the similarity between the descreened face image characteristics and the face image characteristics to be compared, and when the similarity is greater than a preset threshold value, the face verification is successful.
In a third aspect, a computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above-mentioned face verification method when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, including: a computer program which, when executed by a processor, implements the steps of the above-described face verification method.
In the embodiments of the present invention, a reticulate-patterned face image and a reticulate-pattern-free face image to be compared are first acquired, reticulate pattern position information is extracted from the reticulate-patterned face image by using a reticulate pattern extraction model, and the reticulate pattern position information and the reticulate-patterned face image are input into a descreening model to obtain a descreened face image; relying on the simulation capability of the generative adversarial network, the reticulate pattern can thus be accurately removed from the reticulate-patterned face image by using the extracted reticulate pattern position information, and a descreened face image is generated. A feature extraction model is then used to extract the descreened face image features of the descreened face image and the face image features of the reticulate-pattern-free face image to be compared, respectively. Finally, the face verification result is confirmed by calculating the similarity between the descreened face image features and the face image features to be compared. According to the embodiments of the present invention, the reticulate pattern can be accurately removed from the reticulate-patterned face image, and the accuracy of face verification is effectively improved.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
FIG. 1 is a flow chart of a face verification method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a face verification apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a computer device according to an embodiment of the invention.
[ detailed description of the embodiments ]
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used to describe preset ranges, etc. in embodiments of the present invention, these preset ranges should not be limited to these terms. These terms are only used to distinguish preset ranges from each other. For example, the first preset range may also be referred to as a second preset range, and similarly, the second preset range may also be referred to as the first preset range, without departing from the scope of the embodiments of the present invention.
The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
Fig. 1 shows a flowchart of the face verification method in the present embodiment. The face verification method can be applied to a face verification system, which can be used to verify reticulate-patterned face images. The face verification system can run on computer equipment capable of human-computer interaction with a user, including but not limited to computers, smart phones and tablets. As shown in Fig. 1, the face verification method includes the following steps:
s10: and acquiring a face image with a reticulate pattern and a face image to be compared without the reticulate pattern.
Here, the reticulate-patterned face image and the reticulate-pattern-free face image to be compared are used for face verification, i.e., for judging whether the two face images come from the same face.
S20: and extracting the reticulate pattern position information in the human face image with the reticulate pattern by adopting a reticulate pattern extraction model, wherein the reticulate pattern extraction model is obtained based on pixel difference value training.
It can be understood that the reticulate-patterned face image cannot be directly compared with the reticulate-pattern-free face image to be compared for verification: the interference of the reticulate pattern in the reticulate-patterned face image greatly affects the similarity calculated between the image features.
In an embodiment, a reticulate pattern extraction model is specifically used to extract the reticulate pattern position information from the reticulate-patterned face image, so as to remove the interference caused by the reticulate pattern.
S30: and inputting the reticulate pattern position information and the reticulate pattern-carrying face image into a reticulate pattern removing model to obtain a reticulate pattern removing face image, wherein the reticulate pattern removing model is obtained by adopting generative confrontation network training.
Here, the generative adversarial network (GAN) is a deep learning model and one of the approaches to unsupervised learning on complex distributions. A generator and a discriminator are played against each other, and the model learns to produce output reasonably close to what is expected. It will be appreciated that the generative adversarial network in effect keeps updating and optimizing itself according to the game played between the generator and the discriminator.
In an embodiment, the reticulate pattern position information and the reticulate-patterned face image are input into the descreening model. Because the descreening model is obtained by training a generative adversarial network, it can generate a simulated face image in which the reticulate pattern is removed according to the input reticulate pattern positions, and can therefore output a descreened face image that closely restores the original image, improving the accuracy of face verification.
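As an illustration only (not part of the original disclosure), the inference path of steps S20 and S30 can be sketched in Python, assuming PyTorch modules for the reticulate pattern extraction model and for the generator, and assuming the generator accepts the face image and the pattern mask as two inputs:

```python
import torch

def remove_mesh(textured_face: torch.Tensor, mesh_extractor, generator) -> torch.Tensor:
    """Steps S20-S30 (sketch): extract reticulate pattern position information, then feed it
    together with the reticulate-patterned face into the trained descreening model (the generator)."""
    with torch.no_grad():
        mesh_positions = mesh_extractor(textured_face)               # S20: reticulate pattern positions
        descreened_face = generator(textured_face, mesh_positions)   # S30: descreened face image
    return descreened_face
```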
S40: inputting the descreened face image into a feature extraction model to obtain descreened face image features, wherein the feature extraction model is obtained by adopting convolutional neural network training.
S50: and inputting the human face image to be compared without the reticulate pattern into the feature extraction model to obtain the features of the human face image to be compared.
It can be understood that, to verify whether the two images come from the same face, a feature extraction model obtained by convolutional neural network training can be used to extract deep features of the images; this both ensures the accuracy of face verification and remarkably improves its efficiency.
S60: and calculating the similarity between the descreened face image characteristics and the face image characteristics to be compared, and when the similarity is greater than a preset threshold value, successfully verifying the face.
Specifically, a number of similarity comparison algorithms may be used for the similarity calculation. The algorithm specifically adopted in this embodiment may be cosine similarity, expressed as
cos(A, B) = (A · B) / (‖A‖ ‖B‖)
where A represents the descreened face image features (in vector form) and B represents the face image features to be compared. Using the cosine similarity comparison algorithm reflects how similar the descreened face image and the reticulate-pattern-free face image to be compared are in their spatial feature distribution, and gives higher accuracy.
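A small sketch of the similarity comparison and threshold decision of step S60; the threshold value 0.8 is purely illustrative and not taken from the patent:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors (step S60)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(descreened_features: np.ndarray, reference_features: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Face verification succeeds when the similarity exceeds the preset threshold."""
    return cosine_similarity(descreened_features, reference_features) > threshold
```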
Further, before step S20, that is, before extracting the reticulate pattern position information from the reticulate-patterned face image by using the reticulate pattern extraction model, the method further includes:
s21: acquiring a reticulate pattern training sample set, wherein each reticulate pattern training sample in the reticulate pattern training sample set comprises a reticulate pattern face training image of the same person and a corresponding non-reticulate pattern face training image.
In an embodiment, 10000 or 5000 pairs of reticulate pattern training samples may be used as the reticulate pattern training sample set; each training sample consists of a reticulate-patterned face training image and the corresponding reticulate-pattern-free face training image of the same person, the only difference between the two images being the presence of the reticulate pattern. All images have the same size, and no particular reticulate pattern shape is required: the reticulate pattern may take any shape.
S22: reading pixel values in the face training image with the reticulate pattern and the face training image without the reticulate pattern, and normalizing the pixel values to be in a [0,1] interval.
It will be appreciated that an image comprises many pixels, and pixel values may span ranges such as 2^8, 2^12 and 2^16, which makes the actual computation expensive. Normalizing the pixel values compresses them into the same range, which simplifies the calculation and helps improve the efficiency of face verification.
In one embodiment, the computer device may directly read the pixel values of the pixels in the textured training sample.
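As an illustration only (not part of the original disclosure), the reading and normalization of step S22 can be sketched with NumPy; the bit depths, image size and random data below are assumptions:

```python
import numpy as np

def normalize_pixels(img: np.ndarray) -> np.ndarray:
    """Step S22 (sketch): scale integer pixel values (e.g. 8-, 12- or 16-bit) into the [0, 1] interval."""
    if np.issubdtype(img.dtype, np.integer):
        max_val = np.iinfo(img.dtype).max
    else:
        max_val = 1.0  # assume float images are already in [0, 1]
    return img.astype(np.float32) / max_val

# Illustrative 8-bit training pair of identical size (real data would be the paired face photos).
textured = normalize_pixels(np.random.randint(0, 256, (112, 112), dtype=np.uint8))
clean = normalize_pixels(np.random.randint(0, 256, (112, 112), dtype=np.uint8))
```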
S23: and correspondingly subtracting the normalized pixel value of the face training image without the reticulate pattern from the normalized pixel value of the face training image with the reticulate pattern based on the pixel distribution position, taking the absolute value of the difference value to obtain a pixel difference value, taking the part of the pixel difference value smaller than a preset critical value as 0, and taking the part of the pixel difference value not smaller than the preset critical value as 1, and obtaining the reticulate pattern position information of the label.
It will be appreciated that the image is made up of pixels, each having its distribution position on the image. For images of equal size, the corresponding pixel value subtraction means a pixel value subtraction of the same distribution position of the pixels in the respective images.
In one embodiment, the preset critical value may be set to 0.25. Specifically, if the pixel difference value at a pixel distribution position is less than 0.25, it is taken as 0; in this case the pixel values of the reticulate-patterned face training image and the reticulate-pattern-free face training image at that position differ little, and the position is considered to carry no reticulate pattern. Conversely, a pixel difference value not less than 0.25 is taken as 1; in this case the pixel values at the corresponding position differ greatly, and the position is considered to carry a reticulate pattern. It can therefore be determined that a reticulate pattern exists at the pixel distribution positions whose pixel difference value is taken as 1, which yields the label reticulate pattern position information. The label reticulate pattern position information conforms to objective fact and can be used to train the deep neural network model to obtain the reticulate pattern extraction model.
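A minimal sketch of the label generation of step S23, assuming the two normalized arrays from the previous sketch and the 0.25 critical value of this embodiment:

```python
import numpy as np

def label_mesh_mask(textured: np.ndarray, clean: np.ndarray, critical_value: float = 0.25) -> np.ndarray:
    """Step S23 (sketch): absolute pixel difference at each distribution position, binarized into
    label reticulate pattern position information (1 = pattern present, 0 = no pattern)."""
    diff = np.abs(textured - clean)
    return (diff >= critical_value).astype(np.float32)

mask = label_mesh_mask(textured, clean)  # arrays from the previous normalization sketch
```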
S24: calculating the loss of the deep neural network model in the training process through a loss function according to the output of the deep neural network model and the label mesh position information acquired in advance, updating the network parameters of the deep neural network model by using the loss to obtain a mesh extraction model, wherein the loss function is expressed as Where n denotes the total number of pixels, xiI-th pixel value, y, representing the output of the deep neural network modeliRepresenting the ith pixel value at the cross hatch position of the label.
The deep neural network model may be a model whose initial network parameters are obtained in advance through transfer learning, so that it already has a preliminary reticulate pattern extraction capability. Updating the network parameters of such a model speeds up network training, and the reticulate pattern extraction model can be obtained more quickly.
In an embodiment, the loss generated by the model in the training process is calculated from the output of the deep neural network model and the label reticulate pattern position information, and the parameters of the deep neural network model are then updated by back-propagating this loss, yielding the reticulate pattern extraction model.
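The exact loss function appears only as an image, so the training-step sketch below assumes a simple pixel-wise mean absolute error between the model output x_i and the label mask y_i; the network architecture and optimizer are likewise illustrative:

```python
import torch
import torch.nn as nn

def pixel_loss(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Assumed pixel-wise loss over the n pixels of the output x and the label mask y.
    return torch.mean(torch.abs(x - y))

mesh_extractor = nn.Sequential(              # illustrative fully convolutional model
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(mesh_extractor.parameters(), lr=1e-3)

def train_step(textured_batch: torch.Tensor, mask_batch: torch.Tensor) -> float:
    """One update of the reticulate pattern extraction model (step S24)."""
    optimizer.zero_grad()
    pred = mesh_extractor(textured_batch)    # predicted reticulate pattern position information
    loss = pixel_loss(pred, mask_batch)
    loss.backward()                          # back-propagate the loss
    optimizer.step()                         # update the network parameters with the loss
    return loss.item()
```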
It is to be understood that the reticulate pattern extraction model can extract reticulate patterns of arbitrary shape, because the model is trained on pixel difference values and is concerned only with differences at the pixel value level, regardless of the shape of the reticulate pattern positions. The user therefore does not have to train multiple reticulate pattern extraction models for different reticulate pattern shapes.
Steps S21-S24 provide an embodiment of training the reticulate pattern extraction model. Because the reticulate pattern extraction model is trained on pixel difference values, reticulate pattern training samples with different reticulate pattern shapes can all be used for training, and the extraction accuracy of the reticulate pattern extraction model is guaranteed.
Further, the generative adversarial network includes a generator and a discriminator, and before step S30, that is, before inputting the reticulate pattern position information and the reticulate-patterned face image into the descreening model to obtain the descreened face image, the method further includes:
s311: equal numbers of textured training samples and non-textured training samples were obtained.
Setting the sample ratio to 1:1 prevents the descreening model from overfitting during training and effectively improves its generalization capability.
S312: and extracting the reticulate pattern position information of the reticulate pattern training sample by adopting a reticulate pattern extraction model.
It should be noted that the reticulate pattern position information may differ between different reticulate-patterned training samples (e.g., reticulate-patterned face images of two different people); that is, the reticulate pattern type of the training samples may vary.
It can be understood that, since the descreening function is realized on the basis of pixel difference values, the features of the pixel difference values are learned during training for any reticulate pattern type of the reticulate-patterned training sample, and the influence of the geometric distribution of the reticulate pattern pixels on the training sample is avoided.
S313: and inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of the generating type countermeasure network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the reticulate pattern-free training sample.
The generator of the generative confrontation network outputs a simulated face image in the training process, and obtains a first generative loss value according to the simulated face image and a training sample (a contrast sample) without a reticulate pattern so as to update the network parameters of the generator based on the first generative loss value. Wherein the first generated loss value may be determined according to a user-predefined loss function.
In the continuous training of the generator, the final generator can learn how to remove the deep features of the cobwebbing, and the simulated face image with the cobwebbing removed from the cobwebbing training sample can be output according to the input cobwebbing position information and the cobwebbing training sample.
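For illustration, a conditional generator of this kind could be sketched as follows; the encoder layout, channel counts and activations are assumptions, since the patent does not prescribe an architecture:

```python
import torch
import torch.nn as nn

class DescreenGenerator(nn.Module):
    """Illustrative generator: consumes the reticulate-patterned face and its pattern mask as two
    input channels and outputs a simulated (descreened) face image of the same size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, textured_face: torch.Tensor, mesh_mask: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([textured_face, mesh_mask], dim=1))
```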
S314: inputting the simulated face image and the training sample without the reticulate pattern into a discriminator of the generating type confrontation network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is a loss value caused by the generator in the training process, and the second discrimination loss value is a loss value caused by the discriminator in the training process.
The discriminator is a discrimination model for checking the simulated face image output by the generator. According to the network characteristics of the generative countermeasure network, the discriminator can effectively update the network parameters of the generator, so that the simulated face image output by the generator is closer to the comparison sample.
Similarly, a first discrimination loss value and a second discrimination loss value may be obtained from the obtained discrimination result and a preset label (comparison result), and the loss value generated by the discrimination process may be determined according to a loss function predefined by a user.
It should be noted that the first discrimination loss value is a loss value caused by the generator in the training process, and the second discrimination loss value is a loss value caused by the discriminator itself in the training process. It can be understood that, since the simulated face image is generated by the generator, a part of the generated loss value is the loss value caused by the generator in the training process, and the part of the loss value is the first discriminant loss value.
S315: and arithmetically adding the first generation loss value and the first judgment loss value to obtain a second generation loss value, and updating the network parameters of the generator by adopting the second generation loss value.
As can be appreciated, during the training process of the descreening model, the loss values related to the generator can be added together and the network parameters of the generator are updated together, so that the accuracy of descreening of the generated descreening model can be improved.
S316: and updating the network parameters of the discriminator by adopting the second discrimination loss value, and obtaining the descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
In steps S311 to S316, a specific implementation of training to obtain a descreening model is provided, which is capable of updating network parameters according to a first generation loss value, a first discrimination loss value and a second discrimination loss value generated in the training process to obtain the descreening model with higher accuracy.
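A hedged sketch of one training iteration of steps S313-S316; the concrete loss functions (L1 for generation, binary cross-entropy for discrimination, with the discriminator assumed to end in a sigmoid) are left user-defined by the patent and are chosen here only for illustration:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()   # discrimination loss; the discriminator is assumed to end in a sigmoid
l1 = nn.L1Loss()     # generation loss; the patent leaves the loss functions user-defined

def train_descreen_step(generator, discriminator, g_opt, d_opt, textured, mesh_mask, clean):
    """One iteration of steps S313-S316 (loss choices are assumptions, not taken from the patent)."""
    # S313: the generator produces a simulated face image; the first generation loss compares it
    # with the reticulate-pattern-free training sample.
    fake = generator(textured, mesh_mask)
    first_gen_loss = l1(fake, clean)

    # S314: the discriminator scores the simulated image and the clean sample.
    fake_score_g = discriminator(fake)                                    # keeps the generator in this graph
    first_disc_loss = bce(fake_score_g, torch.ones_like(fake_score_g))    # loss attributed to the generator
    real_score = discriminator(clean)
    fake_score_d = discriminator(fake.detach())                           # generator excluded from this graph
    second_disc_loss = bce(real_score, torch.ones_like(real_score)) + \
                       bce(fake_score_d, torch.zeros_like(fake_score_d))  # loss attributed to the discriminator

    # S315: second generation loss = first generation loss + first discrimination loss; update the generator.
    g_opt.zero_grad()
    (first_gen_loss + first_disc_loss).backward()
    g_opt.step()

    # S316: update the discriminator with the second discrimination loss.
    d_opt.zero_grad()
    second_disc_loss.backward()
    d_opt.step()
    return first_gen_loss.item(), second_disc_loss.item()
```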
Further, before step S30, that is, before inputting the reticulate pattern position information and the reticulate-patterned face image into the descreening model to obtain the descreened face image, the method further includes:
s321: equal numbers of textured training samples and non-textured training samples were obtained.
S322: and extracting the reticulate pattern position information of the reticulate pattern training sample by adopting a reticulate pattern extraction model.
S323: and inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of the generating type countermeasure network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the reticulate pattern-free training sample.
S324: inputting the simulated face image and the training sample without the reticulate pattern into a discriminator of the generating type confrontation network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is the loss of the generator in the training process, and the second discrimination loss value is the loss of the discriminator in the training process.
The flow of steps S321-S324 is the same as that of steps S311-S314; the difference in this embodiment is that a simulation loss value is added in steps S325-S328, so as to further improve the descreening accuracy of the trained descreening model.
S325: and inputting the simulated face image into the feature extraction model, and obtaining a simulation loss value according to the simulated face features extracted by the feature extraction model and the features of the training sample without the reticulate pattern extracted by the extracted feature extraction model.
Specifically, considering that the generated simulated face image can also yield, through the feature extraction model, a loss value attributable to the generator in the training process and a loss value arising in the discriminator's discrimination process, the simulation loss value is added; this helps update the network parameters during the training of the descreening model and improves the accuracy of the descreening model.
S326: and arithmetically adding the simulation loss value, the first generation loss value and the first discrimination loss value to obtain a third generation loss value.
S327: and arithmetically adding the simulation loss value and the second discrimination loss value to obtain a third discrimination loss value.
S328: updating the network parameters of the generator according to the third generated loss value.
S329: and updating the network parameters of the discriminator according to the third discrimination loss value, and obtaining the descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
Steps S321-S329 provide another specific embodiment of training the descreening model. It additionally takes into account, via the feature extraction model applied to the generated simulated face image, the loss attributable to the generator in the training process and the loss arising in the discriminator's discrimination process, so as to further optimize the descreening model.
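A sketch of this second embodiment (steps S323-S329), differing from the previous training sketch only in the added simulation loss computed through the feature extraction model; the distance functions remain assumptions:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()   # discriminator assumed to end in a sigmoid
l1 = nn.L1Loss()     # assumed distance for both the generation loss and the simulation loss

def train_descreen_step_v2(generator, discriminator, feature_extractor,
                           g_opt, d_opt, textured, mesh_mask, clean):
    """One iteration of steps S323-S329; the feature extraction model is treated as fixed."""
    fake = generator(textured, mesh_mask)
    first_gen_loss = l1(fake, clean)                                        # S323

    fake_score_g = discriminator(fake)
    first_disc_loss = bce(fake_score_g, torch.ones_like(fake_score_g))      # S324: attributed to the generator
    real_score = discriminator(clean)
    fake_score_d = discriminator(fake.detach())
    second_disc_loss = bce(real_score, torch.ones_like(real_score)) + \
                       bce(fake_score_d, torch.zeros_like(fake_score_d))    # S324: attributed to the discriminator

    # S325: simulation loss between features of the simulated image and of the clean sample.
    with torch.no_grad():
        clean_features = feature_extractor(clean)
    sim_loss = l1(feature_extractor(fake), clean_features)

    # S326 + S328: the third generation loss updates the generator.
    g_opt.zero_grad()
    (sim_loss + first_gen_loss + first_disc_loss).backward()
    g_opt.step()

    # S327 + S329: the third discrimination loss updates the discriminator; the simulation loss does
    # not depend on the discriminator's parameters, so its detached value only shifts the loss magnitude.
    d_opt.zero_grad()
    (sim_loss.detach() + second_disc_loss).backward()
    d_opt.step()
```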
Further, before step S40, that is, before inputting the descreened face image into the feature extraction model to obtain the descreened face image features, the method further includes:
s41: initializing a convolutional neural network, wherein the initialized weights of the convolutional neural network satisfy
Figure BDA0002189563430000141
Wherein n islRepresents the number of samples input at the l-th layer of the convolutional neural network, S () represents the variance operation, WlRepresents the weight of the l layer of the convolutional neural network,denoted arbitrary, l denotes the l-th layer of the convolutional neural network.
By adopting the initialization operation of the convolutional neural network, the training speed of the feature extraction model can be increased, and the accuracy of feature extraction of the feature extraction model can be improved.
It will be appreciated that the initialization of the weights of the convolutional neural network affects the training of the feature extraction model. In one embodiment, when the initial weights of the convolutional neural network satisfy the above condition, the training effect is markedly improved.
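The initialization condition itself appears only as an image; from the named quantities (variance of W_l, input count n_l, arbitrary layer l) the sketch below assumes a He-style rule with variance 2/n_l, which should be read as an assumption rather than the patent's exact formula:

```python
import math
import torch.nn as nn

def init_weights(module: nn.Module) -> None:
    """Assumed He-style initialization: the weight variance of every convolutional layer l is set
    from its number of inputs n_l (the patent's exact condition is given only as an image)."""
    if isinstance(module, nn.Conv2d):
        n_l = module.in_channels * module.kernel_size[0] * module.kernel_size[1]
        nn.init.normal_(module.weight, mean=0.0, std=math.sqrt(2.0 / n_l))
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# usage: some_convolutional_net.apply(init_weights)
```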
S42: non-textured training samples were obtained.
S43: and inputting the training sample without the reticulate pattern into the initialized convolutional neural network to obtain a loss value generated in the training process.
Specifically, the loss value generated in the training process can be obtained from the process of training the convolutional neural network with the reticulate-pattern-free training samples.
S44: and updating network parameters of the convolutional neural network according to the loss value generated in the training process to obtain a feature extraction model.
Specifically, the method for updating the network parameters in this embodiment may specifically be a back propagation algorithm.
Steps S41-S44 provide a specific embodiment of training the feature extraction model. Initializing the weights of the convolutional neural network so that they satisfy the above condition speeds up the training of the feature extraction model and improves the accuracy of its feature extraction.
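A sketch of steps S42-S44 for the feature extraction model; the backbone, the identity-classification objective and the hyperparameters are assumptions, as the patent only specifies training a convolutional neural network on reticulate-pattern-free samples and updating its parameters by back-propagation:

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Illustrative CNN: convolutional backbone producing face image features, plus an identity
    classifier head used only during training."""
    def __init__(self, num_identities: int, feature_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 4 * 4, feature_dim),
        )
        self.classifier = nn.Linear(feature_dim, num_identities)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)   # the face image features used for similarity comparison

net = FeatureNet(num_identities=1000)            # number of identities is illustrative
# weights may be initialized with the He-style helper sketched above
criterion = nn.CrossEntropyLoss()                # assumed training objective; not fixed by the patent
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

def feature_train_step(clean_faces: torch.Tensor, identity_labels: torch.Tensor) -> float:
    """Steps S43-S44: compute the training loss on reticulate-pattern-free samples and update the
    network parameters by back-propagation."""
    optimizer.zero_grad()
    loss = criterion(net.classifier(net(clean_faces)), identity_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```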
In the embodiment of the invention, firstly, a face image with reticulate patterns and a face image to be compared without the reticulate patterns are obtained, reticulate pattern position information in the face image with the reticulate patterns is extracted by adopting a reticulate pattern extraction model, and the reticulate pattern position information and the face image with the reticulate patterns are input into a reticulate pattern removal model to obtain a reticulate pattern removal face image; then, respectively extracting the reticulate-pattern-removed face image features of the reticulate-pattern-removed face image and the face image features of the face image to be compared without reticulate patterns by adopting a feature extraction model; and finally, confirming the result of the face verification by calculating the similarity between the descreened face image characteristics and the face image characteristics to be compared. According to the embodiment of the invention, the reticulate patterns of the face image with the reticulate patterns can be accurately removed, and the accuracy of face verification is effectively improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Based on the face verification method provided in the embodiment, the embodiment of the invention further provides an embodiment of a device for realizing the steps and the method in the embodiment of the method.
Fig. 2 is a schematic block diagram of a face verification apparatus corresponding one-to-one to the face verification method in the embodiment. As shown in Fig. 2, the face verification apparatus includes a to-be-compared image acquisition module 10, a reticulate pattern position information extraction module 20, a descreened face image acquisition module 30, a descreened face image feature acquisition module 40, a to-be-compared face image feature acquisition module 50 and a verification module 60. The implementation functions of these modules correspond one-to-one to the steps of the face verification method in the embodiment; to avoid redundancy, they are not described in detail again here.
And the image to be compared acquisition module 10 is used for acquiring the face image with the reticulate pattern and the face image to be compared without the reticulate pattern.
And the reticulate pattern position information extraction module 20 is configured to extract reticulate pattern position information in the human face image with reticulate pattern by using a reticulate pattern extraction model, where the reticulate pattern extraction model is obtained based on pixel difference value training.
And the descreened face image acquisition module 30 is configured to input the reticulate pattern position information and the reticulate-patterned face image into a descreening model to obtain a descreened face image, where the descreening model is obtained by training a generative adversarial network.
And the descreening face image feature acquisition module 40 is configured to input the descreening face image into a feature extraction model to obtain descreening face image features, where the feature extraction model is obtained by using convolutional neural network training.
And the to-be-compared face image feature acquisition module 50 is used for inputting the to-be-compared face image without the reticulate pattern into the feature extraction model to obtain the to-be-compared face image features.
And the verification module 60 is configured to calculate a similarity between the descreened face image feature and the face image feature to be compared, and when the similarity is greater than a preset threshold, the face verification is successful.
Optionally, the face verification apparatus is further specifically configured to:
acquiring a reticulate pattern training sample set, wherein each reticulate pattern training sample in the reticulate pattern training sample set comprises a reticulate pattern human face training image of the same person and a corresponding human face training image without reticulate patterns;
reading pixel values in a face training image with reticulate patterns and a face training image without reticulate patterns, and normalizing the pixel values to be in a [0,1] interval;
subtracting, at each pixel distribution position, the normalized pixel value of the reticulate-pattern-free face training image from the normalized pixel value of the reticulate-patterned face training image, taking the absolute value of the difference to obtain a pixel difference value, setting pixel difference values smaller than a preset critical value to 0 and those not smaller than the preset critical value to 1, and obtaining label reticulate pattern position information;
calculating the loss of the deep neural network model in the training process through a loss function according to the output of the deep neural network model and the label reticulate pattern position information acquired in advance, and updating the network parameters of the deep neural network model with the loss to obtain the reticulate pattern extraction model, wherein the loss function, given as an image (Figure BDA0002189563430000161), is defined in terms of n, the total number of pixels, x_i, the i-th pixel value output by the deep neural network model, and y_i, the i-th pixel value of the label reticulate pattern position information.
Optionally, the face verification apparatus is further specifically configured to:
equal numbers of textured training samples and non-textured training samples were obtained.
And extracting the reticulate pattern position information of the reticulate pattern training sample by adopting a reticulate pattern extraction model.
And inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of the generating type countermeasure network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the reticulate pattern-free training sample.
Inputting the simulated face image and the training sample without the reticulate pattern into a discriminator of the generating type confrontation network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is a loss value caused by the generator in the training process, and the second discrimination loss value is a loss value caused by the discriminator in the training process.
And arithmetically adding the first generation loss value and the first judgment loss value to obtain a second generation loss value, and updating the network parameters of the generator by adopting the second generation loss value.
And updating the network parameters of the discriminator by adopting the second discrimination loss value, and obtaining the descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
Optionally, the face verification apparatus is further specifically configured to:
equal numbers of textured training samples and non-textured training samples were obtained.
And extracting the reticulate pattern position information of the reticulate pattern training sample by adopting a reticulate pattern extraction model.
And inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of the generating type countermeasure network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the reticulate pattern-free training sample.
Inputting the simulated face image and the training sample without the reticulate pattern into a discriminator of the generating type confrontation network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is the loss of the generator in the training process, and the second discrimination loss value is the loss of the discriminator in the training process.
And inputting the simulated face image into the feature extraction model, and obtaining a simulation loss value according to the simulated face features extracted by the feature extraction model and the features of the training sample without the reticulate pattern extracted by the extracted feature extraction model.
And arithmetically adding the simulation loss value, the first generation loss value and the first discrimination loss value to obtain a third generation loss value.
Arithmetically adding the simulation loss value and the second discrimination loss value to obtain a third discrimination loss value;
updating the network parameters of the generator according to the third generated loss value.
And updating the network parameters of the discriminator according to the third discrimination loss value, and obtaining the descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
Optionally, the face verification apparatus is further specifically configured to:
initializing a convolutional neural network, wherein the initialized weights of the convolutional neural network satisfy
Figure BDA0002189563430000181
Figure BDA0002189563430000182
Wherein n islRepresents the number of samples input at the l-th layer of the convolutional neural network, S () represents the variance operation, WlRepresents the weight of the l layer of the convolutional neural network,
Figure BDA0002189563430000183
denoted arbitrary, l denotes the l-th layer of the convolutional neural network.
Acquiring a training sample without the reticulate pattern.
And inputting the training sample without the reticulate pattern into the initialized convolutional neural network to obtain a loss value generated in the training process.
And updating network parameters of the convolutional neural network according to the loss value generated in the training process to obtain a feature extraction model.
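One possible reading of this initialization and training procedure is sketched below, treating $S(W_l) = 2/n_l$ as a He-style variance condition with $n_l$ interpreted as the fan-in of layer $l$. The layer sizes, the identity-classification head and the cross-entropy loss are illustrative assumptions not specified in the text.

```python
import math
import torch
import torch.nn as nn

def init_he(module):
    # Draw weights so that S(W_l) = 2 / n_l, with n_l taken as the number of inputs
    # feeding one output unit of layer l (an interpretation of the text).
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        if isinstance(module, nn.Conv2d):
            n_l = module.in_channels * module.kernel_size[0] * module.kernel_size[1]
        else:
            n_l = module.in_features
        nn.init.normal_(module.weight, mean=0.0, std=math.sqrt(2.0 / n_l))
        if module.bias is not None:
            nn.init.zeros_(module.bias)

feature_extractor = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 128),                   # 128-dimensional face feature (assumed size)
)
feature_extractor.apply(init_he)

# Training on samples without reticulate pattern; an identity-classification head with
# cross-entropy loss is one common choice (assumed here, not specified by the text).
num_identities = 1000
head = nn.Linear(128, num_identities)
opt = torch.optim.SGD(list(feature_extractor.parameters()) + list(head.parameters()), lr=0.01)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    logits = head(feature_extractor(images))
    loss = criterion(logits, labels)       # loss value generated in the training process
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```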
In the embodiment of the invention, firstly, a face image with reticulate patterns and a face image to be compared without reticulate patterns are acquired, reticulate pattern position information in the face image with the reticulate patterns is extracted by adopting a reticulate pattern extraction model, and the reticulate pattern position information and the face image with the reticulate patterns are input into a descreening model to obtain a descreened face image; then, the descreened face image features of the descreened face image and the face image features to be compared of the face image to be compared without reticulate patterns are respectively extracted by adopting a feature extraction model; finally, the result of the face verification is confirmed by calculating the similarity between the descreened face image features and the face image features to be compared. According to the embodiment of the invention, the reticulate patterns in the face image with reticulate patterns can be accurately removed, and the accuracy of face verification is effectively improved.
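The verification flow summarized above can be sketched end to end as follows. The three trained models are treated as opaque callables, the generator input is assumed to be the textured face concatenated with the reticulate pattern position map, a single-image batch is assumed, and the 0.8 threshold is a placeholder for the preset threshold rather than a value taken from the text.

```python
import torch
import torch.nn.functional as F

def verify(textured_face, clean_face, mesh_extractor, descreen_generator,
           feature_extractor, threshold=0.8):
    """Return (verified, similarity) for one textured face and one face to be compared."""
    with torch.no_grad():
        # 1. Extract reticulate pattern position information from the textured face.
        mesh_mask = mesh_extractor(textured_face)
        # 2. Feed the position map and the textured face to the descreening model.
        descreened = descreen_generator(torch.cat([textured_face, mesh_mask], dim=1))
        # 3. Extract features of the descreened face and of the face to be compared.
        f1 = feature_extractor(descreened)
        f2 = feature_extractor(clean_face)
        # 4. Cosine similarity against a preset threshold decides the verification result.
        similarity = F.cosine_similarity(f1, f2, dim=1)
    return (similarity > threshold).item(), similarity.item()
```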
The present embodiment provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the face verification method in the embodiments is implemented; to avoid repetition, the details are not repeated here. Alternatively, when executed by the processor, the computer program implements the functions of each module/unit in the face verification apparatus in the embodiments, which are likewise not repeated here.
Fig. 3 is a schematic diagram of a computer device according to an embodiment of the present invention. As shown in fig. 3, the computer device 70 of this embodiment includes a processor 71, a memory 72, and a computer program 73 stored in the memory 72 and executable on the processor 71. When executed by the processor 71, the computer program 73 implements the face verification method in the embodiment; to avoid repetition, details are not repeated herein. Alternatively, when executed by the processor 71, the computer program 73 implements the functions of each module/unit in the face verification apparatus in the embodiment, which are likewise not repeated here.
The computer device 70 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The computer device 70 may include, but is not limited to, the processor 71 and the memory 72. Those skilled in the art will appreciate that fig. 3 is merely an example of the computer device 70 and is not intended to limit it; the computer device 70 may include more or fewer components than shown, some components may be combined, or different components may be used. For example, the computer device may also include input/output devices, network access devices, buses, and the like.
The Processor 71 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 72 may be an internal storage unit of the computer device 70, such as a hard disk or a memory of the computer device 70. The memory 72 may also be an external storage device of the computer device 70, such as a plug-in hard disk provided on the computer device 70, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 72 may also include both internal and external storage units of the computer device 70. The memory 72 is used to store computer programs and other programs and data required by the computer device. The memory 72 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A face verification method, comprising:
acquiring a human face image with reticulate patterns and a human face image to be compared without the reticulate patterns;
extracting reticulate pattern position information in the human face image with the reticulate pattern by adopting a reticulate pattern extraction model, wherein the reticulate pattern extraction model is obtained based on pixel difference value training;
inputting the reticulate pattern position information and the human face image with the reticulate pattern into a descreening model to obtain a descreened face image, wherein the descreening model is obtained by adopting generative adversarial network training;
inputting the descreened face image into a feature extraction model to obtain descreened face image features, wherein the feature extraction model is obtained by adopting convolutional neural network training;
inputting the human face image to be compared without the reticulate pattern into the feature extraction model to obtain the features of the human face image to be compared;
and calculating the similarity between the descreened face image features and the face image features to be compared, and when the similarity is greater than a preset threshold value, successfully verifying the face.
2. The method of claim 1, wherein before the extracting the reticulate pattern position information in the human face image with the reticulate pattern by adopting the reticulate pattern extraction model, the method further comprises:
acquiring a reticulate pattern training sample set, wherein each reticulate pattern training sample in the reticulate pattern training sample set comprises a reticulate pattern human face training image of the same person and a corresponding human face training image without reticulate patterns;
reading pixel values in the face training image with the reticulate pattern and the face training image without the reticulate pattern, and normalizing the pixel values to be in a [0,1] interval;
subtracting, position by position, the normalized pixel values of the face training image without the reticulate pattern from the normalized pixel values of the face training image with the reticulate pattern, taking the absolute value of each difference to obtain pixel difference values, setting the pixel difference values which are smaller than a preset critical value to 0 and the pixel difference values which are not smaller than the preset critical value to 1, and thereby obtaining label reticulate pattern position information;
calculating the loss generated by the deep neural network model in the training process through a loss function according to the output of the deep neural network model and the label reticulate pattern position information acquired in advance, and updating the network parameters of the deep neural network model by using the loss to obtain the reticulate pattern extraction model, wherein the loss function is a pixel-wise function of the model output and the label, where $n$ denotes the total number of pixels, $x_i$ denotes the $i$-th pixel value output by the deep neural network model, and $y_i$ denotes the $i$-th pixel value of the label reticulate pattern position information.
3. The method of claim 1, wherein the generative adversarial network comprises a generator and a discriminator, and before the inputting the reticulate pattern position information and the human face image with the reticulate pattern into the descreening model to obtain the descreened face image, the method further comprises:
acquiring equal numbers of reticulate pattern training samples and training samples without the reticulate pattern;
extracting the reticulate pattern position information of the training sample with the reticulate pattern by adopting the reticulate pattern extraction model;
inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of the generative adversarial network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the training sample without the reticulate pattern;
inputting the simulated face image and the training sample without the reticulate pattern into a discriminator of the generative adversarial network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is a loss value caused by the generator in the training process, and the second discrimination loss value is a loss value caused by the discriminator in the training process;
arithmetically adding the first generation loss value and the first discrimination loss value to obtain a second generation loss value, and updating the network parameters of the generator by adopting the second generation loss value;
and updating the network parameters of the discriminator by adopting the second discrimination loss value, and obtaining the descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
4. The method of claim 1, wherein the generative adversarial network comprises a generator and a discriminator, and before the inputting the reticulate pattern position information and the human face image with the reticulate pattern into the descreening model to obtain the descreened face image, the method further comprises:
acquiring equal numbers of reticulate pattern training samples and training samples without the reticulate pattern;
extracting the reticulate pattern position information of the training sample with the reticulate pattern by adopting the reticulate pattern extraction model;
inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of the generative adversarial network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the training sample without the reticulate pattern;
inputting the simulated face image and the training sample without the reticulate pattern into a discriminator of the generative adversarial network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is the loss caused by the generator in the training process, and the second discrimination loss value is the loss caused by the discriminator in the training process;
inputting the simulated face image into the feature extraction model, and obtaining a simulation loss value according to the simulated face features extracted by the feature extraction model and the features of the training sample without the reticulate pattern extracted by the feature extraction model;
arithmetically adding the simulation loss value, the first generation loss value and the first discrimination loss value to obtain a third generation loss value;
arithmetically adding the simulation loss value and the second discrimination loss value to obtain a third discrimination loss value;
updating a network parameter of the generator according to the third generation loss value;
and updating the network parameters of the discriminator according to the third discrimination loss value, and obtaining a descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
5. The method according to any one of claims 1-4, wherein before the inputting the descreened face image into the feature extraction model to obtain the descreened face image features, the method further comprises:
initializing a convolutional neural network, wherein the initialized weights of the convolutional neural network satisfy
$\forall l,\ S(W_l) = \frac{2}{n_l}$
wherein $n_l$ denotes the number of inputs at the $l$-th layer of the convolutional neural network, $S(\cdot)$ denotes the variance operation, $W_l$ denotes the weights of the $l$-th layer of the convolutional neural network, $\forall$ denotes "for any", and $l$ denotes the $l$-th layer of the convolutional neural network;
acquiring a training sample without reticulate patterns;
inputting the training sample without the reticulate pattern into the initialized convolutional neural network to obtain a loss value generated in the training process;
and updating the network parameters of the convolutional neural network according to the loss value generated in the training process to obtain a feature extraction model.
6. A face verification apparatus, comprising:
the comparison image acquisition module is used for acquiring a face image with reticulate patterns and a face image to be compared without the reticulate patterns;
the reticulate pattern position information extraction module is used for extracting reticulate pattern position information in the human face image with the reticulate pattern by adopting a reticulate pattern extraction model, wherein the reticulate pattern extraction model is obtained based on pixel difference value training;
the descreened face image acquisition module is used for inputting the reticulate pattern position information and the face image with the reticulate pattern into a descreening model to obtain a descreened face image, wherein the descreening model is obtained by adopting generative adversarial network training;
the descreened face image feature acquisition module is used for inputting the descreened face image into a feature extraction model to obtain descreened face image features, wherein the feature extraction model is obtained by adopting convolutional neural network training;
the to-be-compared face image feature acquisition module is used for inputting the to-be-compared face image without the reticulate pattern into the feature extraction model to obtain the to-be-compared face image features;
and the verification module is used for calculating the similarity between the descreened face image features and the face image features to be compared, and when the similarity is greater than a preset threshold value, the face verification is successful.
7. The apparatus of claim 6, wherein the apparatus is further specifically configured to:
acquiring a reticulate pattern training sample set, wherein each reticulate pattern training sample in the reticulate pattern training sample set comprises a reticulate pattern human face training image of the same person and a corresponding human face training image without reticulate patterns;
reading pixel values in the face training image with the reticulate pattern and the face training image without the reticulate pattern, and normalizing the pixel values to be in a [0,1] interval;
subtracting, position by position, the normalized pixel values of the face training image without the reticulate pattern from the normalized pixel values of the face training image with the reticulate pattern, taking the absolute value of each difference to obtain pixel difference values, setting the pixel difference values which are smaller than a preset critical value to 0 and the pixel difference values which are not smaller than the preset critical value to 1, and thereby obtaining label reticulate pattern position information;
calculating the loss generated by the deep neural network model in the training process through a loss function according to the output of the deep neural network model and the label reticulate pattern position information acquired in advance, and updating the network parameters of the deep neural network model by using the loss to obtain the reticulate pattern extraction model, wherein the loss function is expressed by the formula of Figure FDA0002189563420000051, where $n$ denotes the total number of pixels, $x_i$ denotes the $i$-th pixel value output by the deep neural network model, and $y_i$ denotes the $i$-th pixel value of the label reticulate pattern position information.
8. The apparatus of claim 6, wherein the apparatus is further specifically configured to:
acquiring equal numbers of reticulate pattern training samples and training samples without the reticulate pattern;
extracting the reticulate pattern position information of the training sample with the reticulate pattern by adopting the reticulate pattern extraction model;
inputting the reticulate pattern position information of the reticulate pattern training sample and the reticulate pattern training sample into a generator of a generative adversarial network to generate a simulated face image, and obtaining a first generation loss value according to the simulated face image and the training sample without the reticulate pattern;
inputting the simulated face image and the training sample without the reticulate pattern into a discriminator of the generative adversarial network to obtain a discrimination result, and obtaining a first discrimination loss value and a second discrimination loss value according to the discrimination result, wherein the first discrimination loss value is the loss caused by the generator in the training process, and the second discrimination loss value is the loss caused by the discriminator in the training process;
inputting the simulated face image into the feature extraction model, and obtaining a simulation loss value according to the simulated face features extracted by the feature extraction model and the features of the training sample without the reticulate pattern extracted by the feature extraction model;
arithmetically adding the simulation loss value, the first generation loss value and the first discrimination loss value to obtain a third generation loss value;
arithmetically adding the simulation loss value and the second discrimination loss value to obtain a third discrimination loss value;
updating a network parameter of the generator according to the third generation loss value;
and updating the network parameters of the discriminator according to the third discrimination loss value, and obtaining a descreening model according to the updated network parameters of the generator and the updated network parameters of the discriminator.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the face verification method according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the face verification method according to any one of claims 1 to 5.
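For the pixel-difference labeling and reticulate pattern extraction model training recited in claims 2 and 7 above, a minimal PyTorch sketch is given here. The small segmentation-style network, the binary cross-entropy loss, the 0.1 critical value and the averaging over colour channels are assumptions; the claims specify only the pixel-difference labeling and a loss computed between the model output and the label reticulate pattern position information.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the deep neural network of the reticulate pattern extraction model.
mesh_extractor = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),   # per-pixel mesh probability in [0, 1]
)
opt = torch.optim.Adam(mesh_extractor.parameters(), lr=1e-3)
criterion = nn.BCELoss()                            # assumed loss; the claimed formula is in a figure

def make_label(textured, clean, threshold=0.1):
    """Build label reticulate pattern position information from a paired training sample.

    textured, clean: pixel values already normalized to [0, 1], shape (N, 3, H, W).
    threshold: the preset critical value (0.1 is a placeholder, not from the text).
    """
    diff = (textured - clean).abs()                 # position-wise absolute pixel difference
    diff = diff.mean(dim=1, keepdim=True)           # collapse channels to one map (assumption)
    return (diff >= threshold).float()              # below threshold -> 0, otherwise -> 1

def train_step(textured, clean):
    label = make_label(textured, clean)
    pred = mesh_extractor(textured)
    loss = criterion(pred, label)                   # loss between model output and label positions
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```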
CN201910827470.8A 2019-09-03 2019-09-03 Face verification method, device, computer equipment and storage medium Active CN110765843B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910827470.8A CN110765843B (en) 2019-09-03 2019-09-03 Face verification method, device, computer equipment and storage medium
PCT/CN2019/117774 WO2021042544A1 (en) 2019-09-03 2019-11-13 Facial verification method and apparatus based on mesh removal model, and computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910827470.8A CN110765843B (en) 2019-09-03 2019-09-03 Face verification method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110765843A true CN110765843A (en) 2020-02-07
CN110765843B CN110765843B (en) 2023-09-22

Family

ID=69330204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910827470.8A Active CN110765843B (en) 2019-09-03 2019-09-03 Face verification method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN110765843B (en)
WO (1) WO2021042544A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612799A (en) * 2020-05-15 2020-09-01 中南大学 Face data pair-oriented incomplete reticulate pattern face repairing method and system and storage medium
CN113486688A (en) * 2020-05-27 2021-10-08 海信集团有限公司 Face recognition method and intelligent device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095219A (en) * 2021-04-12 2021-07-09 中国工商银行股份有限公司 Reticulate pattern face recognition method and device
CN113469898B (en) * 2021-06-02 2024-07-19 北京邮电大学 Image de-distortion method based on deep learning and related equipment
CN114299590A (en) * 2021-12-31 2022-04-08 中国科学技术大学 Training method of face completion model, face completion method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930797A (en) * 2016-04-21 2016-09-07 腾讯科技(深圳)有限公司 Face verification method and device
CN108734673A (en) * 2018-04-20 2018-11-02 平安科技(深圳)有限公司 Descreening systematic training method, descreening method, apparatus, equipment and medium
US20190108396A1 (en) * 2017-10-11 2019-04-11 Aquifi, Inc. Systems and methods for object identification

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416343B (en) * 2018-06-14 2020-12-22 北京远鉴信息技术有限公司 Face image recognition method and device
CN109871755A (en) * 2019-01-09 2019-06-11 中国平安人寿保险股份有限公司 A kind of auth method based on recognition of face
CN110032931B (en) * 2019-03-01 2023-06-13 创新先进技术有限公司 Method and device for generating countermeasure network training and removing reticulation and electronic equipment

Also Published As

Publication number Publication date
CN110765843B (en) 2023-09-22
WO2021042544A1 (en) 2021-03-11

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant