
Face verification method, device and equipment, and readable storage medium

Info

Publication number: CN111598051A (application CN202010547280.3A); granted as CN111598051B
Authority: CN (China)
Prior art keywords: image, face, modified, feature information, sample
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 尹邦杰, 姚太平, 吴双, 孟嘉, 丁守鸿, 李季檩
Original and current assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd; priority to CN202010547280.3A


Classifications

    • G06V40/168 Human faces: feature extraction; face representation
    • G06F18/253 Pattern recognition: fusion techniques of extracted features
    • G06N3/045 Neural networks: combinations of networks
    • G06N3/08 Neural networks: learning methods
    • G06V10/25 Image preprocessing: determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/443 Local feature extraction by matching or filtering


Abstract

The embodiment of the application discloses a face verification method, a face verification device, face verification equipment and a readable storage medium, relating to identity authentication. The embodiment of the application can extract the face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image; perform feature coding on the part image to obtain multilayer feature information corresponding to the part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer; perform feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image; fuse the part modification image and the extracted face image to obtain a fused modified face image; match the fused modified face image with a face verification image; and determine a face verification result according to the matching result of the fused modified face image and the face verification image.

Description

Face verification method, device and equipment and readable storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a face verification method, a face verification device, face verification equipment and a readable storage medium.
Background
Because face verification technology (such as face recognition) is accurate, secure and convenient, it is widely applied across industries, for example in face-verification unlocking, face-verification login, remote face verification, face-scan access control, offline face-scan payment, automatic face-scan clearance and the like.
In the process of face verification of a user, the passing rate of the face verification is related to the features of the main parts of the user's face.
Disclosure of Invention
The embodiment of the application provides a face verification method, a face verification device, face verification equipment and a readable storage medium, which can improve the passing rate of face verification.
The embodiment of the application provides a face verification method, which comprises the following steps:
carrying out image extraction on a face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image;
performing feature coding on the part image to obtain multilayer feature information corresponding to the part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer;
performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, wherein the part modification image is an image obtained by modifying the face part;
fusing the part modification image and the extracted face image to obtain a fused modification face image;
matching the fused modified face image with a face verification image;
and determining a face verification result according to the matching result of the fused modified face image and the face verification image.
Correspondingly, the embodiment of the present application further provides a face verification apparatus, including:
the face image extraction unit is used for extracting a face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image;
the encoding unit is used for carrying out feature encoding on the part image to obtain multilayer feature information corresponding to the part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer;
a decoding unit configured to perform feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, where the part modification image is an image in which the face part is modified;
the fusion unit is used for fusing the part modification image and the extracted face image to obtain a fused modification face image;
the matching unit is used for matching the fused modified face image with a face verification image;
and the determining unit is used for determining the face verification result according to the matching result of the fused modified face image and the face verification image.
In some embodiments, the extraction unit is configured to:
carrying out key point detection on an initial face part in a face image to obtain a part key point position corresponding to the initial face part;
and performing image extraction on the face part in the face image based on the position of the part key point to obtain a part image corresponding to the face part and an extracted face image of the face image.
In some embodiments, the fusion unit comprises:
a fusion subunit, configured to fuse the part-modified image and the extracted face image to obtain an initial fused modified face image;
and the correcting subunit is used for correcting the initial fused modified face image to obtain a fused modified face image.
In some embodiments, the correction subunit is configured to:
calculating a pixel gradient of the part image and a pixel gradient of the extracted face image based on the pixels of the part image and the pixels of the extracted face image;
calculating a pixel gradient of the initial post-fusion modified face image based on the pixel gradient of the part image and the pixel gradient of the extracted face image;
and adjusting the pixels of the initially fused modified face image according to the pixel gradient of the initially fused modified face image and the pixel gradient of the part image to obtain the fused modified face image.
In some embodiments, the matching unit is configured to:
respectively extracting modified face features corresponding to the modified face in the fused modified face image and verification face features corresponding to the verification face in the face verification image;
calculating the similarity of the fused modified face image and a face verification image based on the modified face features and the verification face features;
and matching the fused modified face image with a face verification image based on the similarity.
In some embodiments, the encoding unit is configured to:
performing multilayer convolution processing on the part image to obtain output layer characteristic information corresponding to the part image and intermediate layer characteristic information except the output layer;
the decoding unit is configured to:
performing multilayer deconvolution processing on the output layer feature information to obtain multilayer fused feature information corresponding to the part image, wherein each layer of fused feature information is obtained by fusing the feature information output by the adjacent deconvolution layer and the intermediate layer feature information;
and determining a part modification image corresponding to the part image based on the multi-layer fused feature information.
In some embodiments, the part image includes an unmodified part image, and the encoding unit is configured to:
performing feature coding on the unmodified part image through a coding module of a first preset generator in a first preset generative adversarial network to obtain multilayer feature information corresponding to the unmodified part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer;
the decoding unit is configured to:
and performing feature decoding on the output layer feature information based on the intermediate layer feature information through a decoding module of the first preset generator to generate a modified part image corresponding to the unmodified part image.
In some embodiments, the face verification device further comprises a first training unit comprising:
the sample acquiring subunit is used for acquiring a first sample face image, a second sample face image and a sample modified face image;
a sample extracting subunit, configured to perform image extraction on the unmodified face part of the first sample face image and the modified face part of the sample modified face image, respectively, to obtain a sample unmodified part image corresponding to the unmodified face part, a sample extracted face image of the first sample face image, and a sample modified part image corresponding to the modified face part;
the sample generation subunit is used for generating a predicted modified part image corresponding to the sample unmodified part image by using a first generator of a first generative adversarial network;
and the sample training subunit is used for training the first generative adversarial network based on the predicted modified part image, the sample extracted face image and the second sample face image to obtain a first preset generative adversarial network.
In some embodiments, the first generative adversarial network comprises a first discriminator, and the sample training subunit is configured to:
judging the predicted modified part image and the sample modified part image by using the first discriminator of the first generative adversarial network to obtain a discrimination result;
fusing the predicted modified part image with the sample extracted face image to obtain a sample fused modified face image;
matching the sample fused modified face image with the second sample face image to obtain a sample matching result;
and training the first generative adversarial network based on the sample matching result and the discrimination result to obtain a first preset generative adversarial network.
In some embodiments, the sample training subunit is to:
acquiring similarity loss based on the similarity between the sample fused modified face image and the second sample face image;
acquiring the discrimination loss of the first discriminator based on the discrimination results of the predicted modified part image and the sample modified part image;
and adjusting parameters of the first generative adversarial network according to the similarity loss and the discrimination loss to obtain a first preset generative adversarial network.
In some embodiments, the sample training subunit is to:
acquiring a first expected probability of the predicted modified part image and a second expected probability of the sample modified part image;
respectively calculating a first actual probability of the predicted modified part image and a second actual probability of the sample modified part image;
and calculating the discrimination loss of the first discriminator according to the first expected probability, the second expected probability, the first actual probability and the second actual probability.
In some embodiments, the part image further comprises a modified part image, and the encoding unit is further configured to:
performing feature coding on the modified part image through a coding module of a second preset generator in a second preset generative adversarial network to obtain multilayer feature information corresponding to the modified part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer;
the decoding unit is further configured to:
performing feature decoding on the output layer feature information based on the intermediate layer feature information through a decoding module of the second preset generator to generate an unmodified part image corresponding to the modified part image.
In some embodiments, the face verification apparatus further comprises a second training unit for:
acquiring a sample target modified part image corresponding to the sample target modified face part in the sample modified face image, and a sample target unmodified part image corresponding to the sample target unmodified face part in the sample face image;
generating a predicted target unmodified part image corresponding to the sample target modified part image by using a second generator of a second generative adversarial network;
discriminating the predicted target unmodified part image and the sample target unmodified part image by using a second discriminator of the second generative adversarial network to obtain a target discrimination result;
and adjusting parameters of the second generative adversarial network according to the target discrimination result to obtain a second preset generative adversarial network.
Accordingly, the present application further provides a computer device, including a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the steps of any one of the face verification methods provided in the embodiments of the present application.
In addition, the present application also provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps in any one of the face verification methods provided by the present application.
The embodiment of the present application can perform image extraction on a face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image; perform feature coding on the part image to obtain multilayer feature information corresponding to the part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer; perform feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, the part modification image being an image in which the face part is modified; fuse the part modification image and the extracted face image to obtain a fused modified face image; match the fused modified face image with a face verification image; and determine a face verification result according to the matching result of the fused modified face image and the face verification image. According to this scheme, the part modification image corresponding to the face part in the face image can be generated based on the part image corresponding to that face part, the part modification image and the extracted face image of the face image can be fused to obtain the fused modified face image, and the fused modified face image can be matched with the face verification image; when the similarity between the fused modified face image and the face verification image is greater than a preset similarity threshold, it can be determined that the face verification passes, thereby improving the passing rate of face verification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
Fig. 1a is a scene schematic diagram of a face verification method provided in an embodiment of the present application;
FIG. 1b is a schematic flow chart of a face verification method according to an embodiment of the present application;
fig. 2a is another schematic flow chart of a face verification method provided in an embodiment of the present application;
FIG. 2b is a schematic diagram of a training architecture of a generative adversarial network for applying makeup according to an embodiment of the present application;
FIG. 2c is a schematic diagram of a training architecture of a generative adversarial network for removing makeup according to an embodiment of the present application;
FIG. 3a is a schematic structural diagram of a face authentication device according to an embodiment of the present application;
FIG. 3b is a schematic structural diagram of another face verification apparatus provided in the embodiments of the present application;
FIG. 3c is a schematic structural diagram of another face verification apparatus provided in the embodiments of the present application;
FIG. 3d is a schematic structural diagram of another face verification apparatus provided in the embodiments of the present application;
fig. 4 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the application provides a face verification method, a face verification device, computer equipment and a computer-readable storage medium. Specifically, the face verification method in the embodiment of the present application may be executed by a computer device, which may be a terminal or a server. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in this application.
The face verification scheme provided by the embodiment of the application relates to Computer Vision technology (CV) of artificial intelligence. The face verification method comprises the steps of generating a part modification image corresponding to a face part in a face image through an artificial intelligence computer vision technology, fusing the part modification image with an extracted face image of the face image to obtain a fused modification face image, matching the fused modification face image with a face verification image, and determining a face verification result based on a matching result.
Computer vision is a science that studies how to make machines "see": it uses cameras and computers in place of human eyes to identify, track and measure targets, and further performs graphics processing so that the result becomes an image better suited for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
For example, referring to fig. 1a, taking as an example that the face verification apparatus is integrated in a computer device, the computer device may perform image extraction on a face part in a face image, to obtain a part image corresponding to the face part and an extracted face image of the face image; performing feature coding on the part image to obtain multilayer feature information corresponding to the part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer; performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, the part modification image being an image in which the face part is modified; fusing the part modification image and the extracted face image to obtain a fused modification face image; matching the fused modified face image with a face verification image; and determining a face verification result according to the matching result of the fused modified face image and the face verification image.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
In the present embodiment, the description will be made from the perspective of a face authentication apparatus, which may be specifically integrated in a computer device, for example, the face authentication apparatus may be a physical apparatus provided in the computer device, or the face authentication apparatus may be integrated in the computer device in the form of a client. The computer device may be a server or a terminal.
As shown in fig. 1b, the specific flow of the face verification method may be as follows:
101. Image extraction is performed on a face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image.
The face image is an image that includes the parts of the face, such as the eyes, nose, mouth, ears, or eyebrows.
The image extraction of the face part in the face image may be image cutting performed on the part of the face in the image, such as the eyes, eyebrows, or mouth, so as to obtain a corresponding part image, such as the eye part, nose part, or mouth part; a part image corresponding to the part of the face may be extracted from the face image without destroying the face image. The portion image may include a portion image corresponding to one or more facial parts. After the face part in the face image is extracted, in addition to obtaining a part image corresponding to the face part, the face image after the part image is extracted is used as an extracted face image of the face image, wherein the extracted face image may be a face image left after the original face image is subjected to image cutting, or may be the original face image.
In an embodiment, the image extraction of the face part in the face image may be performed by detecting a plurality of key points representing each part of the face, for example, by using a face registration algorithm, locating key point positions of parts on the face, such as key point coordinates of five sense organs of the face, and performing image extraction of the face part in the face image based on the located key point positions of the parts, specifically, the step "performing image extraction of the face part in the face image to obtain a part image corresponding to the face part and an extracted face image of the face image" may include:
carrying out key point detection on an initial face part in the face image to obtain a part key point position corresponding to the initial face part;
and performing image extraction on the face part in the face image based on the position of the part key point to obtain a part image corresponding to the face part and an extracted face image of the face image.
The face registration algorithm may calculate the key point positions of a plurality of key points representing each part of the face in the face image, where the number of the key points may be a preset fixed value, and may include, for example, 5 points, 68 points, 90 points, and the like, and the number of the key points may be set according to the requirements of practical applications, which is not described herein again.
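As an illustration of this keypoint-based extraction, the following minimal Python sketch crops one face part from the detected landmarks; the detect_landmarks() detector, the 68-point indexing convention and the margin value are assumptions for the example, not details fixed by this embodiment.

```python
import numpy as np

def extract_part(face_image, landmarks, part_indices, margin=10):
    """Crop one face part; return (part_image, extracted_face_image, box)."""
    pts = landmarks[part_indices]                         # keypoints of this part
    x0, y0 = np.maximum(pts.min(axis=0).astype(int) - margin, 0)
    x1, y1 = pts.max(axis=0).astype(int) + margin         # bounding box with margin
    part_image = face_image[y0:y1, x0:x1].copy()          # the part image
    extracted = face_image.copy()                         # face image left after extraction
    extracted[y0:y1, x0:x1] = 0                           # cut the part region out
    return part_image, extracted, (x0, y0, x1, y1)

# Usage with a hypothetical 68-point detector (indices 36-47 cover both eyes):
# landmarks = detect_landmarks(face_image)                # shape (68, 2), (x, y) order
# eye_img, rest, box = extract_part(face_image, landmarks, list(range(36, 48)))
```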
The face image may be acquired by a local (i.e. the face verification apparatus) camera component, such as a camera, or may be acquired by receiving a face image sent by another device, such as another terminal, and so on.
102. And performing feature coding on the part image to obtain multi-layer feature information corresponding to the part image, wherein the multi-layer feature information comprises output layer feature information and intermediate layer feature information except the output layer.
In the embodiment of the present application, the part image may be encoded and decoded based on a Convolutional Neural Network (CNN), so as to generate a part modified image corresponding to the part image.
In an embodiment, in order to improve the accuracy and efficiency of feature extraction so that the generated part modification image is more realistic, the part image may be feature-encoded by an encoding module (Encoder) of a Generator in a Generative Adversarial Network (GAN) to obtain multi-layer feature information corresponding to the part image, such as output layer feature information and intermediate layer feature information except the output layer. Specifically, the step of "performing feature coding on the part image to obtain multi-layer feature information corresponding to the part image" may include:
performing multilayer convolution processing on the part image to obtain output layer characteristic information corresponding to the part image and intermediate layer characteristic information except the output layer;
then, at this time, the step 103 "feature-decoding the output layer feature information based on the intermediate layer feature information to generate a region modification image corresponding to the region image" may include:
performing multilayer deconvolution processing on the output layer feature information to obtain multilayer fused feature information corresponding to the part image, wherein each layer of fused feature information is obtained by fusing the feature information output by the adjacent deconvolution layer and the intermediate layer feature information;
and determining a part modification image corresponding to the part image based on the multi-layer fused feature information.
A generative adversarial network (GAN) is a neural network model mainly composed of a Generator and a Discriminator. The generator continually learns the probability distribution of the real data in the training set, aiming to convert input random noise into images that can pass for real (the more similar the generated images are to those in the training set, the better). The discriminator judges whether an image produced by the generator is real, aiming to distinguish the generator's fake images from the true images in the training set. Through this adversarial training, the generator can eventually produce images that are hard to tell from real ones.
The multi-layer deconvolution processing may be performed on the output layer feature information through a decoding module (Decoder) of a generator in the generative adversarial network, so as to obtain multi-layer fused feature information corresponding to the part image, where each layer of fused feature information is obtained by fusing the feature information output by the adjacent deconvolution layer with the intermediate layer feature information.
The encoding module comprises one or more convolution layers and is used for carrying out multilayer convolution operation on an input part image, namely carrying out feature extraction on the part image so as to obtain semantic features corresponding to the part image; the decoding module corresponds to the encoding module and comprises one or more deconvolution layers, and multilayer fused feature information corresponding to the part image is obtained by performing multilayer deconvolution operation on the output layer feature information in the extracted multilayer feature information, namely for each deconvolution layer, feature information output by the adjacent deconvolution layer is fused with the intermediate layer feature information, so that fused feature information of each layer is obtained.
In the generative adversarial network, the multi-layer feature information may be expressed as multi-layer feature images, that is, an output layer feature image corresponding to the output layer and intermediate layer feature images corresponding to the intermediate layers; likewise, the multi-layer fused feature information corresponding to the part image may be expressed as multi-layer fused feature images. The part modification image corresponding to the part image can be determined based on the multi-layer fused feature information, such as the fused feature images; for example, the fused feature image finally output by the decoding module can be used as the part modification image corresponding to the part image.
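As an illustration of the encoder-decoder structure described above, the following PyTorch sketch shows a small U-Net-style generator in which each decoder stage fuses the adjacent deconvolution output with the matching intermediate-layer features via channel concatenation; the layer count and channel widths are assumptions, since the embodiment does not fix them.

```python
import torch
import torch.nn as nn

class PartGenerator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # Encoding module: multilayer convolution (feature coding).
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc3 = nn.Sequential(nn.Conv2d(ch * 2, ch * 4, 4, 2, 1), nn.LeakyReLU(0.2))
        # Decoding module: multilayer deconvolution (feature decoding).
        self.dec3 = nn.Sequential(nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(ch * 4, ch, 4, 2, 1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(ch * 2, 3, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        f1 = self.enc1(x)                           # intermediate layer feature information
        f2 = self.enc2(f1)                          # intermediate layer feature information
        f3 = self.enc3(f2)                          # output layer feature information
        d3 = self.dec3(f3)                          # deconvolution of output layer features
        d2 = self.dec2(torch.cat([d3, f2], dim=1))  # fuse with adjacent intermediate layer
        return self.dec1(torch.cat([d2, f1], dim=1))  # final fused features -> part image

# g = PartGenerator()
# y = g(torch.randn(1, 3, 64, 64))   # y: (1, 3, 64, 64) part modification image
```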
103. Feature decoding is performed on the output layer feature information based on the intermediate layer feature information to generate a part-modified image corresponding to a part image, the part-modified image being an image in which a face part is modified.
The part modification image is an image obtained by modifying the face part. It may be an image of the face part after makeup is applied, or an image of the face part after makeup is removed; that is, if the part image corresponds to an un-made-up (makeup-removed) face part, the part modification image corresponds to that face part after makeup is applied; conversely, if the part image corresponds to a made-up face part, the part modification image corresponds to that face part after makeup is removed.
In the embodiment of the application, applying makeup means generating a face part with a makeup effect from an un-made-up face part in the part image, and removing makeup means taking a made-up face part and removing its makeup, so as to obtain the un-made-up face part.
In an embodiment, the part modification image corresponding to the part image may be generated by a generator in a generative adversarial network, where the part image includes an unmodified part image. Specifically, step 102, "performing feature coding on the part image to obtain multi-layer feature information corresponding to the part image", may include:
performing feature coding on the unmodified part image through a coding module of a first preset generator in the first preset generative adversarial network to obtain multilayer feature information corresponding to the unmodified part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer;
performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, including:
and performing feature decoding on the output layer feature information based on the intermediate layer feature information through a decoding module of the first preset generator to generate a modified part image corresponding to the unmodified part image.
The first preset generative adversarial network may be obtained through training, and specifically, the face verification method may further include:
acquiring a first sample face image, a second sample face image and a sample modified face image;
respectively performing image extraction on the unmodified face part of the first sample face image and the modified face part of the sample modified face image to obtain a sample unmodified part image corresponding to the unmodified face part, a sample extracted face image of the first sample face image, and a sample modified part image corresponding to the modified face part;
generating a predicted modified part image corresponding to the sample unmodified part image by using a first generator of a first generative adversarial network;
training the first generative adversarial network based on the predicted modified part image, the sample extracted face image and the second sample face image to obtain a first preset generative adversarial network.
In one embodiment, the first generative adversarial network may further include a first discriminator, by which the predicted modified part image and the sample modified part image may be discriminated, and the first generative adversarial network may be trained based on the discrimination result. Specifically, the step of training the first generative adversarial network based on the predicted modified part image, the sample extracted face image and the second sample face image to obtain the first preset generative adversarial network may include:
discriminating the predicted modified part image and the sample modified part image by using the first discriminator of the first generative adversarial network to obtain a discrimination result;
fusing the predicted modified part image with the sample extracted face image to obtain a sample fused modified face image;
matching the sample fused modified face image with the second sample face image to obtain a sample matching result;
and training the first generative adversarial network based on the sample matching result and the discrimination result to obtain a first preset generative adversarial network.
The sample fused modified face image may be matched with the second sample face image using a face recognition model; for example, the facial features of the faces in the two images can be extracted through the face recognition model, the similarity between the two images calculated based on the facial features, and the two images matched according to the similarity.
In an embodiment, a similarity loss between the sample fused modified face image and the second sample face image may be obtained according to the sample matching result, and a discrimination loss between the predicted modified part image and the sample modified part image may be obtained according to the discrimination result. Specifically, the step of training the first generative adversarial network based on the sample matching result and the discrimination result to obtain the first preset generative adversarial network may include:
acquiring a similarity loss based on the similarity between the sample fused modified face image and the second sample face image;
acquiring the discrimination loss of the first discriminator based on the discrimination results of the predicted modified part image and the sample modified part image;
and adjusting parameters of the first generative adversarial network according to the similarity loss and the discrimination loss to obtain a first preset generative adversarial network.
For example, the similarity loss and the discrimination loss may be jointly optimized with an optimizer such as stochastic gradient descent or Adam, and the network parameters of the first generative adversarial network adjusted accordingly, so as to obtain the trained first preset generative adversarial network.
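A minimal sketch of one such generator update, jointly optimizing the similarity loss and the adversarial (discrimination) loss with Adam; the fuse() routine, the face_rec embedding model and the equal loss weighting are assumptions for the example rather than details fixed by this embodiment.

```python
import torch
import torch.nn.functional as F

def generator_step(gen, disc, face_rec, opt_g,
                   unmodified_part, extracted_face, second_face, fuse):
    fake_part = gen(unmodified_part)            # predicted modified part image
    fused = fuse(fake_part, extracted_face)     # sample fused modified face image
    sim = F.cosine_similarity(face_rec(fused), face_rec(second_face))
    sim_loss = (1.0 - sim).mean()               # similarity loss vs second sample face
    score = disc(fake_part)                     # discriminator score in [0, 1]
    adv_loss = F.binary_cross_entropy(score, torch.ones_like(score))
    loss = sim_loss + adv_loss                  # equal weighting is an assumption
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

# opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
```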
In an embodiment, the discrimination loss of the first discriminator may be obtained by calculation. Specifically, the step of "acquiring the discrimination loss of the first discriminator based on the discrimination results of the predicted modified part image and the sample modified part image" may include:
acquiring a first expected probability of the predicted modified part image and a second expected probability of the sample modified part image;
respectively calculating a first actual probability of the predicted modified part image and a second actual probability of the sample modified part image;
and calculating the discrimination loss of the first discriminator according to the first expected probability, the second expected probability, the first actual probability and the second actual probability.
In the process of training the GAN, the discrimination loss of the first discriminator can be calculated through alternating training. For example, when training the discriminator, the parameters of the generator are held fixed; the closer the discrimination score (i.e., the actual probability) of the generated predicted modified part image is to 0, the better, and the closer the discrimination score of the real sample modified part image is to 1, the better. When training the generator, the closer the discrimination score of the generated predicted modified part image is to 1, the better. Through this adversarial training, the discrimination loss of the discriminator can be calculated from the actual and expected discrimination scores of the predicted modified part image and of the sample modified part image, so that it becomes increasingly difficult for the discriminator to distinguish the predicted modified part image generated by the generator from the real sample modified part image. For the discriminator, the first expected probability of the predicted modified part image may be 0 and the second expected probability of the sample modified part image may be 1; the expected probabilities may be set according to the requirements of practical applications.
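A minimal sketch of this discriminator loss, assuming a standard binary cross-entropy formulation in which the expected probability is 0 for the predicted modified part image and 1 for the sample modified part image (the embodiment does not fix the exact loss function):

```python
import torch
import torch.nn.functional as F

def discriminator_loss(disc, predicted_part, sample_part):
    p_fake = disc(predicted_part.detach())   # first actual probability (generated image)
    p_real = disc(sample_part)               # second actual probability (real image)
    # Expected probabilities: 0 for the generated image, 1 for the real one.
    return (F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake)) +
            F.binary_cross_entropy(p_real, torch.ones_like(p_real)))
```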
In an embodiment, the part image may further include a modified part image, and step 102, "performing feature coding on the part image to obtain multi-layer feature information corresponding to the part image", may further include:
performing feature coding on the modified part image through a coding module of a second preset generator in a second preset generative adversarial network to obtain multilayer feature information corresponding to the modified part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer;
performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, including:
performing feature decoding on the output layer feature information based on the intermediate layer feature information through a decoding module of the second preset generator to generate an unmodified part image corresponding to the modified part image.
The second preset generative adversarial network may be obtained through training, and specifically, the face verification method may further include:
acquiring a sample target modified part image corresponding to the sample target modified face part in the sample modified face image, and a sample target unmodified part image corresponding to the sample target unmodified face part in the sample face image;
generating a predicted target unmodified part image corresponding to the sample target modified part image by using a second generator of a second generative adversarial network;
discriminating the predicted target unmodified part image and the sample target unmodified part image by using a second discriminator of the second generative adversarial network to obtain a target discrimination result;
and adjusting the parameters of the second generative adversarial network according to the target discrimination result to obtain a second preset generative adversarial network.
104. And fusing the part modification image and the extracted face image to obtain a fused modification face image.
There are various ways to fuse the part modification image with the extracted face image. In order to make the fused image more real and natural, the fused image may be corrected, and specifically, the step of "fusing the part modification image and the extracted face image to obtain the fused modified face image" may include:
fusing the part modification image and the extracted face image to obtain an initial fused modification face image;
and correcting the initial modified face image after fusion to obtain the modified face image after fusion.
The initially fused modified face image may be corrected in a number of ways. The embodiment of the present application may adopt an image fusion algorithm for the correction; for example, a Poisson blending algorithm may be used to soften and smooth the edges where the part modification image and the extracted face image meet in the initially fused modified face image. Specifically, the step of "correcting the initially fused modified face image to obtain the fused modified face image" may include:
calculating a pixel gradient of the part image and a pixel gradient of the extracted face image based on the pixels of the part image and the pixels of the extracted face image;
calculating the pixel gradient of the modified face image after the initial fusion based on the pixel gradient of the part image and the pixel gradient of the extracted face image;
and adjusting the pixels of the initially fused modified face image according to the pixel gradient of the initially fused modified face image and the pixel gradient of the part image to obtain the fused modified face image.
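For illustration, OpenCV's cv2.seamlessClone implements this kind of gradient-domain (Poisson blending) correction; the sketch below is one possible realization under that assumption, using the bounding box returned by the extraction step, rather than the embodiment's exact procedure.

```python
import cv2
import numpy as np

def fuse_and_correct(part_modification_img, extracted_face, box):
    x0, y0, x1, y1 = box                                             # from extraction step
    mask = 255 * np.ones(part_modification_img.shape[:2], np.uint8)  # blend the whole part
    center = ((x0 + x1) // 2, (y0 + y1) // 2)                        # paste location
    # Poisson blending: solves for pixels whose gradients follow the part image
    # while the border agrees with the surrounding extracted face image.
    return cv2.seamlessClone(part_modification_img, extracted_face,
                             mask, center, cv2.NORMAL_CLONE)
```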
105. And matching the fused modified face image with the face verification image.
The face verification image is a reference image used for verifying whether the fused modified face image of the face image can pass face verification. In the embodiment of the application, the face verification image is used as a standard and a basis for judging whether the face verification can pass, and the fused modified face image is matched with the face verification image to determine whether the fused modified face image can pass the face verification.
In the embodiment of the application, the fused modified face image and the face verification image may be matched through the face recognition model, for example, facial features corresponding to faces in the two images may be extracted through the face recognition model, similarity (such as cosine similarity) between the two images is calculated based on the extracted facial features, and the fused modified face image and the face verification image are matched through the similarity. Specifically, the step of "matching the fused modified face image with the face verification image" may include:
respectively extracting modified face features corresponding to the modified face in the fused modified face image and verification face features corresponding to the verification face in the face verification image;
calculating the similarity of the fused modified face image and the face verification image based on the modified face features and the verification face features;
and matching the fused modified face image with the face verification image based on the similarity.
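A minimal sketch of this matching step, assuming the two facial feature vectors have already been extracted by a face recognition model; the threshold value of 0.6 is an assumption for the example.

```python
import numpy as np

def verify(modified_face_feat, verification_face_feat, threshold=0.6):
    cos = float(np.dot(modified_face_feat, verification_face_feat) /
                (np.linalg.norm(modified_face_feat) *
                 np.linalg.norm(verification_face_feat)))
    return cos, cos > threshold   # cosine similarity and pass/fail result
```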
106. And determining a face verification result according to the matching result of the fused modified face image and the face verification image.
For example, when the similarity between the fused modified face image and the face verification image is higher than a preset similarity threshold, it is determined that the fused modified face image is successfully matched with the face verification image, and the face verification passes; when the similarity between the fused modified face image and the face verification image is lower than the preset similarity threshold, it is determined that the matching between the fused modified face image and the face verification image fails, and the face verification fails.
In view of the wide application of current face recognition technology, adversarial attacks on the deep neural networks in face recognition systems pose potential risks. Adversarial attack techniques exploit inherent weaknesses of deep neural networks (such as sensitivity to noise and a tendency to output wrong results) to cause a face recognition system to misidentify. In this regard, the face verification scheme of the embodiment of the present application can attack a face recognition system with a new form of adversarial attack, for example, modifying (e.g., applying makeup to) the face part in an un-made-up face image so that it passes verification by the face recognition system. Because makeup is common and widespread, this makeup-based attack form is more concealed and realistic than traditional schemes, making it difficult for a face recognition system to defend against. Correspondingly, to strengthen defense, the face verification scheme of the embodiment of the present application can also modify (e.g., remove makeup from) the face part in a made-up face image, so that the makeup-removed face image loses its attack characteristics, achieving the purpose of defense. It should be noted that using adversarial attack techniques to discover and repair the weak points of current face recognition systems based on deep neural networks is the core aim of this scheme.
As can be seen from the above, the embodiment of the present application may perform image extraction on a face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image; perform feature coding on the part image to obtain multilayer feature information corresponding to the part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer; perform feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, the part modification image being an image in which the face part is modified; fuse the part modification image and the extracted face image to obtain a fused modified face image; match the fused modified face image with a face verification image; and determine a face verification result according to the matching result of the fused modified face image and the face verification image. According to this scheme, the part modification image corresponding to the face part in the face image can be generated based on the part image corresponding to that face part, the part modification image and the extracted face image can be fused to obtain the fused modified face image, and the fused modified face image can be matched with the face verification image; when the similarity between the fused modified face image and the face verification image is greater than a preset similarity threshold, it can be determined that the face verification passes, thereby improving the passing rate of face verification. For example, a made-up face image having a high similarity to the face verification image may be generated based on an un-made-up face image (specifically, a made-up part image is generated from the un-made-up part image and fused back into the face).
The method described in the above embodiments is further illustrated in detail by way of example.
In the present embodiment, the face authentication apparatus will be described by taking as an example that it is specifically integrated in a computer device.
As shown in fig. 2a, a specific flow of a face verification method may be as follows:
201. the computer equipment carries out image extraction on the face part in the face image to obtain a part image corresponding to the face part and an extracted face image of the face image.
The computer device may perform image extraction on the face part in the face image, may perform image clipping on the part of the face in the image, such as the eyes, eyebrows, or mouth, to obtain a corresponding part image, such as the eye part, nose part, or mouth part, or may extract a part image corresponding to the part of the face from the face image without destroying the face image (i.e., without changing the face image). The portion image may include a portion image corresponding to one or more facial parts. After the facial parts in the facial image are subjected to image extraction, a part image corresponding to the facial parts and the facial image after the part image is extracted are obtained.
In an embodiment, to perform image extraction on the face part in the face image, multiple key points representing the parts of the face may be detected; for example, the key point positions of parts on the face, such as the key point coordinates of the five sense organs, may be located by a face registration algorithm, and image extraction may be performed on the face part in the face image based on the located key point positions, so as to obtain a part image corresponding to the face part and an extracted face image of the face image.
For example, an eye portion in the face image may be subjected to image extraction, so as to obtain an eye image corresponding to the eye portion and an extracted face image of the face image.
The face registration algorithm may calculate the key point positions of a plurality of key points representing each part of the face in the face image, where the number of the key points may be a preset fixed value, and may include, for example, 5 points, 68 points, 90 points, and the like, and the number of the key points may be set according to the requirements of practical applications, which is not described herein again.
The face image may be acquired by a local (i.e. the face verification apparatus) camera component, such as a camera, or may be acquired by receiving a face image sent by another device, such as another terminal, and so on.
202. The computer equipment performs characteristic coding on the part image to obtain multilayer characteristic information corresponding to the part image, wherein the multilayer characteristic information comprises output layer characteristic information and intermediate layer characteristic information except the output layer.
For example, in order to improve the accuracy and efficiency of feature extraction so that the generated part modification image is more realistic, the computer device may perform feature encoding on the part image through the encoding module of a generator in a generative adversarial network (GAN), to obtain multi-layer feature information corresponding to the part image, such as output layer feature information and intermediate layer feature information except the output layer.
Specifically, the step of "performing feature encoding on the part image to obtain multi-layer feature information corresponding to the part image" may include:

performing multi-layer convolution processing on the part image to obtain output layer feature information corresponding to the part image and intermediate layer feature information except the output layer;

in this case, the step of "performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image" may include:

performing multi-layer deconvolution processing on the output layer feature information to obtain multi-layer fused feature information corresponding to the part image, where each layer of fused feature information is obtained by fusing the feature information output by the adjacent deconvolution layer with the intermediate layer feature information;

and determining a part modification image corresponding to the part image based on the multi-layer fused feature information.
The multi-layer deconvolution processing on the output layer feature information may be performed through the decoding module of the generator in the generative adversarial network, so as to obtain the multi-layer fused feature information corresponding to the part image, where each layer of fused feature information is obtained by fusing the feature information output by the adjacent deconvolution layer with the intermediate layer feature information.

The encoding module includes one or more convolution layers and performs a multi-layer convolution operation on the input part image, that is, it extracts features from the part image to obtain the semantic features corresponding to the part image. The decoding module corresponds to the encoding module and includes one or more deconvolution layers; it performs a multi-layer deconvolution operation on the output layer feature information among the extracted multi-layer feature information to obtain the multi-layer fused feature information corresponding to the part image. That is, for each deconvolution layer, the feature information output by the adjacent deconvolution layer is fused with the corresponding intermediate layer feature information (a skip connection), yielding each layer's fused feature information.

In the generative adversarial network, the multi-layer feature information may be expressed as multi-layer feature images, that is, an output layer feature image corresponding to the output layer and intermediate layer feature images corresponding to the intermediate layers; likewise, the multi-layer fused feature information corresponding to the part image may be expressed as multi-layer fused feature images. Based on the multi-layer fused feature information corresponding to the part image, such as the fused feature images, the part modification image corresponding to the part image can be determined; for example, the fused feature image finally output by the decoding module can be used as the part modification image corresponding to the part image.
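A minimal sketch of such an encoder-decoder generator with skip connections (a U-Net-like layout; the use of PyTorch, three resolution levels, and the channel widths are illustrative assumptions, not the architecture prescribed by this application):

```python
import torch
import torch.nn as nn

class PartGenerator(nn.Module):
    """Encoder-decoder generator: each decoder layer fuses the output of
    the adjacent deconvolution layer with the matching intermediate
    (encoder) feature map, as described above."""

    def __init__(self, ch: int = 64):
        super().__init__()
        # Encoding module: multi-layer convolution.
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc3 = nn.Sequential(nn.Conv2d(ch * 2, ch * 4, 4, 2, 1), nn.LeakyReLU(0.2))
        # Decoding module: multi-layer deconvolution; input channels are
        # doubled where intermediate features are concatenated in.
        self.dec3 = nn.Sequential(nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(ch * 4, ch, 4, 2, 1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(ch * 2, 3, 4, 2, 1), nn.Tanh())

    def forward(self, part_img: torch.Tensor) -> torch.Tensor:
        f1 = self.enc1(part_img)                     # intermediate layer feature
        f2 = self.enc2(f1)                           # intermediate layer feature
        f3 = self.enc3(f2)                           # output layer feature
        d3 = self.dec3(f3)                           # deconvolution
        d2 = self.dec2(torch.cat([d3, f2], dim=1))   # fuse with f2
        out = self.dec1(torch.cat([d2, f1], dim=1))  # fuse with f1
        return out                                   # part modification image
```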
203. The computer device performs feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, the part modification image being an image in which the face part has been modified.
The part modification image is an image in which the face part has been modified; it may be the image corresponding to the face part after makeup is applied, or the image corresponding to the face part after makeup is removed. That is, if the part image corresponds to a de-modified (makeup-removed) face part, the part modification image corresponds to the modified (made-up) face part; conversely, if the part image corresponds to a modified face part, the part modification image corresponds to the de-modified face part.
In an embodiment, the generator in a generative adversarial network may be used to generate the part modification image corresponding to the part image. When the part image includes a de-modified part image, the step of "performing feature encoding on the part image to obtain multi-layer feature information corresponding to the part image" may specifically include:

performing feature encoding on the de-modified part image through the encoding module of a first preset generator in a first preset generative adversarial network to obtain multi-layer feature information corresponding to the de-modified part image, where the multi-layer feature information includes output layer feature information and intermediate layer feature information except the output layer;

and the step of "performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image" may include:

performing feature decoding on the output layer feature information based on the intermediate layer feature information through the decoding module of the first preset generator, to generate a modified part image corresponding to the de-modified part image.
The first preset generative adversarial network may be obtained through training. Specifically, the face verification method may further include:
acquiring a first sample face image, a second sample face image and a sample modified face image;
respectively performing image extraction on the sample de-modified face part of the first sample face image and on the sample modified face part of the sample modified face image, to obtain a sample de-modified part image corresponding to the sample de-modified face part, a sample extracted face image of the first sample face image, and a sample modified part image corresponding to the sample modified face part;

generating, by a first generator of a first generative adversarial network, a predicted modified part image corresponding to the sample de-modified part image;

and training the first generative adversarial network based on the predicted modified part image, the sample extracted face image, and the second sample face image, to obtain the first preset generative adversarial network.
The first sample face image and the second sample face image may be de-modified face images, for example, face images corresponding to a makeup-removed human face; the sample modified face image is a modified face image, for example, a face image corresponding to a made-up human face. The face part to be made up or to have makeup removed may be, for example, the eye part.
In one embodiment, the first generative adversarial network may further include a first discriminator, by which the predicted modified part image and the sample modified part image may be discriminated, and the first generative adversarial network may be trained based on the discrimination result. Specifically, the step of "training the first generative adversarial network based on the predicted modified part image, the sample extracted face image, and the second sample face image to obtain the first preset generative adversarial network" may include:

discriminating between the predicted modified part image and the sample modified part image using the first discriminator of the first generative adversarial network, to obtain a discrimination result;

fusing the predicted modified part image with the sample extracted face image to obtain a sample fused modified face image;

matching the sample fused modified face image with the second sample face image to obtain a sample matching result;

and training the first generative adversarial network based on the sample matching result and the discrimination result, to obtain the first preset generative adversarial network.
The sample fused modified face image may be matched with the second sample face image using a face recognition model. For example, the facial features of the faces in the two images may be extracted by the face recognition model, the similarity between the two images may be calculated based on these facial features, and the two images may be matched according to the similarity.
In an embodiment, a similarity loss between the sample fused modified face image and the second sample face image may be obtained according to the sample matching result, and a discrimination loss between the predicted modified part image and the sample modified part image may be obtained according to the discrimination result. Specifically, the step of "training the first generative adversarial network based on the sample matching result and the discrimination result to obtain the first preset generative adversarial network" may include:

acquiring the similarity loss based on the similarity between the sample fused modified face image and the second sample face image;

acquiring the discrimination loss of the first discriminator based on the discrimination results for the predicted modified part image and the sample modified part image;

and adjusting the parameters of the first generative adversarial network according to the similarity loss and the discrimination loss, to obtain the first preset generative adversarial network.
For example, the similarity loss and the discrimination loss may be jointly optimized with an optimizer such as stochastic gradient descent or Adam, and the network parameters of the first generative adversarial network adjusted accordingly, so as to obtain the trained first preset generative adversarial network.
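A minimal sketch of one such joint update (assuming PyTorch; `gen`, `disc`, `face_rec`, and `fuse` are illustrative stand-ins for the first generator, the first discriminator, a frozen face recognition model returning embeddings, and the paste-back fusion step — none of these names come from the application itself):

```python
import torch
import torch.nn.functional as F

def generator_step(gen, disc, face_rec, fuse, opt_g,
                   demod_part, extracted_face, second_face):
    """One joint update of the first generator.

    gen      -- first generator (e.g., PartGenerator above)
    disc     -- first discriminator, returns real/fake logits
    face_rec -- frozen face recognition model returning embeddings
    fuse     -- pastes the generated part back into the extracted face
    """
    pred_part = gen(demod_part)              # predicted modified part image
    fused = fuse(pred_part, extracted_face)  # sample fused modified face image

    # Similarity loss: push the fused face toward the second sample face
    # in the recognition embedding space (cosine similarity).
    loss_sim = 1.0 - F.cosine_similarity(
        face_rec(fused), face_rec(second_face)).mean()

    # Discrimination (adversarial) loss: the generator tries to make the
    # discriminator label the predicted part image as real.
    logits_fake = disc(pred_part)
    loss_adv = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))

    loss = loss_sim + loss_adv               # jointly optimized
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return float(loss)
```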
For example, referring to fig. 2b, image ID1 is a makeup-removed face image and image ID2 is also a makeup-removed face image; the similarity between image ID1 and image ID2 is 0.47. First, image cropping is performed on the eye region of image ID1 and on the eye region of another made-up face image, yielding the makeup-removed eye image of image ID1, the cropped face image corresponding to image ID1, and the made-up eye image corresponding to the made-up eye region of that made-up face image (i.e., the real made-up eye image). The makeup-removed eye image of image ID1 is fed to the GAN model as input, and the generator of the GAN model (its encoding module and decoding module) generates a predicted made-up eye image corresponding to the input image; the discriminator of the GAN model then discriminates between the predicted made-up eye image and the real made-up eye image to obtain a discrimination result (real or fake). Next, the predicted made-up eye image is pasted onto the cropped face image corresponding to image ID1, and the pasted eye-makeup edges are softened and smoothed using a Poisson fusion algorithm so that the resulting made-up image ID1 is more real and natural. Finally, the made-up image ID1 and image ID2 are sent together to a face recognition model for matching to obtain their similarity, and the parameters of the GAN model are adjusted (until convergence) according to this similarity and the discrimination result between the predicted and real made-up eye images, yielding the trained GAN model. By using the GAN model to generate the made-up face image corresponding to the makeup-removed face image ID1, the similarity between image ID1 and image ID2 was improved (to 0.57).
In an embodiment, the part image may further include a modified part image, and the step of "performing feature encoding on the part image to obtain multi-layer feature information corresponding to the part image" may include:

performing feature encoding on the modified part image through the encoding module of a second preset generator in a second preset generative adversarial network to obtain multi-layer feature information corresponding to the modified part image, where the multi-layer feature information includes output layer feature information and intermediate layer feature information except the output layer;

and the step of "performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image" may include:

performing feature decoding on the output layer feature information based on the intermediate layer feature information through the decoding module of the second preset generator, to generate a de-modified part image corresponding to the modified part image.
The second preset generative adversarial network may also be obtained through training. Specifically, the face verification method may further include:
acquiring a sample target modified part image corresponding to the sample target modified face part in a sample modified face image, and a sample target de-modified part image corresponding to the sample target de-modified face part in a sample de-modified face image;

generating, by a second generator of a second generative adversarial network, a predicted target de-modified part image corresponding to the sample target modified part image;

discriminating between the predicted target de-modified part image and the sample target de-modified part image using a second discriminator of the second generative adversarial network, to obtain a target discrimination result;

and adjusting the parameters of the second generative adversarial network according to the target discrimination result, to obtain the second preset generative adversarial network.
For example, referring to fig. 2c, image ID1 is a made-up face image. The made-up eye image corresponding to the made-up eye region and the cropped face image corresponding to image ID1 are obtained by cropping the made-up eye region of image ID1. The made-up eye image is used as the input of another GAN model, whose generator (including an encoding module and a decoding module) generates a predicted makeup-removed eye image corresponding to it; the discriminator of this GAN model discriminates between the predicted makeup-removed eye image and a real makeup-removed eye image to obtain a discrimination result, and the parameters of the GAN model are adjusted according to this result to obtain another trained GAN model. The generator of the trained GAN model then generates the final makeup-removed eye image corresponding to the input image, which is pasted onto the cropped face image corresponding to image ID1; the pasted makeup edges are softened and smoothed with a Poisson fusion algorithm so that the resulting makeup-removed face image ID1 is more real and natural.
204. The computer device fuses the part modification image and the extracted face image to obtain a fused modified face image.
The part modification image and the extracted face image may be fused in various ways. In order to make the fused image more real and natural, the fused image may be corrected. Specifically, the step of "fusing the part modification image and the extracted face image to obtain a fused modified face image" may include:
fusing the part modification image and the extracted face image to obtain an initial fused modified face image;

and correcting the initial fused modified face image to obtain the fused modified face image.
The initial fused modified face image may be corrected in a number of ways. The embodiment of the present application may use an image fusion algorithm for the correction; for example, a Poisson fusion algorithm may be used to soften and smooth the edges where the part modification image and the extracted face image meet in the initial fused modified face image. Specifically, the step of "correcting the initial fused modified face image to obtain the fused modified face image" may include:
calculating the pixel gradient of the part image and the pixel gradient of the extracted face image based on the pixels of the part image and the pixels of the extracted face image;

calculating the pixel gradient of the initial fused modified face image based on the pixel gradient of the part image and the pixel gradient of the extracted face image;

and adjusting the pixels of the initial fused modified face image according to the pixel gradient of the initial fused modified face image and the pixel gradient of the part image, to obtain the fused modified face image.
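This gradient-domain correction is what Poisson image editing performs. A minimal sketch using OpenCV's seamless cloning (the use of OpenCV and the naive-paste step are assumptions for illustration; the application itself only requires a Poisson fusion algorithm):

```python
import cv2
import numpy as np

def fuse_and_correct(part_img: np.ndarray, extracted_face: np.ndarray,
                     box: tuple) -> np.ndarray:
    """box: (x0, y0, x1, y1) region the part image was cropped from."""
    x0, y0, x1, y1 = box
    # Initial fused modified face image: a naive paste of the part
    # modification image into the extracted face image.
    initial = extracted_face.copy()
    initial[y0:y1, x0:x1] = part_img
    # Correction: Poisson (seamless) cloning adjusts the pasted pixels so
    # that the result's gradients follow the part image inside the region
    # while matching the surrounding face at the boundary, softening and
    # smoothing the seam.
    mask = np.full(part_img.shape[:2], 255, dtype=np.uint8)
    center = ((x0 + x1) // 2, (y0 + y1) // 2)
    corrected = cv2.seamlessClone(part_img, initial, mask,
                                  center, cv2.NORMAL_CLONE)
    return corrected
```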
205. The computer device matches the fused modified face image with a face verification image.
The face verification image is a reference image used to verify whether the fused modified face image of the face image can pass face verification. In the embodiment of the application, the face verification image serves as the standard and basis for judging whether face verification can pass, and the fused modified face image is matched with the face verification image to determine whether it can pass face verification.
In this embodiment of the application, the computer device may match the fused modified face image with the face verification image through a face recognition model. For example, the facial features corresponding to the faces in the two images may be extracted through the face recognition model, the similarity between the two images (such as a cosine similarity) may be calculated based on the extracted facial features, and the fused modified face image may be matched with the face verification image according to this similarity. Specifically, the step of "matching the fused modified face image with the face verification image" may include:
respectively extracting modified face features corresponding to the modified face in the fused modified face image and verification face features corresponding to the verification face in the face verification image;
calculating the similarity of the fused modified face image and the face verification image based on the modified face features and the verification face features;
and matching the fused modified face image with the face verification image based on the similarity.
206. The computer device determines the face verification result according to the matching result of the fused modified face image and the face verification image.
For example, when the similarity between the fused modified face image and the face verification image is higher than a preset similarity threshold, it is determined that the fused modified face image is successfully matched with the face verification image, and that the face verification passes; when the similarity is lower than the preset similarity threshold, it is determined that the matching between the fused modified face image and the face verification image fails, and that the face verification fails.
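A minimal sketch of steps 205-206 together (assuming the facial features are embedding vectors from any face recognition model; the threshold value of 0.5 is illustrative, not a value specified by this application):

```python
import numpy as np

def face_verify(feat_modified: np.ndarray, feat_verification: np.ndarray,
                threshold: float = 0.5) -> bool:
    """Match by cosine similarity and decide the verification result."""
    sim = float(np.dot(feat_modified, feat_verification) /
                (np.linalg.norm(feat_modified) *
                 np.linalg.norm(feat_verification)))
    # Pass when the similarity exceeds the preset similarity threshold.
    return sim > threshold
```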
As can be seen from the above, the embodiment of the present application may perform image extraction on a face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image; perform feature encoding on the part image to obtain multi-layer feature information corresponding to the part image, where the multi-layer feature information includes output layer feature information and intermediate layer feature information except the output layer; perform feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, the part modification image being an image in which the face part has been modified; fuse the part modification image and the extracted face image to obtain a fused modified face image; match the fused modified face image with a face verification image; and determine a face verification result according to the matching result of the fused modified face image and the face verification image. In this scheme, the part modification image corresponding to a face part can be generated from the part image of that face part, fused with the extracted face image to obtain the fused modified face image, and matched with the face verification image; when the similarity between the fused modified face image and the face verification image is greater than a preset similarity threshold, the face verification can be determined to have passed, thereby improving the pass rate of face verification. For example, a made-up face image having a high similarity to the face verification image may be generated from a makeup-removed face image (or, conversely, a makeup-removed face image may be generated from a made-up face image).
In order to better implement the method, the embodiment of the present application further provides a face verification apparatus, which may be integrated in a computer device, such as a server or a terminal.
For example, as shown in fig. 3a, the face verification apparatus may include an extraction unit 301, an encoding unit 302, a decoding unit 303, a fusion unit 304, a matching unit 305, a determination unit 306, and the like, as follows:
an extracting unit 301, configured to perform image extraction on a face part in a face image, to obtain a part image corresponding to the face part and an extracted face image of the face image;
an encoding unit 302, configured to perform feature encoding on the part image to obtain multi-layer feature information corresponding to the part image, where the multi-layer feature information includes output layer feature information and intermediate layer feature information except for an output layer;
a decoding unit 303, configured to perform feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, where the part modification image is an image after the face part modification;
a fusion unit 304, configured to fuse the part-modified image and the extracted face image to obtain a fused modified face image;
a matching unit 305, configured to match the fused modified face image with a face verification image;
a determining unit 306, configured to determine a face verification result according to a matching result of the fused modified face image and the face verification image.
In some embodiments, the extracting unit 301 is configured to:
carrying out key point detection on an initial face part in a face image to obtain a part key point position corresponding to the initial face part;
and performing image extraction on the face part in the face image based on the position of the part key point to obtain a part image corresponding to the face part and an extracted face image of the face image.
In some embodiments, referring to fig. 3b, the fusion unit 304 comprises:
a fusion subunit 3041, configured to fuse the part modification image and the extracted face image to obtain an initial fused modified face image;

a correcting subunit 3042, configured to perform correction processing on the initial fused modified face image, so as to obtain the fused modified face image.
In some embodiments, the correction subunit 3042 is configured to:
calculating the pixel gradient of the part image and the pixel gradient of the extracted face image based on the pixels of the part image and the pixels of the extracted face image;

calculating the pixel gradient of the initial fused modified face image based on the pixel gradient of the part image and the pixel gradient of the extracted face image;

and adjusting the pixels of the initial fused modified face image according to the pixel gradient of the initial fused modified face image and the pixel gradient of the part image, to obtain the fused modified face image.
In some embodiments, the matching unit 305 is configured to:
respectively extracting modified face features corresponding to the modified face in the fused modified face image and verification face features corresponding to the verification face in the face verification image;
calculating the similarity of the fused modified face image and a face verification image based on the modified face features and the verification face features;
and matching the fused modified face image with a face verification image based on the similarity.
In some embodiments, the encoding unit 302 is configured to:
performing multilayer convolution processing on the part image to obtain output layer characteristic information corresponding to the part image and intermediate layer characteristic information except the output layer;
the decoding unit 303 is configured to:
performing multi-layer deconvolution processing on the output layer feature information to obtain multi-layer fused feature information corresponding to the part image, where each layer of fused feature information is obtained by fusing the feature information output by the adjacent deconvolution layer with the intermediate layer feature information;
and determining a part modification image corresponding to the part image based on the multi-layer fused feature information.
In some embodiments, the part image includes a de-modified part image, and the encoding unit 302 is configured to:

performing feature encoding on the de-modified part image through an encoding module of a first preset generator in a first preset generative adversarial network to obtain multi-layer feature information corresponding to the de-modified part image, where the multi-layer feature information includes output layer feature information and intermediate layer feature information except the output layer;

the decoding unit 303 is configured to:

performing feature decoding on the output layer feature information based on the intermediate layer feature information through a decoding module of the first preset generator, to generate a modified part image corresponding to the de-modified part image.
In some embodiments, referring to fig. 3c, the face verification device further comprises a first training unit 307 comprising:
a sample obtaining subunit 3071, configured to obtain a first sample face image, a second sample face image, and a sample modified face image;

a sample extracting subunit 3072, configured to respectively perform image extraction on the sample de-modified face part of the first sample face image and on the sample modified face part of the sample modified face image, to obtain a sample de-modified part image corresponding to the sample de-modified face part, a sample extracted face image of the first sample face image, and a sample modified part image corresponding to the sample modified face part;

a sample generating subunit 3073, configured to generate, by a first generator of a first generative adversarial network, a predicted modified part image corresponding to the sample de-modified part image;

a sample training subunit 3074, configured to train the first generative adversarial network based on the predicted modified part image, the sample extracted face image, and the second sample face image, to obtain a first preset generative adversarial network.
In some embodiments, the first generative adversarial network includes a first discriminator, and the sample training subunit 3074 is configured to:

discriminating between the predicted modified part image and the sample modified part image using the first discriminator of the first generative adversarial network, to obtain a discrimination result;

fusing the predicted modified part image with the sample extracted face image to obtain a sample fused modified face image;

matching the sample fused modified face image with the second sample face image to obtain a sample matching result;

and training the first generative adversarial network based on the sample matching result and the discrimination result, to obtain a first preset generative adversarial network.
In some embodiments, the sample training subunit 3074 is configured to:
acquiring similarity loss based on the similarity between the sample fused modified face image and the second sample face image;
acquiring the discrimination loss of the first discriminator based on the discrimination results for the predicted modified part image and the sample modified part image;

and adjusting parameters of the first generative adversarial network according to the similarity loss and the discrimination loss, to obtain a first preset generative adversarial network.
In some embodiments, the sample training subunit 3074 is configured to:
acquiring a first expected probability for the predicted modified part image and a second expected probability for the sample modified part image;

respectively calculating a first actual probability of the predicted modified part image and a second actual probability of the sample modified part image;
and calculating the discrimination loss of the first discriminator according to the first expected probability, the second expected probability, the first actual probability and the second actual probability.
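One common reading of this computation is the standard GAN binary cross-entropy between the expected probabilities (1 for the real sample modified part image, 0 for the predicted one) and the actual probabilities the discriminator outputs — a sketch under that assumption (PyTorch; all names are illustrative):

```python
import torch
import torch.nn.functional as F

def discriminator_loss(disc, pred_part: torch.Tensor,
                       sample_part: torch.Tensor) -> torch.Tensor:
    """First discriminator loss: expected probability 0 for the predicted
    modified part image, 1 for the real sample modified part image; the
    actual probabilities come from the discriminator itself."""
    logits_fake = disc(pred_part.detach())  # actual prob. of predicted image
    logits_real = disc(sample_part)         # actual prob. of sample image
    loss_fake = F.binary_cross_entropy_with_logits(
        logits_fake, torch.zeros_like(logits_fake))  # expected prob. 0
    loss_real = F.binary_cross_entropy_with_logits(
        logits_real, torch.ones_like(logits_real))   # expected prob. 1
    return loss_real + loss_fake
```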
In some embodiments, the part image further includes a modified part image, and the encoding unit 302 is further configured to:

performing feature encoding on the modified part image through an encoding module of a second preset generator in a second preset generative adversarial network to obtain multi-layer feature information corresponding to the modified part image, where the multi-layer feature information includes output layer feature information and intermediate layer feature information except the output layer;

the decoding unit 303 is further configured to:

performing feature decoding on the output layer feature information based on the intermediate layer feature information through a decoding module of the second preset generator, to generate a de-modified part image corresponding to the modified part image.
In some embodiments, referring to fig. 3d, the face verification device further comprises a second training unit 308 for:
acquiring a sample target modified part image corresponding to the sample target modified face part in a sample modified face image, and a sample target de-modified part image corresponding to the sample target de-modified face part in a sample de-modified face image;

generating, by a second generator of a second generative adversarial network, a predicted target de-modified part image corresponding to the sample target modified part image;

discriminating between the predicted target de-modified part image and the sample target de-modified part image using a second discriminator of the second generative adversarial network, to obtain a target discrimination result;

and adjusting the parameters of the second generative adversarial network according to the target discrimination result, to obtain a second preset generative adversarial network.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, in the face verification apparatus according to the embodiment of the present application, the extraction unit 301 may perform image extraction on the face part in the face image to obtain a part image corresponding to the face part and an extracted face image of the face image; the encoding unit 302 performs feature encoding on the part image to obtain multi-layer feature information corresponding to the part image, where the multi-layer feature information includes output layer feature information and intermediate layer feature information except the output layer; the decoding unit 303 performs feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, the part modification image being an image in which the face part has been modified; the fusion unit 304 fuses the part modification image and the extracted face image to obtain a fused modified face image; the matching unit 305 matches the fused modified face image with the face verification image; and the determining unit 306 determines the face verification result according to the matching result of the fused modified face image and the face verification image. In this scheme, the part modification image corresponding to a face part can be generated from the part image of that face part, fused with the extracted face image to obtain the fused modified face image, and matched with the face verification image; when the similarity between the fused modified face image and the face verification image is greater than a preset similarity threshold, the face verification can be determined to have passed, thereby improving the pass rate of face verification.
The embodiment of the present application further provides a computer device, as shown in fig. 4, which shows a schematic structural diagram of the computer device according to the embodiment of the present application, specifically:
the computer device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 4 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the computer device as a whole. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like; the storage data area may store data created according to use of the computer device, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The computer device further comprises a power supply 403 for supplying power to the various components, and preferably, the power supply 403 is logically connected to the processor 401 via a power management system, so that functions of managing charging, discharging, and power consumption are implemented via the power management system. The power supply 403 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The computer device may also include an input unit 404, the input unit 404 being operable to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions as follows:
performing image extraction on a face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image; performing feature encoding on the part image to obtain multi-layer feature information corresponding to the part image, where the multi-layer feature information includes output layer feature information and intermediate layer feature information except the output layer; performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, the part modification image being an image in which the face part has been modified; fusing the part modification image and the extracted face image to obtain a fused modified face image; matching the fused modified face image with a face verification image; and determining a face verification result according to the matching result of the fused modified face image and the face verification image.
For details of the above operations, reference may be made to the foregoing embodiments, which are not described herein again.
As can be seen from the above, the computer device according to the embodiment of the present application may perform image extraction on a face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image; perform feature encoding on the part image to obtain multi-layer feature information corresponding to the part image, where the multi-layer feature information includes output layer feature information and intermediate layer feature information except the output layer; perform feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, the part modification image being an image in which the face part has been modified; fuse the part modification image and the extracted face image to obtain a fused modified face image; match the fused modified face image with a face verification image; and determine a face verification result according to the matching result of the fused modified face image and the face verification image. In this scheme, the part modification image corresponding to a face part can be generated from the part image of that face part, fused with the extracted face image to obtain the fused modified face image, and matched with the face verification image; when the similarity between the fused modified face image and the face verification image is greater than a preset similarity threshold, the face verification can be determined to have passed, thereby improving the pass rate of face verification.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a computer-readable storage medium, in which a computer program is stored, where the computer program can be loaded by a processor to execute the steps in any one of the face verification methods provided by the present application. For example, the computer program may perform the steps of:
performing image extraction on a face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image; performing feature encoding on the part image to obtain multi-layer feature information corresponding to the part image, where the multi-layer feature information includes output layer feature information and intermediate layer feature information except the output layer; performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, the part modification image being an image in which the face part has been modified; fusing the part modification image and the extracted face image to obtain a fused modified face image; matching the fused modified face image with a face verification image; and determining a face verification result according to the matching result of the fused modified face image and the face verification image.
For details of the above operations, reference may be made to the foregoing embodiments, which are not described herein again.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in any of the face verification methods provided in the embodiments of the present application, the beneficial effects that can be achieved by any of the face verification methods provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described again here.
The face verification method, apparatus, computer device, and computer-readable storage medium provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (16)

1. A face verification method, comprising:
carrying out image extraction on a face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image;
performing feature coding on the part image to obtain multilayer feature information corresponding to the part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer;
performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, wherein the part modification image is an image obtained by modifying the face part;
fusing the part modification image and the extracted face image to obtain a fused modified face image;
matching the fused modified face image with a face verification image;
and determining a face verification result according to the matching result of the fused modified face image and the face verification image.
2. The method according to claim 1, wherein performing image extraction on a face part in a face image to obtain a part image corresponding to the face part and an extracted face image of the face image comprises:
carrying out key point detection on an initial face part in a face image to obtain a part key point position corresponding to the initial face part;
and performing image extraction on the face part in the face image based on the position of the part key point to obtain a part image corresponding to the face part and an extracted face image of the face image.
3. The method of claim 1, wherein fusing the part modification image and the extracted face image to obtain a fused modified face image comprises:
fusing the part modification image and the extracted face image to obtain an initial fused modified face image;
and correcting the initial fused modified face image to obtain the fused modified face image.

4. The method according to claim 3, wherein correcting the initial fused modified face image to obtain the fused modified face image comprises:

calculating a pixel gradient of the part image and a pixel gradient of the extracted face image based on the pixels of the part image and the pixels of the extracted face image;

calculating a pixel gradient of the initial fused modified face image based on the pixel gradient of the part image and the pixel gradient of the extracted face image;

and adjusting the pixels of the initial fused modified face image according to the pixel gradient of the initial fused modified face image and the pixel gradient of the part image, to obtain the fused modified face image.
5. The method of claim 1, wherein matching the fused modified face image with a face verification image comprises:
respectively extracting modified face features corresponding to the modified face in the fused modified face image and verification face features corresponding to the verification face in the face verification image;
calculating the similarity of the fused modified face image and a face verification image based on the modified face features and the verification face features;
and matching the fused modified face image with a face verification image based on the similarity.
6. The method according to claim 1, wherein performing feature encoding on the part image to obtain multi-layer feature information corresponding to the part image comprises:

performing multi-layer convolution processing on the part image to obtain output layer feature information corresponding to the part image and intermediate layer feature information except the output layer;

performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image comprises:

performing multi-layer deconvolution processing on the output layer feature information to obtain multi-layer fused feature information corresponding to the part image, wherein each layer of fused feature information is obtained by fusing the feature information output by the adjacent deconvolution layer with the intermediate layer feature information;
and determining a part modification image corresponding to the part image based on the multi-layer fused feature information.
7. The method according to claim 1, wherein the part image comprises a de-modified part image, and performing feature encoding on the part image to obtain multi-layer feature information corresponding to the part image comprises:

performing feature encoding on the de-modified part image through an encoding module of a first preset generator in a first preset generative adversarial network to obtain multi-layer feature information corresponding to the de-modified part image, wherein the multi-layer feature information comprises output layer feature information and intermediate layer feature information except the output layer;

performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image comprises:

performing feature decoding on the output layer feature information based on the intermediate layer feature information through a decoding module of the first preset generator, to generate a modified part image corresponding to the de-modified part image.
8. The method of claim 7, further comprising:
acquiring a first sample face image, a second sample face image and a sample modified face image;
respectively performing image extraction on the sample de-modified face part of the first sample face image and on the sample modified face part of the sample modified face image, to obtain a sample de-modified part image corresponding to the sample de-modified face part, a sample extracted face image of the first sample face image, and a sample modified part image corresponding to the sample modified face part;

generating, by a first generator of a first generative adversarial network, a predicted modified part image corresponding to the sample de-modified part image;

and training the first generative adversarial network based on the predicted modified part image, the sample extracted face image, and the second sample face image, to obtain a first preset generative adversarial network.
9. The method of claim 8, wherein the first generative adversarial network comprises a first discriminator, and training the first generative adversarial network based on the predicted modified part image, the sample extracted face image, and the second sample face image to obtain the first preset generative adversarial network comprises:

discriminating between the predicted modified part image and the sample modified part image using the first discriminator of the first generative adversarial network, to obtain a discrimination result;

fusing the predicted modified part image with the sample extracted face image to obtain a sample fused modified face image;

matching the sample fused modified face image with the second sample face image to obtain a sample matching result;

and training the first generative adversarial network based on the sample matching result and the discrimination result, to obtain the first preset generative adversarial network.
10. The method of claim 9, wherein training the first generative adversarial network based on the sample matching result and the discrimination result to obtain the first preset generative adversarial network comprises:

acquiring a similarity loss based on the similarity between the sample fused modified face image and the second sample face image;

acquiring a discrimination loss of the first discriminator based on the discrimination results for the predicted modified part image and the sample modified part image;

and adjusting parameters of the first generative adversarial network according to the similarity loss and the discrimination loss, to obtain the first preset generative adversarial network.
11. The method according to claim 10, wherein acquiring the discrimination loss of the first discriminator based on the discrimination results for the predicted modified part image and the sample modified part image comprises:

acquiring a first expected probability for the predicted modified part image and a second expected probability for the sample modified part image;

respectively calculating a first actual probability of the predicted modified part image and a second actual probability of the sample modified part image;
and calculating the discrimination loss of the first discriminator according to the first expected probability, the second expected probability, the first actual probability and the second actual probability.
12. The method according to claim 1, wherein the part image further comprises a modified part image, and performing feature encoding on the part image to obtain multi-layer feature information corresponding to the part image comprises:

performing feature encoding on the modified part image through an encoding module of a second preset generator in a second preset generative adversarial network to obtain multi-layer feature information corresponding to the modified part image, wherein the multi-layer feature information comprises output layer feature information and intermediate layer feature information except the output layer;

performing feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image comprises:

performing feature decoding on the output layer feature information based on the intermediate layer feature information through a decoding module of the second preset generator, to generate a de-modified part image corresponding to the modified part image.
13. The method of claim 12, further comprising:
acquiring a sample target modified part image corresponding to the sample target modified face part in a sample modified face image, and a sample target de-modified part image corresponding to the sample target de-modified face part in a sample de-modified face image;

generating, by a second generator of a second generative adversarial network, a predicted target de-modified part image corresponding to the sample target modified part image;

discriminating between the predicted target de-modified part image and the sample target de-modified part image using a second discriminator of the second generative adversarial network, to obtain a target discrimination result;

and adjusting parameters of the second generative adversarial network according to the target discrimination result, to obtain a second preset generative adversarial network.
14. A face authentication apparatus, comprising:
the face image extraction unit is used for extracting a face part in a face image to obtain a part image corresponding to the face part in the face image and an extracted face image of the face image;
the encoding unit is used for carrying out feature encoding on the part image to obtain multilayer feature information corresponding to the part image, wherein the multilayer feature information comprises output layer feature information and intermediate layer feature information except the output layer;
a generation unit configured to perform feature decoding on the output layer feature information based on the intermediate layer feature information to generate a part modification image corresponding to the part image, the part modification image being an image in which the face part is modified;
the fusion unit is used for fusing the part modification image and the extracted face image to obtain a fused modified face image;
the matching unit is used for matching the fused modified face image with a face verification image;
and the determining unit is used for determining the face verification result according to the matching result of the fused modified face image and the face verification image.
15. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method according to any of claims 1-13 are implemented when the program is executed by the processor.
16. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the steps of the method according to any one of claims 1-13.
CN202010547280.3A 2020-06-16 2020-06-16 Face verification method, device, equipment and readable storage medium Active CN111598051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010547280.3A CN111598051B (en) 2020-06-16 2020-06-16 Face verification method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111598051A (en) 2020-08-28
CN111598051B (en) 2023-11-14

Family

ID=72191909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010547280.3A Active CN111598051B (en) 2020-06-16 2020-06-16 Face verification method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111598051B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629168A (en) * 2017-03-23 2018-10-09 三星电子株式会社 Face authentication method, equipment and computing device
US20200167550A1 (en) * 2017-07-31 2020-05-28 Tencent Technology (Shenzhen) Company Limited Facial expression synthesis method and apparatus, electronic device, and storage medium
CN108304390A (en) * 2017-12-15 2018-07-20 腾讯科技(深圳)有限公司 Training method, interpretation method, device based on translation model and storage medium
CN110705337A (en) * 2018-07-10 2020-01-17 普天信息技术有限公司 Face recognition method and device aiming at glasses shielding
CN109657583A (en) * 2018-12-10 2019-04-19 腾讯科技(深圳)有限公司 Face's critical point detection method, apparatus, computer equipment and storage medium
CN109635752A (en) * 2018-12-12 2019-04-16 腾讯科技(深圳)有限公司 Localization method, face image processing process and the relevant apparatus of face key point
CN109815928A (en) * 2019-01-31 2019-05-28 中国电子进出口有限公司 A kind of face image synthesis method and apparatus based on confrontation study
CN110569826A (en) * 2019-09-18 2019-12-13 深圳市捷顺科技实业股份有限公司 Face recognition method, device, equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Junzhou et al.: "Face image inpainting based on cascaded generative adversarial networks", Journal of University of Electronic Science and Technology of China, vol. 48, no. 06, pages 910-917 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101296A (en) * 2020-10-14 2020-12-18 杭州海康威视数字技术股份有限公司 Face registration method, face verification method, device and system
CN112101296B (en) * 2020-10-14 2024-03-08 杭州海康威视数字技术股份有限公司 Face registration method, face verification method, device and system
CN112669215A (en) * 2021-01-05 2021-04-16 北京金山云网络技术有限公司 Training text image generation model, text image generation method and device
CN113469269A (en) * 2021-07-16 2021-10-01 上海电力大学 Residual convolution self-coding wind-solar-charged scene generation method based on multi-channel fusion
CN114743254A (en) * 2022-06-13 2022-07-12 泽景(西安)汽车电子有限责任公司 Face authentication method and device, terminal equipment and storage medium
CN114743254B (en) * 2022-06-13 2022-11-04 泽景(西安)汽车电子有限责任公司 Face authentication method and device, terminal equipment and storage medium

Also Published As

Publication number Publication date
CN111598051B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
CN111598051B (en) Face verification method, device, equipment and readable storage medium
Yuan et al. Fingerprint liveness detection using an improved CNN with image scale equalization
Peng et al. FD-GAN: Face de-morphing generative adversarial network for restoring accomplice’s facial image
CN107590430A Living body detection method, device, equipment and storage medium
CN111768336B (en) Face image processing method and device, computer equipment and storage medium
CN110555896B (en) Image generation method and device and storage medium
CN110222573A (en) Face identification method, device, computer equipment and storage medium
CN105989263A (en) Method for authenticating identities, method for opening accounts, devices and systems
Shen et al. Effective and robust physical-world attacks on deep learning face recognition systems
CN104036254A (en) Face recognition method
CN111444826A (en) Video detection method and device, storage medium and computer equipment
CN110598019A (en) Repeated image identification method and device
CN114973349A (en) Face image processing method and training method of face image processing model
Baek et al. Generative adversarial ensemble learning for face forensics
CN112528902A (en) Video monitoring dynamic face recognition method and device based on 3D face model
Wang et al. Gender obfuscation through face morphing
CN112749605A (en) Identity recognition method, system and equipment
CN114612991A (en) Conversion method and device for attacking face picture, electronic equipment and storage medium
CN112990123B (en) Image processing method, apparatus, computer device and medium
CN111428670B (en) Face detection method, face detection device, storage medium and equipment
CN115708135A (en) Face recognition model processing method, face recognition method and device
CN115035219A (en) Expression generation method and device and expression generation model training method and device
CN114067394A (en) Face living body detection method and device, electronic equipment and storage medium
CN113378723A Automatic safety identification system for hidden dangers of power transmission and transformation lines based on a deep residual network
CN114943799A (en) Face image processing method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40028383; Country of ref document: HK)
SE01 Entry into force of request for substantive examination
GR01 Patent grant