CN116416662A - Face authenticity identification method, device, equipment and storage medium - Google Patents

Face authenticity identification method, device, equipment and storage medium Download PDF

Info

Publication number
CN116416662A
Authority
CN
China
Prior art keywords
module
network
face
feature
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211620641.8A
Other languages
Chinese (zh)
Inventor
张帆
罗朝彤
吴志强
黄华新
陈晓鸿
邹伟政
罗毅豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Information Technology Co Ltd
Priority to CN202211620641.8A
Publication of CN116416662A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection

Abstract

The application discloses a face authenticity identification method, device, equipment and storage medium. The method comprises: acquiring image information of a face image to be identified, and inputting the image information into a face authenticity identification model to generate a color feature map and an edge feature map. The face authenticity identification model comprises a cascaded implicit feature extraction layer and an identification layer, and the implicit feature extraction layer comprises a first network unit and a second network unit which are cascaded. The first network unit performs detail feature extraction on the color feature map and the edge feature map to obtain a first color feature map and a first edge feature map; the second network unit performs depth feature extraction and pooling on these maps to obtain a color feature vector and an edge feature vector; and the identification layer determines the face authenticity identification result from the two vectors. Because the first network unit performs no pooling operation, the energy of noise signals in the face image is not attenuated, which improves the identification accuracy.

Description

Face authenticity identification method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of face recognition, and particularly relates to a face authenticity identification method, device, equipment and storage medium.
Background
With the rapid development of artificial intelligence technology, face recognition technology has been widely applied. Face images are now nearly ubiquitous in daily life, and the security problems they bring keep emerging, so authenticating whether a face is genuine has become particularly important.
In the existing face authenticity identification method, each spatial feature map is input into a multi-convolution cascade device to obtain a spatial feature vector, from which the authenticity of the face is identified. However, the accuracy of this method's identification results is low.
Disclosure of Invention
In order to at least solve the above problems, embodiments of the present application provide a method, an apparatus, a device, and a storage medium for authenticating a face.
In a first aspect, an embodiment of the present application provides a face authenticity identification method, which includes: acquiring image information of a face image to be identified; inputting the image information into a face authenticity identification model to generate a color feature map and an edge feature map, wherein the face authenticity identification model comprises a cascaded implicit feature extraction layer and an identification layer, and the implicit feature extraction layer comprises a first network unit and a second network unit which are cascaded; the first network unit is used for performing detail feature extraction on the color feature map and the edge feature map to obtain a corresponding first color feature map and first edge feature map; the second network unit is used for performing depth feature extraction and pooling on the first color feature map and the first edge feature map to obtain a corresponding color feature vector and edge feature vector; and the identification layer is used for determining the face authenticity identification result according to the color feature vector and the edge feature vector.
In some embodiments, the implicit feature extraction layer comprises: the device comprises a first network branch and a second network branch, wherein the first network branch is used for carrying out implicit feature extraction according to a color feature map to obtain a color feature vector, and the second network branch is used for carrying out implicit feature extraction according to an edge feature map to obtain an edge feature vector.
In some embodiments, the face authenticity identification model further comprises a feature fusion layer connected between the implicit feature extraction layer and the identification layer, and the feature fusion layer is used for fusing the color feature vector and the edge feature vector to obtain a fused face feature. Determining the face authenticity identification result according to the color feature vector and the edge feature vector then comprises the following steps: fusing the color feature vector and the edge feature vector through the feature fusion layer to obtain the fused face feature; and determining the face authenticity identification result according to the fused face feature.
In some embodiments, the first network unit includes a first network subunit and a second network subunit that are cascaded, where the first network subunit is configured to perform detail feature extraction through a group convolution, normalization, and activation function according to the color feature map and the edge feature map, to obtain an initial color feature map and an initial edge feature map; and the second network subunit is used for carrying out residual learning according to the initial color feature map and the initial edge feature map to obtain a corresponding first color feature map and a corresponding first edge feature map.
In some embodiments, the second network sub-unit comprises at least one first network layer, and the first network layer comprises a short-connected first residual branch and a direct mapping branch. The first residual branch comprises a cascaded first module, a first dimension-reduction convolution module, a first separable convolution module, a first dimension-raising convolution module, and a second normalization module, and the first module comprises a cascaded first group convolution sub-module, a first normalization sub-module, and a first activation function sub-module.
In some embodiments, the second network unit includes a third network subunit and a fourth network subunit that are cascaded, where the third network subunit is configured to perform residual learning and average pooling according to the first color feature map and the first edge feature map, and obtain a corresponding second color feature map and a second edge feature map; and the fourth network subunit is used for carrying out depth feature extraction and global pooling according to the second color feature map and the second edge feature map to obtain corresponding color feature vectors and edge feature vectors.
In some embodiments, the third network sub-unit comprises at least one second network layer comprising a short-connected second residual branch and a second branch, the second branch comprising a cascaded first convolution module and a fourth normalization module, the second branch being for matching the size and the number of channels of the feature map determined by the second residual branch, the second residual branch comprising a cascaded second module, a second dimension-reduction convolution module, a second separable convolution module, a second dimension-increase convolution module, a third normalization module, and an average pooling module, the second module comprising a cascaded second set of convolution sub-modules, a second normalization sub-module, and a second activation function sub-module.
In some embodiments, the fourth network sub-unit comprises at least one third network layer comprising a third cascaded module comprising a third set of convolution sub-modules, a third normalization sub-module, and a third activation function sub-module, a third dimension-reduction convolution module, a third separable convolution module, a third dimension-increase convolution module, a fifth normalization module, and a global pooling module.
In a second aspect, an embodiment of the present application provides a face authenticity identification apparatus, comprising an acquisition module, an input module, a first implicit feature extraction module, a second implicit feature extraction module, and an identification module. The acquisition module is used for acquiring the image information of the face image to be identified. The input module is used for inputting the image information of the image to be identified into the face authenticity identification model to generate a color feature map and an edge feature map of the image to be identified, wherein the face authenticity identification model comprises a cascaded implicit feature extraction layer and an identification layer, and the implicit feature extraction layer comprises a first network unit and a second network unit. The first implicit feature extraction module is used for performing detail feature extraction on the color feature map and the edge feature map through the first network unit to obtain a corresponding first color feature map and first edge feature map. The second implicit feature extraction module is used for performing depth feature extraction and pooling on the first color feature map and the first edge feature map through the second network unit to obtain a corresponding color feature vector and edge feature vector. The identification module is used for determining the face authenticity identification result through the identification layer according to the color feature vector and the edge feature vector.
In a third aspect, an embodiment of the present application provides a face authenticity identification device, comprising: a processor and a memory storing computer program instructions, wherein the computer program instructions, when executed by the processor, implement the face authenticity identification method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the face authenticity identification method of the first aspect.
In the embodiments of the present application, the image information of the face image to be identified is acquired and input into the face authenticity identification model; the model generates a color feature map and an edge feature map of the face image to be identified, performs implicit feature extraction on them to obtain a color feature vector and an edge feature vector, and determines the face authenticity identification result from these two vectors. During implicit feature extraction, the first network unit only performs detail feature extraction on the color feature map and the edge feature map, without any pooling operation; this avoids attenuating the energy of noise signals in the face image to be identified and thereby improves the identification accuracy of the face authenticity identification model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described, and it is possible for a person skilled in the art to obtain other drawings according to these drawings without inventive effort.
Fig. 1 is a flow chart of a face authentication method according to some embodiments of the present application;
fig. 2 is a schematic network structure diagram of a face authentication model according to some embodiments of the present application;
fig. 3 is a schematic network structure of a face authentication model according to some embodiments of the present application;
fig. 4 is a flow chart of a face authentication method according to some embodiments of the present application;
fig. 5 is a flow chart of a face authentication method according to some embodiments of the present application;
fig. 6 is a schematic network structure diagram of a face authentication model according to some embodiments of the present application;
FIG. 7 is a network architecture diagram of a fourth network layer provided in some embodiments of the present application;
FIG. 8 is a network architecture diagram of a first network layer provided in some embodiments of the present application;
FIG. 9 is a network architecture diagram of a second network layer provided in some embodiments of the present application;
FIG. 10 is a network architecture diagram of a third network layer provided in some embodiments of the present application;
fig. 11 is a schematic structural diagram of a face authentication device according to some embodiments of the present application;
fig. 12 is a schematic structural diagram of a face authentication apparatus according to some embodiments of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below to make the objects, technical solutions and advantages of the present application more apparent, and to further describe the present application in conjunction with the accompanying drawings and the detailed embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative of the application and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by showing examples of the present application.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
With the rapid development of artificial intelligence technology, face recognition technology has been widely applied. Face images are now nearly ubiquitous in daily life, and the security problems they bring keep emerging, so authenticating whether a face is genuine has become particularly important.
Existing face authenticity identification methods fall mainly into three categories. The first identifies through specific hardware, for example an infrared sensing camera that senses a real human body; because special equipment is required, this approach is difficult to deploy at scale for ordinary users. The second identifies by asking the user to cooperate with specific actions such as blinking; because user cooperation is required, it cannot run imperceptibly to the user, and the user experience is poor. The third identifies through a software algorithm: spatial feature maps are constructed from the face image by re-splicing, each spatial feature map is input into a multi-convolution cascade device to obtain a spatial feature vector, and the authenticity of the face is identified from it. However, this software-algorithm approach is complex to construct, time-consuming, and has low identification accuracy.
In order to solve the above problems, embodiments of the present application provide a method, an apparatus, a device, and a storage medium for identifying authenticity of a face.
The applicant's research shows that a forged face image is often color-distorted compared with the original face image, and that forging also blurs edge details during processing. The authenticity of a face can therefore be identified by extracting color features and edge features, focusing on the noise signals in the image, and capturing the color distortion and the changes in edge detail.
The face authenticity identification method provided by the embodiment of the application is described in detail below by means of some embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a face authenticity identification method provided in an embodiment of the present application. As shown in fig. 1, the face authenticity identification method may include the following steps:
s110, acquiring image information of a face image to be identified.
The image information of the face image may include a color image, which may be captured by a camera or taken from a video stream, and may be a single image or a plurality of images.
S120, inputting the image information of the face image to be identified into a face authenticity identification model to generate a color feature map and an edge feature map (an illustrative sketch of this step is given after the step list below). The face authenticity identification model comprises a cascaded implicit feature extraction layer and an identification layer, and the implicit feature extraction layer comprises a first network unit and a second network unit which are cascaded.
S130, performing, by the first network unit, detail feature extraction on the color feature map and the edge feature map to obtain a corresponding first color feature map and first edge feature map.
S140, performing, by the second network unit, depth feature extraction and pooling on the first color feature map and the first edge feature map to obtain a corresponding color feature vector and edge feature vector.
S150, determining, by the identification layer, the face authenticity identification result according to the color feature vector and the edge feature vector.
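As an illustrative aid to S120, the following minimal sketch shows one way the two feature maps could be produced. The embodiments above do not specify the edge operator or the form of the color feature map, so taking the RGB image itself as the color feature map and a Sobel gradient magnitude as the edge feature map are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def make_feature_maps(image: torch.Tensor):
    """image: (N, 3, H, W) RGB face image tensor with values in [0, 1].

    Returns an assumed color feature map (the RGB channels themselves) and an
    assumed edge feature map (Sobel gradient magnitude); the embodiments do
    not fix either choice.
    """
    color_map = image
    gray = image.mean(dim=1, keepdim=True)            # (N, 1, H, W) luminance proxy
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]])
    ky = kx.t()
    sobel = torch.stack([kx, ky]).unsqueeze(1)        # (2, 1, 3, 3) filter bank
    grad = F.conv2d(gray, sobel, padding=1)           # horizontal/vertical gradients
    edge_map = grad.pow(2).sum(dim=1, keepdim=True).sqrt()  # gradient magnitude
    return color_map, edge_map
```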
Fig. 2 is a schematic structural diagram of a face authenticity identification model provided in an embodiment of the present application. As shown in fig. 2, the face authenticity identification model includes a cascaded implicit feature extraction layer and an identification layer. The implicit feature extraction layer performs implicit feature extraction on the color feature map and the edge feature map (S130 to S140 are the specific steps of this extraction) to obtain the color feature vector and the edge feature vector, and the identification layer determines the face authenticity identification result from the two vectors.
The implicit feature extraction layer comprises a first network unit and a second network unit which are connected in cascade, wherein the first network unit is used for carrying out detail feature extraction according to the color feature images and the edge feature images to obtain corresponding first color feature images and first edge feature images, and the second network unit is used for carrying out depth feature extraction and pooling according to the first color feature images and the first edge feature images to obtain corresponding color feature vectors and edge feature vectors.
The identification layer may include a cascaded fully connected layer and a Softmax layer. Each node of the fully connected layer is connected to all nodes of the previous layer; in this embodiment, the fully connected layer acts as a linear classifier.
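A minimal sketch of this identification layer follows, assuming a 256-dimensional fused feature and two output classes (real / fake) — both sizes are assumptions, since the embodiments do not fix them.

```python
import torch.nn as nn

class IdentificationLayer(nn.Module):
    def __init__(self, in_features: int = 256, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)  # fully connected linear classifier
        self.softmax = nn.Softmax(dim=1)               # probabilities over real/fake

    def forward(self, fused_feature):
        # fused_feature: (N, in_features) vector from the layers before
        return self.softmax(self.fc(fused_feature))
```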
In the embodiments of the present application, the image information of the face image to be identified is acquired and input into the face authenticity identification model to generate a color feature map and an edge feature map; implicit feature extraction is performed on the two maps to obtain a color feature vector and an edge feature vector, and the face authenticity identification result is determined from them. During implicit feature extraction, the first network unit only performs detail feature extraction on the color feature map and the edge feature map, without any pooling operation; this avoids attenuating the energy of noise signals in the face image to be identified and improves the identification accuracy of the model. In addition, the face authenticity identification method provided by the embodiments of the present application requires neither specific equipment nor user cooperation with specific actions, so it is easy to use at scale and offers a good user experience.
In some embodiments, as shown in fig. 3, to implement real-time authentication of face authenticity, the implicit feature extraction layer includes a first network branch and a second network branch, where the first network branch is used for performing implicit feature extraction according to a color feature map to obtain a color feature vector, and the second network branch is used for performing implicit feature extraction according to an edge feature map to obtain an edge feature vector.
Implicit feature extraction is performed by the first network branch and the second network branch simultaneously: the color feature vector is obtained through the first network branch and the edge feature vector through the second network branch. This increases the feature extraction speed of the face authenticity identification model and thus the speed of face authenticity identification, enabling identification in real time.
The first network branch may include a first network unit and a second network unit, and the second network branch may likewise include a first network unit and a second network unit.
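A minimal composition sketch of one such branch is given below; it only expresses the cascade of the two units described above, with the concrete unit implementations (sketched later in this description) passed in as modules, which is itself an assumption about how the units are composed.

```python
import torch.nn as nn

class NetworkBranch(nn.Module):
    """One branch of the implicit feature extraction layer (color or edge)."""

    def __init__(self, first_unit: nn.Module, second_unit: nn.Module):
        super().__init__()
        self.first_unit = first_unit    # detail feature extraction, no pooling
        self.second_unit = second_unit  # depth feature extraction and pooling

    def forward(self, feature_map):
        # feature_map: the color feature map or the edge feature map
        return self.second_unit(self.first_unit(feature_map))
```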
Fig. 4 is a second flowchart of a face authentication method according to an embodiment of the present application, where, as shown in fig. 4, the face authentication method may include the following steps:
s410, acquiring image information of a face image to be identified;
s420, inputting image information of a face image to be identified into a face authenticity identification model to generate a color feature image and an edge feature image;
s430, the first network unit of the first network branch performs detail feature extraction according to the color feature map to obtain a corresponding first color feature map;
s440, the second network unit of the first network branch performs depth feature extraction and pooling according to the first color feature map to obtain a corresponding color feature vector;
s450, the first network unit of the second network branch performs detail feature extraction according to the edge feature map to obtain a corresponding first edge feature map;
s460, the second network unit of the second network branch performs depth feature extraction and pooling according to the first edge feature map to obtain a corresponding edge feature vector;
s470, determining the authenticity identification result of the human face by utilizing the identification layer according to the color feature vector and the edge feature vector.
In some embodiments, in order to identify the authenticity of the face accurately, the face authenticity identification model further comprises a feature fusion layer connected between the implicit feature extraction layer and the identification layer, used for fusing the color feature vector and the edge feature vector to obtain a fused face feature. As an example, the feature fusion layer may be a concat layer, i.e. a layer that obtains the fused face feature by directly concatenating the color feature vector and the edge feature vector.
Based on this, as shown in fig. 5, S150 may further include the steps of:
s151, fusing the color feature vector and the edge feature vector through a feature fusion layer to obtain fused face features;
s152, determining a face authenticity identification result according to the fused face features.
After the color feature vector and the edge feature vector obtained by implicit feature extraction are fused into the fused face feature, the face authenticity identification result is determined from the fused face feature, which improves the identification accuracy.
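Since the concat layer directly connects the two vectors, a minimal sketch of the fusion is a single concatenation along the feature dimension (the shared batch layout is an assumption):

```python
import torch

def fuse_features(color_vec: torch.Tensor, edge_vec: torch.Tensor) -> torch.Tensor:
    """color_vec, edge_vec: (N, D) vectors from the two network branches."""
    return torch.cat([color_vec, edge_vec], dim=1)  # (N, 2*D) fused face feature
```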
In some embodiments, the first network element may include a first network sub-element and a second network sub-element that are cascaded, where the first network sub-element is configured to perform detail feature extraction through a group convolution, normalization, and activation function according to the color feature map and the edge feature map, to obtain an initial color feature map and an initial edge feature map; and the second network subunit is used for carrying out residual learning according to the initial color feature map and the initial edge feature map to obtain a corresponding first color feature map and a corresponding first edge feature map.
As an example, the first network sub-unit may include at least one fourth network layer, as shown in fig. 6 and 7; the embodiment shown in fig. 6 takes the case where the first network sub-unit includes two fourth network layers. As shown in fig. 7, the fourth network layer may include a cascaded group convolution module (Conv), which may use a 3×3 Conv, a first normalization module, and an activation function module, which may use a Rectified Linear Unit (ReLU).
The color feature map and the edge feature map are convolved by the group convolution module (Conv), the result is normalized by the first normalization module, and the activation function module applies the ReLU to obtain the initial color feature map and the initial edge feature map. Note that no pooling operation is performed in the fourth network layer; this avoids attenuating the energy of noise signals in the face image to be identified and improves the identification accuracy of the face authenticity identification model.
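A minimal sketch of one fourth network layer follows; the channel counts and the number of convolution groups are assumptions, since the embodiments only fix the 3×3 group convolution, the normalization, the ReLU, and the absence of pooling.

```python
import torch.nn as nn

class FourthNetworkLayer(nn.Module):
    def __init__(self, in_ch: int = 32, out_ch: int = 32, groups: int = 4):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, groups=groups),  # 3x3 group conv
            nn.BatchNorm2d(out_ch),   # first normalization module
            nn.ReLU(inplace=True),    # activation function module (ReLU)
        )

    def forward(self, x):
        # no pooling: the spatial size is preserved, so noise signal
        # energy in the face image is not attenuated
        return self.block(x)
```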
In some embodiments, as shown in fig. 6 and 8, the second network sub-unit includes at least one first network layer; the embodiment shown in fig. 6 takes the case where the second network sub-unit includes three first network layers. As shown in fig. 8, the first network layer includes a short-connected first residual branch and a direct mapping branch. The first residual branch comprises a cascaded first module, a first dimension-reduction convolution module, a first separable convolution module, a first dimension-raising convolution module, and a second normalization module; the first dimension-reduction convolution module reduces the number of channels before the first separable convolution module, and the first dimension-raising convolution module restores the number of channels after it. The first module may include a cascaded first group convolution sub-module, a first normalization sub-module, and a first activation function sub-module. Because no ReLU is introduced after the short connection in the first network layer, the accuracy of face authenticity identification is further improved.
As an example, the first dimension-reduction convolution module and the first dimension-raising convolution module may each include a 1×1 Conv, and the first separable convolution module may include a 3×3 separable convolution (DConv) performing a convolution operation with a stride of 1.
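Putting the modules of the first network layer together gives the sketch below. Interpreting the separable convolution as a depthwise convolution, and all channel counts, are assumptions for illustration; the identity mapping and the absence of a ReLU after the addition follow the description above.

```python
import torch.nn as nn

class FirstNetworkLayer(nn.Module):
    def __init__(self, ch: int = 32, mid: int = 16, groups: int = 4):
        super().__init__()
        self.residual = nn.Sequential(
            # first module: group convolution + normalization + activation
            nn.Conv2d(ch, ch, 3, padding=1, groups=groups),
            nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, mid, 1),                          # 1x1 dimension-reduction conv
            nn.Conv2d(mid, mid, 3, padding=1, groups=mid),  # 3x3 separable (depthwise) conv, stride 1
            nn.Conv2d(mid, ch, 1),                          # 1x1 dimension-raising conv
            nn.BatchNorm2d(ch),                             # second normalization module
        )

    def forward(self, x):
        # direct mapping branch is the identity; no ReLU after the addition
        return x + self.residual(x)
```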
In some embodiments, the second network unit includes a third network subunit and a fourth network subunit that are cascaded, where the third network subunit is configured to perform residual learning and average pooling according to the first color feature map and the first edge feature map, and obtain a corresponding second color feature map and a second edge feature map; and the fourth network subunit is used for carrying out depth feature extraction and global pooling according to the second color feature map and the second edge feature map to obtain corresponding color feature vectors and edge feature vectors.
The third network sub-unit performs residual learning to extract features and reduces the number of parameters through average pooling, yielding the second color feature map and the second edge feature map; the fourth network sub-unit extracts features and performs global pooling by computing statistical moments of the feature maps, yielding the color feature vector and the edge feature vector. This further reduces the number of parameters and thus further increases the speed of face authenticity identification.
As shown in fig. 6 and 9, in some embodiments, the third network sub-unit includes at least one second network layer. The second network layer includes a short-connected second residual branch and a second branch; the second branch includes a cascaded first convolution module and a fourth normalization module and is used to match the size and channel number of the feature map determined by the second residual branch. The second residual branch includes a cascaded second module, a second dimension-reduction convolution module, a second separable convolution module, a second dimension-raising convolution module, a third normalization module, and an average pooling module; the second dimension-reduction convolution module reduces the number of channels before the second separable convolution module, and the second dimension-raising convolution module restores the number of channels after it. The second module includes a cascaded second group convolution sub-module, a second normalization sub-module, and a second activation function sub-module. Because no ReLU is introduced after the short connection in the second network layer, the accuracy of face authenticity identification is further improved.
As an example, the second dimension-reduction convolution module and the second dimension-raising convolution module may each include a 1×1 Conv, and the second separable convolution module in the second network layer may be a 3×3 separable convolution (DConv) with a stride of 1. The average pooling module may include a 3×3 filter performing an average pooling operation with a stride of 2; correspondingly, to match the size and channel number of the feature map output after this stride-2 average pooling, the first convolution module in the second branch may be a 1×1 Conv that halves both the length and the width of the feature map using a convolution operation with a stride of 2.
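A corresponding sketch of one second network layer is given below. The channel counts and the pooling padding (chosen so the two branches match in size) are assumptions; the 3×3 stride-2 average pooling in the residual branch and the 1×1 stride-2 convolution in the second branch follow the description above.

```python
import torch.nn as nn

class SecondNetworkLayer(nn.Module):
    def __init__(self, in_ch: int = 32, out_ch: int = 64, mid: int = 16, groups: int = 4):
        super().__init__()
        self.residual = nn.Sequential(
            # second module: group convolution + normalization + activation
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=groups),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, mid, 1),                       # 1x1 dimension-reduction conv
            nn.Conv2d(mid, mid, 3, padding=1, groups=mid),  # 3x3 separable (depthwise) conv, stride 1
            nn.Conv2d(mid, out_ch, 1),                      # 1x1 dimension-raising conv
            nn.BatchNorm2d(out_ch),                         # third normalization module
            nn.AvgPool2d(kernel_size=3, stride=2, padding=1),  # average pooling module
        )
        # second branch: 1x1 conv with stride 2 plus normalization, matching
        # the size and channel number of the pooled residual output
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1, stride=2),
            nn.BatchNorm2d(out_ch),  # fourth normalization module
        )

    def forward(self, x):
        # no ReLU after the short connection
        return self.shortcut(x) + self.residual(x)
```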
In some embodiments, as shown in fig. 6 and 10, the fourth network sub-unit includes at least one third network layer; the embodiment shown in fig. 6 takes the case where the fourth network sub-unit includes one third network layer. As shown in fig. 10, the third network layer includes a cascaded third module, a third dimension-reduction convolution module, a third separable convolution module, a third dimension-raising convolution module, a fifth normalization module, and a global pooling module, where the third module includes a cascaded third group convolution sub-module, a third normalization sub-module, and a third activation function sub-module. The global pooling module computes statistical moments of the feature map and reduces it to a feature vector, lowering the number of parameters and further increasing the speed of face authenticity identification.
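A minimal sketch of one third network layer follows. The description only says the global pooling computes "statistical moments", so using the per-channel mean (first moment) and standard deviation (second moment) is an assumption, as are the channel counts.

```python
import torch
import torch.nn as nn

class ThirdNetworkLayer(nn.Module):
    def __init__(self, in_ch: int = 64, mid: int = 32, out_ch: int = 128, groups: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            # third module: group convolution + normalization + activation
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=groups),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, mid, 1),                       # 1x1 dimension-reduction conv
            nn.Conv2d(mid, mid, 3, padding=1, groups=mid),  # 3x3 separable (depthwise) conv
            nn.Conv2d(mid, out_ch, 1),                      # 1x1 dimension-raising conv
            nn.BatchNorm2d(out_ch),                         # fifth normalization module
        )

    def forward(self, x):
        f = self.features(x)                 # (N, out_ch, H, W)
        mean = f.mean(dim=(2, 3))            # first statistical moment per channel
        std = f.std(dim=(2, 3))              # second moment (assumed choice)
        return torch.cat([mean, std], dim=1) # (N, 2*out_ch) feature vector
```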
It should be noted that, the various optional implementations described in the embodiments of the present application may be implemented in combination with each other without collision, or may be implemented separately, which is not limited to the embodiments of the present application.
Based on the face authenticity identification method provided by the embodiment, correspondingly, the application also provides a specific implementation mode of the face authenticity identification device. Please refer to the following examples.
Referring to fig. 11, the face authenticity identification apparatus provided in the embodiment of the present application may include an acquisition module and an input module.
The acquisition module is used for acquiring the image information of the face image to be identified. The input module is used for inputting the image information of the image to be identified into the face authenticity identification model to generate a color feature map and an edge feature map of the image to be identified, performing implicit feature extraction on the two maps to obtain a color feature vector and an edge feature vector, and determining the face authenticity identification result from these vectors. The face authenticity identification model comprises a cascaded implicit feature extraction layer and an identification layer, and the implicit feature extraction layer comprises a first network unit and a second network unit: the first network unit performs detail feature extraction on the color feature map and the edge feature map to obtain a corresponding first color feature map and first edge feature map, and the second network unit performs depth feature extraction and pooling on the first color feature map and the first edge feature map to obtain a corresponding color feature vector and edge feature vector.
The face authenticity identifying device provided in the embodiment of the present application can implement each step in the method embodiment of fig. 1, and achieve a corresponding technical effect, so that repetition is avoided, and no further description is provided here.
Fig. 12 is a schematic diagram of the hardware structure of a face authenticity identification device provided in an embodiment of the present application.
The face authenticity identification device may comprise a processor 10 and a memory 20 storing computer program instructions.
In particular, the processor 10 described above may include a Central Processing Unit (CPU), or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present application.
Memory 20 may include mass storage for data or instructions. By way of example, and not limitation, memory 20 may comprise a Hard Disk Drive (HDD), floppy disk drive, flash memory, optical disk, magneto-optical disk, magnetic tape, or Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 20 may include removable or non-removable (or fixed) media, where appropriate. Memory 20 may be internal or external to the face authenticity identification device, where appropriate. In a particular embodiment, the memory 20 is a non-volatile solid-state memory.
The memory may include Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, or electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions, and when the software is executed (e.g., by one or more processors), it is operable to perform the operations described with reference to the methods according to aspects of the present disclosure.
The processor 10 reads and executes the computer program instructions stored in the memory 20 to implement any of the face authentication methods of the above embodiments.
In one example, the face authenticity identification device may further include a communication interface 30 and a bus 40. As shown in fig. 12, the processor 10, the memory 20, and the communication interface 30 are connected to each other by the bus 40 and communicate with each other.
The communication interface 30 is mainly used for implementing communication between each module, device, unit and/or apparatus in the embodiments of the present application.
Bus 40 includes hardware, software, or both, coupling the components of the face authenticity identification device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local (VLB) bus, or another suitable bus, or a combination of two or more of the above. Bus 40 may include one or more buses, where appropriate. Although the embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
In addition, in combination with the face authenticity identification method in the above embodiments, an embodiment of the present application provides a computer storage medium. The computer storage medium has computer program instructions stored thereon; when the computer program instructions are executed by a processor, any of the face authenticity identification methods in the above embodiments is implemented.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be different from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, which are intended to be included in the scope of the present application.

Claims (11)

1. A face authenticity identification method, characterized by comprising the following steps:
acquiring image information of a face image to be identified;
inputting the image information of the face image to be identified into a face authenticity identification model to generate a color feature map and an edge feature map; the face authenticity identification model comprises a cascaded implicit feature extraction layer and an identification layer, and the implicit feature extraction layer comprises a first network unit and a second network unit which are cascaded;
the first network unit is used for performing detail feature extraction on the color feature map and the edge feature map to obtain a corresponding first color feature map and first edge feature map;
the second network unit is used for performing depth feature extraction and pooling on the first color feature map and the first edge feature map to obtain a corresponding color feature vector and edge feature vector;
the identification layer is used for determining the face authenticity identification result according to the color feature vector and the edge feature vector.
2. The face authenticity identification method according to claim 1, wherein the implicit feature extraction layer comprises: a first network branch and a second network branch;
the first network branch is used for carrying out implicit feature extraction according to the color feature map to obtain a color feature vector;
and the second network branch is used for carrying out implicit feature extraction according to the edge feature map to obtain an edge feature vector.
3. The face authenticity identification method according to claim 1, wherein the face authenticity identification model further comprises a feature fusion layer connected between the implicit feature extraction layer and the identification layer, and the feature fusion layer is configured to fuse the color feature vector and the edge feature vector to obtain a fused face feature; and determining the face authenticity identification result according to the color feature vector and the edge feature vector comprises the following steps:
the color feature vector and the edge feature vector are fused through the feature fusion layer, so that fusion face features are obtained;
and determining a face authenticity identification result according to the fused face features.
4. The face authenticity identification method according to claim 1, wherein the first network unit includes a first network subunit and a second network subunit that are cascaded; the first network subunit is configured to perform detail feature extraction through group convolution, normalization, and activation operations on the color feature map and the edge feature map to obtain an initial color feature map and an initial edge feature map; and the second network subunit is configured to perform residual learning on the initial color feature map and the initial edge feature map to obtain the corresponding first color feature map and first edge feature map.
5. The face authenticity identification method according to claim 4, wherein the second network subunit comprises at least one first network layer, the first network layer comprises a short-connected first residual branch and a direct mapping branch, the first residual branch comprises a cascaded first module, a first dimension-reduction convolution module, a first separable convolution module, a first dimension-raising convolution module, and a second normalization module, and the first module comprises a cascaded first group convolution sub-module, a first normalization sub-module, and a first activation function sub-module.
6. The face authenticity identification method according to claim 1, wherein the second network unit includes a third network subunit and a fourth network subunit that are cascaded; the third network subunit is configured to perform residual learning and average pooling on the first color feature map and the first edge feature map to obtain a corresponding second color feature map and second edge feature map; and the fourth network subunit is configured to perform depth feature extraction and global pooling on the second color feature map and the second edge feature map to obtain the corresponding color feature vector and edge feature vector.
7. The face authenticity identification method according to claim 6, wherein the third network subunit includes at least one second network layer, the second network layer includes a short-connected second residual branch and a second branch, the second branch includes a cascaded first convolution module and a fourth normalization module and is used for matching the size and channel number of the feature map determined by the second residual branch, the second residual branch includes a cascaded second module, a second dimension-reduction convolution module, a second separable convolution module, a second dimension-raising convolution module, a third normalization module, and an average pooling module, and the second module includes a cascaded second group convolution sub-module, a second normalization sub-module, and a second activation function sub-module.
8. The face authenticity identification method according to claim 6, wherein the fourth network subunit includes at least one third network layer, the third network layer includes a cascaded third module, a third dimension-reduction convolution module, a third separable convolution module, a third dimension-raising convolution module, a fifth normalization module, and a global pooling module, and the third module includes a cascaded third group convolution sub-module, a third normalization sub-module, and a third activation function sub-module.
9. A face authenticity identification apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the image information of the face image to be identified;
the input module is used for inputting the image information of the image to be identified into a face authenticity identification model to generate a color feature image and an edge feature image of the image to be identified; the face authenticity identification model comprises a cascade implicit feature extraction layer and an identification layer, wherein the implicit feature extraction layer comprises a first network unit and a second network unit;
the first implicit feature extraction module is used for extracting detail features according to the color feature images and the edge feature images through the first network unit to obtain corresponding first color feature images and first edge feature images;
the second implicit feature extraction module is used for carrying out depth feature extraction and pooling according to the first color feature map and the first edge feature map through the second network unit to obtain the corresponding color feature vector and the corresponding edge feature vector;
and the identification module is used for determining the authenticity identification result of the human face through the identification layer according to the color feature vector and the edge feature vector.
10. A face authenticity identification device, characterized in that the device comprises: a memory, a processor, and a computer program stored on the memory and executable on the processor;
the processor, when executing the computer program instructions, implements a face authentication method as claimed in any one of claims 1 to 8.
11. A computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the face authenticity identification method according to any one of claims 1 to 8.
CN202211620641.8A 2022-12-15 2022-12-15 Face authenticity identification method, device, equipment and storage medium Pending CN116416662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211620641.8A CN116416662A (en) 2022-12-15 2022-12-15 Face authenticity identification method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211620641.8A CN116416662A (en) 2022-12-15 2022-12-15 Face authenticity identification method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116416662A 2023-07-11

Family

ID=87052153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211620641.8A Pending CN116416662A (en) 2022-12-15 2022-12-15 Face authenticity identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116416662A (en)

Legal Events

Date Code Title Description
PB01 Publication