CN110070047B - Face comparison method and system, electronic equipment and storage medium - Google Patents

Face comparison method and system, electronic equipment and storage medium

Info

Publication number
CN110070047B
CN110070047B (application CN201910329964.3A)
Authority
CN
China
Prior art keywords
face
gender
feature extraction
branch
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910329964.3A
Other languages
Chinese (zh)
Other versions
CN110070047A (en)
Inventor
陈鑫 (Chen Xin)
赵明 (Zhao Ming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Intelligent Information Technology Co ltd
Original Assignee
Hangzhou Intelligent Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Intelligent Information Technology Co ltd filed Critical Hangzhou Intelligent Information Technology Co ltd
Priority to CN201910329964.3A
Publication of CN110070047A
Application granted
Publication of CN110070047B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The application discloses a face comparison method, a face comparison system, an electronic device and a computer-readable storage medium. The method comprises the following steps: acquiring a training set, where the training set comprises face pictures with gender marked as male and face pictures with gender marked as female; training the gender branch of a MobileFaceNet learning model by using the training set; training a first feature extraction branch in the MobileFaceNet learning model by using the face pictures marked as male in the training set; training a second feature extraction branch in the MobileFaceNet learning model by using the face pictures marked as female in the training set; and performing face comparison by using the trained MobileFaceNet learning model. The face comparison method thus improves the accuracy of face picture comparison.

Description

Face comparison method and system, electronic equipment and storage medium
Technical Field
The present application relates to the field of face recognition technologies, and in particular, to a face comparison method and system, an electronic device, and a computer-readable storage medium.
Background
In some application scenarios of face recognition, such as access control, two face pictures need to be compared to determine whether they show the same person. In the prior art, a machine learning model (such as an SVM or AdaBoost) is used to extract features from the two face pictures, and face comparison is performed by comparing those features. In practical application, the feature extraction accuracy of this scheme is low, so the accuracy of face comparison is also low.
Therefore, how to improve the accuracy of face comparison is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The aim of the application is to provide a face comparison method, a face comparison system, an electronic device and a computer-readable storage medium that improve the accuracy of face comparison.
In order to achieve the above object, the present application provides a face comparison method, including:
acquiring a training set; the training set comprises face pictures with gender marked as male and face pictures with gender marked as female;
training the gender branch of a MobileFaceNet learning model by using the training set, so as to perform gender recognition on an input face picture by using the trained gender branch;
training a first feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as male in the training set, so as to extract male features from an input face picture by using the trained first feature extraction branch;
training a second feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as female in the training set, so as to extract female features from an input face picture by using the trained second feature extraction branch;
and performing face comparison by using the trained MobileFaceNet learning model.
Wherein performing face comparison by using the trained MobileFaceNet learning model comprises the following steps:
acquiring a first face picture and a second face picture to be compared;
inputting the first face picture and the second face picture into the trained MobileFaceNet learning model, and obtaining a first gender identification result corresponding to the first face picture and a second gender identification result corresponding to the second face picture according to the gender branch;
judging whether the first gender identification result is consistent with the second gender identification result;
if so, obtaining a first feature extraction result corresponding to the first face picture and a second feature extraction result corresponding to the second face picture according to a feature extraction branch, the feature extraction branch being the one of the first feature extraction branch and the second feature extraction branch that corresponds to the first gender identification result;
judging whether the first feature extraction result is consistent with the second feature extraction result;
and if so, judging that the first face picture and the second face picture contain the same face.
Wherein the training of the gender branch of the MobileFaceNet learning model using the training set comprises:
training the gender branch of the MobileFaceNet learning model by using the training set, wherein a cross-entropy loss function is adopted in the training process.
Wherein the MobileFaceNet learning model comprises:
a basic convolution layer for performing convolution operation on the input face picture;
the gender branch, the first feature extraction branch, and the second feature extraction branch connected with the base convolutional layer.
Wherein the gender branch comprises:
a first global separable convolutional layer connected with the base convolutional layer;
a fully-connected layer connected to the first global separable convolutional layer.
Wherein the first feature extraction branch comprises:
a second global separable convolutional layer connected to the first global separable convolutional layer;
a first feature dimension reduction layer connected to the second global separable convolutional layer for performing dimension reduction processing on features output by the second global separable convolutional layer;
the second feature extraction branch comprises:
a third global separable convolutional layer connected to the first global separable convolutional layer;
and the second feature dimension reduction layer is connected with the third global separable convolutional layer and is used for carrying out dimension reduction processing on the features output by the third global separable convolutional layer.
Wherein the first feature dimension reduction layer and the second feature dimension reduction layer are used to reduce 512-dimensional features to 128-dimensional features.
In order to achieve the above object, the present application provides a face comparison system, including:
the acquisition module is used for acquiring a training set; the training set comprises face pictures with gender marked as male and face pictures with gender marked as female;
the first training module is used for training the gender branch of the MobileFaceNet learning model by using the training set, so as to perform gender recognition on an input face picture by using the trained gender branch;
a second training module, configured to train a first feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as male in the training set, so as to extract male features from an input face picture by using the trained first feature extraction branch;
a third training module, configured to train a second feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as female in the training set, so as to extract female features from an input face picture by using the trained second feature extraction branch;
and a comparison module, configured to perform face comparison by using the trained MobileFaceNet learning model, the first feature extraction branch, and the second feature extraction branch.
To achieve the above object, the present application provides an electronic device including:
a memory for storing a computer program;
and a processor, configured to implement the steps of the above face comparison method when executing the computer program.
To achieve the above object, the present application provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the above-mentioned face comparison method.
According to the scheme, the face comparison method provided by the application comprises: acquiring a training set, where the training set comprises face pictures with gender marked as male and face pictures with gender marked as female; training the gender branch of a MobileFaceNet learning model by using the training set, so as to perform gender recognition on an input face picture by using the trained gender branch; training a first feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as male in the training set, so as to extract male features from an input face picture by using the trained first feature extraction branch; training a second feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as female in the training set, so as to extract female features from an input face picture by using the trained second feature extraction branch; and performing face comparison by using the trained MobileFaceNet learning model.
According to the face comparison method, face pictures are compared through the trained MobileFaceNet learning model. The MobileFaceNet learning model adopts a lightweight, efficient network design and loss function design, and can address the efficiency and accuracy requirements of face recognition at the same time. Adopting the MobileFaceNet learning model improves the accuracy of gender identification, and therefore the accuracy of comparing the genders of the two face pictures. In addition, for face pictures with different gender identification results, different feature extraction branches are used to extract features, which improves the feature extraction accuracy and, in turn, the accuracy of comparing the features of the two face pictures. The face comparison method therefore improves the accuracy of face picture comparison. The application also discloses a face comparison system, an electronic device and a computer-readable storage medium that achieve the same technical effects.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description serve to explain the disclosure without limiting it. In the drawings:
FIG. 1 is a flow diagram illustrating a face comparison method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of the structure of the bottleneck convolution;
FIG. 3 is a detailed flowchart of step S105 in FIG. 1;
FIG. 4 is a flow diagram illustrating another face comparison method in accordance with an exemplary embodiment;
FIG. 5 is a block diagram illustrating a face comparison system in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application discloses a face comparison method, which improves the accuracy of face comparison.
Referring to fig. 1, a flowchart of a face comparison method according to an exemplary embodiment is shown, as shown in fig. 1, including:
s101: acquiring a training set; the training set comprises face pictures with gender marked as male and face pictures with gender marked as female;
in this step, a training set for training the deep learning model is obtained. The face pictures in the training set are cut out of a data set by using the MTCNN face detection model, and each face picture in the training set is marked with a gender.
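As a concrete illustration (not part of the patent), building such a training set might look like the following sketch, which assumes the facenet-pytorch implementation of MTCNN; the patent names the MTCNN detector but no specific library:

```python
# Hypothetical sketch, assuming the facenet-pytorch package: detect the face
# in each source image and save a 112x112 crop for the training set.
from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(image_size=112, margin=0)

def crop_face(src_path, dst_path):
    """Crop the detected face region of one picture; returns None if no face."""
    img = Image.open(src_path).convert('RGB')
    return mtcnn(img, save_path=dst_path)
```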
S102: training the gender branch of the MobileFaceNet learning model by using the training set, so as to perform gender recognition on an input face picture by using the trained gender branch;
The MobileFaceNet learning model here includes: a basic convolutional layer for performing convolution operations on the input face picture; and the gender branch, the first feature extraction branch, and the second feature extraction branch connected with the basic convolutional layer.
The specific network structure of the basic convolutional layer is shown in table 1:
TABLE 1
Input Operator t c n s
112²×3 conv3×3 - 64 1 2
56²×64 depthwise conv3×3 - 64 1 1
56²×64 bottleneck 2 64 5 2
28²×64 bottleneck 4 128 1 2
14²×128 bottleneck 2 128 6 1
14²×128 bottleneck 4 128 1 2
7²×128 bottleneck 2 128 2 1
7²×128 conv1×1 - 512 1 1
Here, Input is the size and dimension of the input feature, Operator is the operation performed at each step, t is the expansion parameter used in the bottleneck, c is the number of convolution kernels (i.e., the number of channels of the output feature map), n is the number of times the operation in each row is repeated, and s is the stride of the convolution or pooling operation.
conv is a convolution operation; conv3×3 denotes a convolution with a 3×3 kernel; depthwise denotes a depthwise convolution; and bottleneck denotes a convolution with the bottleneck structure shown in fig. 2. GDConv (global depthwise convolution) is a global separable convolution: if the input feature has spatial dimensions h×w, the kernel size of the global separable convolution is h×w, and its number of channels equals the feature dimension.
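For illustration, a PyTorch sketch of the bottleneck rows of Table 1 follows. Fig. 2 is not reproduced here, so the exact layout (a MobileNetV2-style inverted residual, which the MobileFaceNet design uses) is an assumption, and all names are illustrative:

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    """Assumed layout of a 'bottleneck' row of Table 1: 1x1 expansion by
    factor t, 3x3 depthwise convolution with stride s, 1x1 linear projection,
    with a residual connection when the shape is unchanged."""
    def __init__(self, c_in, c_out, t, s):
        super().__init__()
        c_mid = c_in * t
        self.use_res = (s == 1 and c_in == c_out)
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_mid, 1, bias=False),    # 1x1 expand
            nn.BatchNorm2d(c_mid), nn.PReLU(c_mid),
            nn.Conv2d(c_mid, c_mid, 3, stride=s, padding=1,
                      groups=c_mid, bias=False),       # 3x3 depthwise
            nn.BatchNorm2d(c_mid), nn.PReLU(c_mid),
            nn.Conv2d(c_mid, c_out, 1, bias=False),    # 1x1 linear projection
            nn.BatchNorm2d(c_out),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_res else out
```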
The feature extraction layers extract features from the input face picture by using global separable convolution. Replacing global pooling with global separable convolution retains as much face feature information as possible, improves the accuracy of feature extraction, and thereby improves the accuracy of face recognition.
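The following minimal sketch shows the point: a global depthwise convolution is simply a depthwise convolution whose kernel covers the whole 7×7 feature map, so each channel is collapsed to one value by learned weights rather than by uniform averaging:

```python
import torch
import torch.nn as nn

# A GDConv layer for the 7x7x512 output of Table 1: depthwise (groups=512)
# with a 7x7 kernel, i.e. one learned 7x7 weighting per channel.
gdconv = nn.Conv2d(512, 512, kernel_size=7, groups=512, bias=False)

x = torch.randn(1, 512, 7, 7)   # feature map from the base layers
print(gdconv(x).shape)          # torch.Size([1, 512, 1, 1])
```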
The gender branch in the MobileFaceNet learning model may include: a first global separable convolutional layer connected with the basic convolutional layer; and a fully-connected layer connected to the first global separable convolutional layer, the output of the fully-connected layer being a two-way classification. The first global separable convolutional layer is shown in table 2:
TABLE 2
Input Operator t c n s
7²×512 linear GDConv7×7 - 512 1 1
In a specific implementation, the gender branch of the MobileFaceNet learning model is trained using the full training set. Preferably, a cross-entropy loss function is adopted in the training process; thanks to the lightweight, efficient network design and loss function design of MobileFaceNet, both accuracy and efficiency requirements can be met at the same time.
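A minimal training-loop sketch for this step follows, under stated assumptions: `model` maps a batch of images to its two gender logits, `loader` yields (image, gender label) batches over the full training set, and the SGD hyperparameters are illustrative, not from the patent:

```python
import torch
import torch.nn as nn

def train_gender_branch(model, loader, epochs=10, lr=0.1):
    """Sketch of step S102: two-class training with cross entropy, the loss
    named in the text. For simplicity all parameters are updated here; the
    patent only specifies that the gender branch is trained on the full set."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, labels in loader:
            loss = criterion(model(images), labels)  # (B, 2) logits vs labels
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```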
S103: training a first feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as male in the training set, so as to extract male features from an input face picture by using the trained first feature extraction branch;
S104: training a second feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as female in the training set, so as to extract female features from an input face picture by using the trained second feature extraction branch;
the first feature extraction branch in the mobilecenet learning model may include: a second global separable convolutional layer connected to the first global separable convolutional layer; a first feature dimension reduction layer connected to the second global separable convolutional layer for performing dimension reduction processing on features output by the second global separable convolutional layer;
the second feature extraction branch in the mobilecenet learning model may include: a third global separable convolutional layer connected to the first global separable convolutional layer; and the second feature dimension reduction layer is connected with the third global separable convolutional layer and is used for carrying out dimension reduction processing on the features output by the third global separable convolutional layer.
The second and third global separable convolutional layers are shown in table 3:
TABLE 3
Input Operator t c n s
7²×512 linear GDConv7×7 - 512 1 1
Thus, the first and second feature dimension reduction layers may reduce the 512-dimensional features output by the layer of table 3 to 128-dimensional features, as shown in table 4:
TABLE 4
Input Operator t c n s
1²×512 linear conv1×1 - 128 1 1
In a specific implementation, the first feature extraction branch and the second feature extraction branch are trained with the face pictures whose gender is marked as male and as female, respectively, to obtain the trained MobileFaceNet learning model. For face pictures with different gender recognition results, different feature extraction branches are then used to extract features, which improves the accuracy of feature extraction.
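Putting Tables 1-4 together, a hedged PyTorch sketch of the branched model might look as follows. It assumes the Table 1 backbone is available as `base` with a 7×7×512 output, and that all three GDConv branches read that output (the 7²×512 input of Table 3 suggests this reading); batch-norm and activation details are omitted, and all names are illustrative:

```python
import torch.nn as nn

class GenderBranchedFaceNet(nn.Module):
    """Illustrative sketch of Tables 1-4: a shared backbone, a gender branch
    (GDConv + two-way fully-connected layer) and separate male/female feature
    branches (GDConv + 1x1 convolution reducing 512-d to 128-d)."""
    def __init__(self, base):
        super().__init__()
        self.base = base  # Table 1 backbone, output (B, 512, 7, 7)
        self.gender_gdconv = nn.Conv2d(512, 512, 7, groups=512, bias=False)
        self.gender_fc = nn.Linear(512, 2)                        # Table 2 + FC
        self.male_gdconv = nn.Conv2d(512, 512, 7, groups=512, bias=False)
        self.male_reduce = nn.Conv2d(512, 128, 1, bias=False)     # Tables 3-4
        self.female_gdconv = nn.Conv2d(512, 512, 7, groups=512, bias=False)
        self.female_reduce = nn.Conv2d(512, 128, 1, bias=False)

    def forward(self, x):
        f = self.base(x)
        gender_logits = self.gender_fc(self.gender_gdconv(f).flatten(1))
        male_feat = self.male_reduce(self.male_gdconv(f)).flatten(1)
        female_feat = self.female_reduce(self.female_gdconv(f)).flatten(1)
        return gender_logits, male_feat, female_feat
```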
S105: performing face comparison by using the trained MobileFaceNet learning model.
Preferably, as shown in fig. 3, step S105 in the previous embodiment may include:
S51: acquiring a first face picture and a second face picture to be compared;
S52: inputting the first face picture and the second face picture into the trained MobileFaceNet learning model, and obtaining a first gender identification result corresponding to the first face picture and a second gender identification result corresponding to the second face picture according to the gender branch;
S53: judging whether the first gender identification result is consistent with the second gender identification result; if yes, go to S54; if not, go to S57;
in this embodiment, the gender recognition results corresponding to the first face image and the second face image are obtained according to the gender branch in the mobility facenet learning model, when the gender results of the first face image and the second face image are consistent, the features can be compared subsequently, otherwise, the first face image and the second face image are judged to contain different faces.
S54: obtaining a first feature extraction result corresponding to the first face picture and a second feature extraction result corresponding to the second face picture according to a feature extraction branch, the feature extraction branch being the one of the first feature extraction branch and the second feature extraction branch that corresponds to the first gender identification result;
S55: judging whether the first feature extraction result is consistent with the second feature extraction result; if yes, go to S56; if not, go to S57;
in this step, the feature extraction branches corresponding to the first gender identification result or the second gender identification result are used for respectively extracting the features corresponding to the first face picture and the second face picture, if the feature extraction results of the first face picture and the second face picture are consistent, the first face picture and the second face picture are judged to contain the same face, otherwise, the first face picture and the second face picture are judged to contain different faces.
S56: judging that the first face picture and the second face picture contain the same face;
S57: judging that the first face picture and the second face picture contain different faces.
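The following sketch walks through steps S51-S57 with the GenderBranchedFaceNet sketch above. The patent does not fix how feature "consistency" is measured, so reading it as cosine similarity against a threshold, and taking class 0 as male, are assumptions:

```python
import torch
import torch.nn.functional as F

def compare_faces(model, img1, img2, sim_threshold=0.6):
    """Steps S51-S57; img1/img2 are preprocessed CHW tensors. The 0.6
    threshold and the 'class 0 = male' convention are assumptions, not
    values from the patent."""
    with torch.no_grad():
        g1, m1, f1 = model(img1.unsqueeze(0))
        g2, m2, f2 = model(img2.unsqueeze(0))
    sex1, sex2 = g1.argmax(1).item(), g2.argmax(1).item()
    if sex1 != sex2:
        return False                     # S53 -> S57: genders differ
    e1 = m1 if sex1 == 0 else f1         # S54: pick the matching branch
    e2 = m2 if sex2 == 0 else f2
    sim = F.cosine_similarity(e1, e2).item()
    return sim >= sim_threshold          # S55 -> S56/S57
```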
The embodiment of the application discloses a face comparison method, and compared with the first embodiment, the embodiment further explains and optimizes the technical scheme. Specifically, the method comprises the following steps:
referring to fig. 4, a flowchart of another face comparison method according to an exemplary embodiment is shown, and as shown in fig. 4, the method includes:
s201: acquiring a training set, and performing image preprocessing on each face picture in the training set to obtain a standard face picture corresponding to each face picture;
in this embodiment, before training the mobility learning model by using the face pictures in the training set, image preprocessing is performed on each face picture to obtain a standard face picture. The specific operation of the preprocessing is not limited herein, and the step of image preprocessing the target face picture may include adjusting the size of the target face picture to a target size. For example, the sizes of the face pictures can be the same as 112 × 112. The step of image preprocessing the target face picture may also include identifying a position of human eyes in the target face picture, and correcting the target face picture according to the position of the human eyes, so that a human face in the target face picture is a front face. In a specific implementation, the human face in the target human face picture is corrected to a binocular level according to the positions of human eyes in the target human face picture, that is, the human face is corrected to be a front face.
S202: detecting the feature point positions of the face region in each face picture in the training set by using a feature point regression model, where each face picture is specifically a face picture marked with gender and age;
in this embodiment, after the training set is obtained, the feature point regression model is used to detect the feature point position of the face region in each face picture, so that the subsequent step performs the circle expanding operation according to the feature point position. The feature point regression model here may preferably be dlib 68 feature point regression model.
S203: performing region expansion on each face picture with a preset expansion strategy according to the feature point positions;
in this step, each face picture is subjected to a preset circle expanding strategy based on the feature point position acquired in the previous step. Specifically, the width of the face image may be expanded to obtain characteristics such as ears of a person, the length of the face image may be expanded to obtain characteristics such as a hairstyle, and the like, and certainly, the width and the length may be expanded at the same time. And according to the position of the expanded face, deducting the face data of the corresponding area from the original image to be used as a face picture of a subsequent training set.
A specific region-expansion method comprises the following steps: determining the inter-eye distance from the feature point positions in each face picture; moving the left boundary of each face picture leftwards by a first distance and the right boundary rightwards by the same first distance, where the first distance is the product of a first ratio and the inter-eye distance; and moving the upper boundary of each face picture upwards by a second distance and the lower boundary downwards by the same second distance, where the second distance is the product of a second ratio and the inter-eye distance.
In this embodiment, the inter-eye distance serves as the basis of the expansion operation. First, the inter-eye distance is determined from the feature point positions, and the four boundaries of the face picture are each pushed outwards by an amount proportional to it: the left and right boundaries are moved outwards by a first distance, the product of a first ratio and the inter-eye distance, and the upper and lower boundaries are moved outwards by a second distance, the product of a second ratio and the inter-eye distance. The first ratio and the second ratio are not specifically limited; for example, the first ratio may be 1/4 and the second ratio 1/2, i.e., the width is expanded left and right by 1/4 of the inter-eye distance on each side, and the length is expanded up and down by 1/2 of the inter-eye distance on each side.
As a more preferred embodiment, after region expansion has been performed on each face picture according to the feature point positions, the method further includes adjusting the size of each face picture to a target size. Because most faces after region expansion are not square, forcibly normalizing them to a square, as in the image preprocessing above, would introduce some deformation; the face pictures can therefore be normalized to a fixed size of 256×192, an aspect ratio of about 4:3.
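The whole expansion strategy of S203, with the example ratios 1/4 and 1/2 above, can be sketched in a few lines:

```python
def expand_face_box(box, left_eye, right_eye, w_ratio=0.25, h_ratio=0.5):
    """Move the left/right boundaries outwards by w_ratio * inter-eye
    distance and the top/bottom boundaries by h_ratio * that distance;
    the expanded region is then cropped and resized to 256x192."""
    x1, y1, x2, y2 = box
    eye_dist = ((right_eye[0] - left_eye[0]) ** 2 +
                (right_eye[1] - left_eye[1]) ** 2) ** 0.5
    dx, dy = w_ratio * eye_dist, h_ratio * eye_dist
    return (x1 - dx, y1 - dy, x2 + dx, y2 + dy)
```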
S204: training the MobileFaceNet learning model with the region-expanded face pictures to obtain a trained target learning model;
This step includes training the gender branch, the first feature extraction branch, and the second feature extraction branch of the MobileFaceNet learning model; the specific training process is described in the first embodiment.
S205: performing face comparison by using the trained MobileFaceNet learning model.
In this way, image preprocessing and region expansion are performed on each face picture in the training set. The region-expanded face picture contains additional information such as the hairstyle and ear studs, so more features can be extracted from it, and the learning model trained on such pictures achieves high accuracy in gender identification and feature extraction.
In the following, a face comparison system provided by an embodiment of the present application is introduced, and a face comparison system described below and a face comparison method described above may refer to each other.
Referring to fig. 5, a block diagram of a face comparison system according to an exemplary embodiment is shown, as shown in fig. 5, including:
an obtaining module 501, configured to obtain a training set; the training set comprises face pictures with gender marked as male and face pictures with gender marked as female;
a first training module 502, configured to train the gender branch of the MobileFaceNet learning model by using the training set, so as to perform gender recognition on an input face picture by using the trained gender branch;
a second training module 503, configured to train the first feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as male in the training set, so as to extract male features from an input face picture by using the trained first feature extraction branch;
a third training module 504, configured to train the second feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as female in the training set, so as to extract female features from an input face picture by using the trained second feature extraction branch;
and a comparison module 505, configured to perform face comparison by using the trained MobileFaceNet learning model, the first feature extraction branch, and the second feature extraction branch.
The face comparison system provided by the embodiment of the application compares face pictures through the trained MobileFaceNet learning model. The MobileFaceNet learning model adopts a lightweight, efficient network design and loss function design, and can address the efficiency and accuracy requirements of face recognition at the same time. Adopting the MobileFaceNet learning model improves the accuracy of gender identification, and therefore the accuracy of comparing the genders of the two face pictures. In addition, for face pictures with different gender identification results, different feature extraction branches are used to extract features, which improves the feature extraction accuracy and, in turn, the accuracy of comparing the features of the two face pictures. The face comparison system provided by the embodiment of the application therefore improves the accuracy of face picture comparison.
On the basis of the above embodiment, as a preferred implementation, the comparison module 505 includes:
the device comprises an acquisition unit, a comparison unit and a comparison unit, wherein the acquisition unit is used for acquiring a first face picture and a second face picture to be compared;
an input unit, configured to input the first face picture and the second face picture into the trained MobileFaceNet learning model, and obtain a first gender identification result corresponding to the first face picture and a second gender identification result corresponding to the second face picture according to the gender branch;
a first judging unit, configured to judge whether the first gender identification result and the second gender identification result are consistent, and if so, trigger the extraction unit;
an extraction unit, configured to obtain a first feature extraction result corresponding to the first face picture and a second feature extraction result corresponding to the second face picture according to a feature extraction branch, the feature extraction branch being the one of the first feature extraction branch and the second feature extraction branch that corresponds to the first gender identification result;
and a second judging unit, configured to judge whether the first feature extraction result and the second feature extraction result are consistent, and if so, judge that the first face picture and the second face picture contain the same face.
On the basis of the foregoing embodiment, as a preferred implementation, the MobileFaceNet learning model includes:
a basic convolution layer for performing convolution operation on the input face picture;
the gender branch, the first feature extraction branch, and the second feature extraction branch connected with the base convolutional layer.
On the basis of the above embodiment, as a preferred implementation, the gender branch includes:
a first global separable convolutional layer connected with the base convolutional layer;
a fully-connected layer connected to the first global separable convolutional layer.
On the basis of the above embodiment, as a preferred implementation, the first feature extraction branch includes:
a second global separable convolutional layer connected to the first global separable convolutional layer;
a first feature dimension reduction layer connected to the second global separable convolutional layer for performing dimension reduction processing on features output by the second global separable convolutional layer;
the second feature extraction branch comprises:
a third global separable convolutional layer connected to the first global separable convolutional layer;
and the second feature dimension reduction layer is connected with the third global separable convolutional layer and is used for carrying out dimension reduction processing on the features output by the third global separable convolutional layer.
On the basis of the above embodiment, as a preferred implementation, the first feature dimension reduction layer and the second feature dimension reduction layer are used to reduce 512-dimensional features into 128-dimensional features.
With regard to the system in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present application further provides an electronic device. Referring to fig. 6, which shows a structural diagram of an electronic device 600 provided in an embodiment of the present application, the electronic device 600 may include a processor 11 and a memory 12, and may further include one or more of a multimedia component 13, an input/output (I/O) interface 14, and a communication component 15.
The processor 11 is configured to control the overall operation of the electronic device 600 so as to complete all or part of the steps of the above face comparison method. The memory 12 is used to store various types of data to support operation of the electronic device 600, such as instructions for any application or method operating on the electronic device 600 and application-related data such as contacts, sent and received messages, pictures, audio, and video. The memory 12 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 13 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 12 or transmitted via the communication component 15. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 14 provides an interface between the processor 11 and other interface modules such as a keyboard, a mouse, or buttons, where the buttons may be virtual or physical. The communication component 15 is used for wired or wireless communication between the electronic device 600 and other devices. Wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G or 4G, or a combination of one or more of them, so the corresponding communication component 15 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic Device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-mentioned face comparison method.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; when executed by a processor, the program instructions implement the steps of the above face comparison method. For example, the computer-readable storage medium may be the above memory 12 comprising program instructions that are executable by the processor 11 of the electronic device 600 to perform the face comparison method.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments can be referred to one another. Since the system disclosed by an embodiment corresponds to the method disclosed by an embodiment, its description is relatively brief, and relevant details can be found in the description of the method. It should be noted that those skilled in the art can make several improvements and modifications to the present application without departing from its principle, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A face comparison method is characterized by comprising the following steps:
acquiring a training set; the training set comprises face pictures with gender marked as male and face pictures with gender marked as female;
training the gender branch of a MobileFaceNet learning model by using the training set, so as to perform gender identification on an input face picture by using the trained gender branch;
training a first feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as male in the training set, so as to extract male features from an input face picture by using the trained first feature extraction branch;
training a second feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as female in the training set, so as to extract female features from an input face picture by using the trained second feature extraction branch;
and performing face comparison by using the trained MobileFaceNet learning model.
2. The face comparison method according to claim 1, wherein the performing face comparison by using the trained MobileFaceNet learning model comprises:
acquiring a first face picture and a second face picture to be compared;
inputting the first face picture and the second face picture into the trained MobileFaceNet learning model, and obtaining a first gender identification result corresponding to the first face picture and a second gender identification result corresponding to the second face picture according to the gender branch;
judging whether the first gender identification result is consistent with the second gender identification result;
if so, obtaining a first feature extraction result corresponding to the first face picture and a second feature extraction result corresponding to the second face picture according to a feature extraction branch, the feature extraction branch being the one of the first feature extraction branch and the second feature extraction branch that corresponds to the first gender identification result;
if not, judging that the first face picture and the second face picture contain different faces;
judging whether the first feature extraction result is consistent with the second feature extraction result;
if so, judging that the first face picture and the second face picture contain the same face;
and if not, judging that the first face picture and the second face picture contain different faces.
3. The face comparison method according to claim 1, wherein the training of the gender branch of the MobileFaceNet learning model using the training set comprises:
training the gender branch of the MobileFaceNet learning model by using the training set, wherein a cross-entropy loss function is adopted in the training process.
4. The face comparison method according to any one of claims 1 to 3, wherein the MobileFaceNet learning model comprises:
a basic convolution layer for performing convolution operation on the input face picture;
the gender branch, the first feature extraction branch, and the second feature extraction branch connected with the base convolutional layer.
5. The face comparison method of claim 4, wherein the gender branch comprises:
a first global separable convolutional layer connected with the base convolutional layer;
a fully-connected layer connected to the first global separable convolutional layer.
6. The face comparison method of claim 5, wherein the first feature extraction branch comprises:
a second global separable convolutional layer connected to the first global separable convolutional layer;
a first feature dimension reduction layer connected to the second global separable convolutional layer for performing dimension reduction processing on features output by the second global separable convolutional layer;
the second feature extraction branch comprises:
a third global separable convolutional layer connected to the first global separable convolutional layer;
and the second feature dimension reduction layer is connected with the third global separable convolutional layer and is used for carrying out dimension reduction processing on the features output by the third global separable convolutional layer.
7. The face comparison method of claim 6, wherein the first feature dimension reduction layer and the second feature dimension reduction layer are used for reducing 512-dimensional features into 128-dimensional features.
8. A face comparison system, comprising:
the acquisition module is used for acquiring a training set; the training set comprises face pictures with gender marked as male and face pictures with gender marked as female;
the first training module is used for training the gender branch of the MobileFaceNet learning model by using the training set, so as to perform gender recognition on an input face picture by using the trained gender branch;
a second training module, configured to train a first feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as male in the training set, so as to extract male features from an input face picture by using the trained first feature extraction branch;
a third training module, configured to train a second feature extraction branch in the MobileFaceNet learning model by using the face pictures with gender marked as female in the training set, so as to extract female features from an input face picture by using the trained second feature extraction branch;
and a comparison module, configured to perform face comparison by using the trained MobileFaceNet learning model.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the face comparison method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the face comparison method according to any one of claims 1 to 7.
CN201910329964.3A 2019-04-23 2019-04-23 Face comparison method and system, electronic equipment and storage medium Expired - Fee Related CN110070047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910329964.3A CN110070047B (en) 2019-04-23 2019-04-23 Face comparison method and system, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910329964.3A CN110070047B (en) 2019-04-23 2019-04-23 Face comparison method and system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110070047A CN110070047A (en) 2019-07-30
CN110070047B true CN110070047B (en) 2021-03-26

Family

ID=67368525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910329964.3A Expired - Fee Related CN110070047B (en) 2019-04-23 2019-04-23 Face comparison method and system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110070047B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325173A (en) * 2020-02-28 2020-06-23 腾讯科技(深圳)有限公司 Hair type identification method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069400A (en) * 2015-07-16 2015-11-18 北京工业大学 Face image gender recognition system based on stack type sparse self-coding
CN109086660A (en) * 2018-06-14 2018-12-25 深圳市博威创盛科技有限公司 Training method, equipment and the storage medium of multi-task learning depth network

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013058060A (en) * 2011-09-08 2013-03-28 Dainippon Printing Co Ltd Person attribute estimation device, person attribute estimation method and program
CN105320948A (en) * 2015-11-19 2016-02-10 北京文安科技发展有限公司 Image based gender identification method, apparatus and system
KR102587254B1 (en) * 2016-10-31 2023-10-13 한국전자통신연구원 Method and apparatus for key generation based on face recognition using cnn and rnn
CN106503669B (en) * 2016-11-02 2019-12-10 重庆中科云丛科技有限公司 Training and recognition method and system based on multitask deep learning network
CN107180234A (en) * 2017-06-01 2017-09-19 四川新网银行股份有限公司 The credit risk forecast method extracted based on expression recognition and face characteristic
CN107844784A (en) * 2017-12-08 2018-03-27 广东美的智能机器人有限公司 Face identification method, device, computer equipment and readable storage medium storing program for executing
CN109165584A (en) * 2018-08-09 2019-01-08 深圳先进技术研究院 A kind of sex character selection method and device for facial image
CN109117800A (en) * 2018-08-20 2019-01-01 钟祥博谦信息科技有限公司 Face gender identification method and system based on convolutional neural networks
CN109460733A (en) * 2018-11-08 2019-03-12 北京智慧眼科技股份有限公司 Recognition of face in-vivo detection method and system based on single camera, storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069400A (en) * 2015-07-16 2015-11-18 北京工业大学 Face image gender recognition system based on stack type sparse self-coding
CN109086660A (en) * 2018-06-14 2018-12-25 深圳市博威创盛科技有限公司 Training method, equipment and the storage medium of multi-task learning depth network

Also Published As

Publication number Publication date
CN110070047A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
US10235603B2 (en) Method, device and computer-readable medium for sensitive picture recognition
CN108090450B (en) Face recognition method and device
US10650259B2 (en) Human face recognition method and recognition system based on lip movement information and voice information
CN110688951B (en) Image processing method and device, electronic equipment and storage medium
US10616475B2 (en) Photo-taking prompting method and apparatus, an apparatus and non-volatile computer storage medium
CN108681743B (en) Image object recognition method and device and storage medium
CN107832700A (en) A kind of face identification method and system
US20210027484A1 (en) Method and device for joint point detection
CN111415358B (en) Image segmentation method, device, electronic equipment and storage medium
CN111179419B (en) Three-dimensional key point prediction and deep learning model training method, device and equipment
CN108027884B (en) Method, storage medium, server and equipment for monitoring object
CN107506696A (en) Anti-fake processing method and related product
CN110033332A (en) A kind of face identification method, system and electronic equipment and storage medium
CN110569731A (en) face recognition method and device and electronic equipment
US20210342632A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN110765903A (en) Pedestrian re-identification method and device and storage medium
CN110070047B (en) Face comparison method and system, electronic equipment and storage medium
CN106485246B (en) Character identifying method and device
CN110827301A (en) Method and apparatus for processing image
CN110414294B (en) Pedestrian re-identification method and device
CN107992894B (en) Image recognition method, image recognition device and computer-readable storage medium
CN112136140A (en) Method and apparatus for image recognition
CN108288023B (en) Face recognition method and device
CN109685079B (en) Method and device for generating characteristic image category information
WO2020244076A1 (en) Face recognition method and apparatus, and electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210326